CN116437203A - Automatic focusing method and device, storage medium and monitoring equipment - Google Patents

Automatic focusing method and device, storage medium and monitoring equipment

Info

Publication number: CN116437203A
Application number: CN202111660291.3A
Authority: CN (China)
Prior art keywords: current, object distance, scene, focusing, TOF
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Inventor: 吕乾坤
Current assignee: Zhejiang Uniview Technologies Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: Zhejiang Uniview Technologies Co Ltd
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN202111660291.3A
Publication of CN116437203A


Landscapes

  • Automatic Focus Adjustment (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The embodiment of the invention discloses an automatic focusing method and device, a storage medium, and a monitoring device. The method includes: acquiring TOF object distance statistics of the current scene captured by the monitoring device; determining, based on the TOF object distance statistics, focusing weights for the image of the current scene captured by the monitoring device, where a focusing weight is the weight of the sharpness evaluation value of a region of that image; determining, based on the focusing weights, sharpness evaluation values of images of the current scene captured with the focusing lens of the monitoring device at each of a number of positions, and determining the in-focus position of the focusing lens from those positions based on the sharpness evaluation values; and automatically focusing the focusing lens to the in-focus position. With the technical scheme provided by the embodiment of the invention, the focusing weights of the captured image can be determined from the TOF object distance statistics, which improves the sensitivity of the sharpness evaluation value and can effectively improve both the focusing effect and the focusing efficiency.

Description

Automatic focusing method and device, storage medium and monitoring equipment
Technical Field
The embodiment of the invention relates to the technical field of monitoring, in particular to an automatic focusing method, an automatic focusing device, a storage medium and monitoring equipment.
Background
With the rapid development of intelligent computing and intelligent recognition technologies, high-definition, intelligent monitoring devices are becoming widespread. At present, dome cameras are commonly used as the monitoring devices for intelligent monitoring. Because its focal length is variable, a dome camera can cover different application scenes, so it is widely used in industries such as public security and traffic policing.
Autofocus is one of the most basic and important functions of a dome camera, and its focusing effect is a key factor affecting the level of intelligence of the monitoring. However, an ordinary dome camera focuses slowly and with poor results, and cannot satisfy varied monitoring requirements well. Therefore, how to improve focusing efficiency and focusing effect is an urgent problem to be solved.
Disclosure of Invention
The embodiment of the invention provides an automatic focusing method, an automatic focusing device, a storage medium and monitoring equipment, which can determine the focusing weight of a shot image according to TOF object distance statistical information and can effectively improve the focusing effect and the focusing efficiency.
In a first aspect, an embodiment of the present invention provides an autofocus method, including:
acquiring TOF object distance statistics of the current scene captured by the monitoring device;
determining, based on the TOF object distance statistics, focusing weights for the image of the current scene captured by the monitoring device, where a focusing weight is the weight of the sharpness evaluation value of a region of that image;
determining, based on the focusing weights, sharpness evaluation values of images of the current scene captured with the focusing lens of the monitoring device at each of a number of positions, and determining the in-focus position of the focusing lens from those positions based on the sharpness evaluation values; and
automatically focusing the focusing lens to the in-focus position.
In a second aspect, an embodiment of the present invention further provides an autofocus apparatus, including:
an object distance statistics acquisition module, configured to acquire TOF object distance statistics of the current scene captured by the monitoring device;
a focusing weight determination module, configured to determine, based on the TOF object distance statistics, focusing weights for the image of the current scene captured by the monitoring device, where a focusing weight is the weight of the sharpness evaluation value of a region of that image;
an in-focus position determination module, configured to determine, based on the focusing weights, sharpness evaluation values of images of the current scene captured with the focusing lens of the monitoring device at each of a number of positions, and to determine, based on the sharpness evaluation values, the in-focus position of the focusing lens from those positions; and
an automatic focusing module, configured to automatically focus the focusing lens to the in-focus position.
In a third aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an autofocus method as provided by embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a monitoring device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the autofocus method provided by the embodiments of the present invention.
The embodiment of the invention provides an automatic focusing scheme: acquiring TOF object distance statistics of the current scene captured by the monitoring device; determining, based on the TOF object distance statistics, focusing weights for the image of the current scene captured by the monitoring device, where a focusing weight is the weight of the sharpness evaluation value of a region of that image; determining, based on the focusing weights, sharpness evaluation values of images of the current scene captured with the focusing lens of the monitoring device at each of a number of positions, and determining the in-focus position of the focusing lens from those positions based on the sharpness evaluation values; and automatically focusing the focusing lens to the in-focus position. With this technical scheme, the focusing weights of the captured image can be determined from the TOF object distance statistics, which improves the sensitivity of the sharpness evaluation value, allows the in-focus position of the focusing lens in the monitoring device to be determined accurately from the sharpness evaluation values at the respective lens positions, and can effectively improve both the focusing effect and the focusing efficiency.
Drawings
FIG. 1 is a flowchart of an auto-focusing method according to an embodiment of the present invention;
FIG. 2 is a graph of sharpness evaluation values according to an embodiment of the present invention;
FIG. 3A is a graph of sharpness evaluation values based on global focusing weights according to an embodiment of the present invention;
FIG. 3B is a graph of sharpness evaluation values based on TOF object distance focusing weights according to an embodiment of the present invention;
FIG. 4 is a graph of sharpness evaluation values based on different frequency bands according to an embodiment of the present invention;
FIG. 5 is a flowchart of an auto-focusing method according to an embodiment of the present invention;
FIG. 6A is a graph of sharpness evaluation values based on global focusing weights in a motion scene according to an embodiment of the present invention;
FIG. 6B is a graph of sharpness evaluation values based on TOF object distance focusing weights in a motion scene according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an auto-focusing apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a monitoring device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the invention. It should be understood that the drawings and embodiments of the invention are for illustration purposes only and are not intended to limit the scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present invention may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like herein are merely used for distinguishing between different devices, modules, or units and not for limiting the order or interdependence of the functions performed by such devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will appreciate that they should be construed as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the devices in the embodiments of the present invention are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Fig. 1 is a flowchart of an autofocus method according to an embodiment of the present invention, where the autofocus method is applicable to a case where a focusing lens in a monitoring device performs autofocus, and the method may be performed by an autofocus device, which may be composed of hardware and/or software, and may be generally integrated in the monitoring device. As shown in fig. 1, the method specifically includes the following steps:
step 110, acquiring TOF object distance statistical information of the current scene shot by the monitoring equipment.
The monitoring device may include cameras of different forms, such as dome (spherical) cameras and bullet (gun-shaped) cameras. The current scene may be understood as the scene monitored by the monitoring device at the current moment. TOF (Time of Flight) refers to time of flight; TOF object distance statistics can be acquired with a TOF sensor. The object distance may refer to the distance from the current scene to the optical center of the focusing lens in the monitoring device, where the focusing lens is used to focus on the current scene. TOF object distance statistics may refer to statistics related to object distance acquired by the TOF sensor. Specifically, the modulated near-infrared light emitted by the TOF sensor is reflected when it meets the target object; the TOF sensor calculates the time of flight, or the phase difference between emission and reflection, and from it the distance (i.e., the object distance) between the monitoring device and the target object.
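As a quick illustration of the TOF principle just described, the round-trip flight time maps to object distance via the speed of light. This minimal sketch uses names of our own, not the patent's:

```python
# Object distance from TOF round-trip flight time: the emitted light
# travels to the target and back, so
#   distance = (speed of light x flight time) / 2.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(flight_time_s: float) -> float:
    """Distance in meters for a measured round-trip flight time."""
    return C * flight_time_s / 2.0
```

For example, a round-trip flight time of about 100 ns corresponds to an object roughly 15 m away.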
It should be noted that the TOF sensor and the image sensor may be configured in the monitoring device. The TOF sensor can be used for acquiring the statistical information of the object distance between the monitoring equipment and the shot object, and the image sensor can be used for acquiring the monitoring image shot by the monitoring equipment. Further, the TOF object distance statistics may include object distance information of each pixel in a monitored image captured by the monitoring device. It can be appreciated that, since the current scene is a three-dimensional stereoscopic scene, there is a depth difference in the monitored image of the current scene, so that the object distances corresponding to the pixels in the monitored image may be different. In addition, if the current scene is a static scene, the focusing lens position is not changed, and at the moment, the object distances corresponding to the pixels in the monitored image of the current scene at different moments are fixed.
In the embodiment of the invention, TOF object distance statistical information can be reflected in the form of TOF histograms. Wherein the TOF histogram statistics may provide TOF object distance statistics of different orders. For example, taking 1024 orders as an example, the correspondence between the TOF object distance statistics and the orders is shown in table 1:
table 1 correspondence table of TOF object distance statistics and orders
Order   Statistical information
0       pixel object distance <= 1 m
1       1 m < pixel object distance <= 2 m
2       2 m < pixel object distance <= 3 m
3       3 m < pixel object distance <= 4 m
......  ......
1022    1022 m < pixel object distance <= 1023 m
1023    pixel object distance > 1023 m
Here m denotes the distance unit meter, and the order range is set to 0 to 1023. It should be noted that the orders and their corresponding pixel object distance ranges may be divided differently based on the lens focal length and the focusing curve characteristics; Table 1 is only an example. The histogram segmentation is not limited and may be subdivided differentially based on the actual object distance and the focal segment of the focusing lens.
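The order mapping of Table 1 can be sketched as follows. This assumes the uniform 1 m bins of the example (as noted above, real devices may subdivide differently), and the function name is illustrative:

```python
import numpy as np

def tof_histogram(object_distances_m, orders=1024):
    """Build a TOF object-distance histogram following Table 1.

    object_distances_m: per-pixel object distances in meters.
    Order k counts pixels with k m < distance <= (k + 1) m; order 0
    also takes distances <= 1 m, and the last order collects every
    pixel beyond (orders - 1) meters.
    """
    d = np.asarray(object_distances_m, dtype=float).ravel()
    # ceil(d) - 1 maps the interval (k, k+1] to order k
    idx = np.clip(np.ceil(d).astype(int) - 1, 0, orders - 1)
    return np.bincount(idx, minlength=orders)
```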
Step 120, determining focusing weight of an image of the current scene shot by the monitoring equipment based on TOF object distance statistical information; the focusing weight is the weight of the definition evaluation value of each region of the image of the current scene shot by the monitoring equipment.
In the embodiment of the invention, the focus weight can be understood as the weight of the sharpness evaluation value of each region of the image of the current scene shot by the monitoring device. The sharpness evaluation value may be an evaluation index for describing sharpness of an image, may be used for measuring quality of the image, and may be represented in a numerical form. Specifically, if the sharpness evaluation value is larger, it may indicate that the sharpness of the image is higher, and thus it may indicate that the image quality is better. For a general focusing scene, as the focusing lens approaches or moves away from the actual focusing point, the sharpness of the image increases or decreases monotonically accordingly. Fig. 2 is a graph of sharpness evaluation values according to an embodiment of the present invention. Wherein 1 to 9 are respectively different focusing lens positions. As shown in fig. 2, the sharpness evaluation value curve of the scene exhibits a good unimodal property, and the evaluation function values on both sides of the peak monotonically decrease.
Currently, in the related art, a fixed pattern is often used to determine the focus weights of the respective areas. Specifically, the fixed pattern may include a global focus weight, a central focus weight, and a designated area focus weight, and when the monitoring subject is in the central area, the weight value of the corresponding area increases. Illustratively, an 8×8 global focus weight may be set to:
{
1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,
1,1,2,2,2,2,1,1,
1,1,2,8,8,2,1,1,
1,1,2,8,8,2,1,1,
1,1,2,2,2,2,1,1,
1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1
};
the 8 x 8 central focus weight may be set to:
{
0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,
0,0,2,2,2,2,0,0,
0,0,2,8,8,2,0,0,
0,0,2,8,8,2,0,0,
0,0,2,2,2,2,0,0,
0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0
};
the 8×8 specified area focus weight may be set as:
{
0,0,0,0,0,0,0,0,
0,1,1,1,0,0,0,0,
0,1,1,1,0,0,0,0,
0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0
}。
However, determining the focusing weights of the regions through such fixed patterns cannot account for the layout of the current scene. An unreasonable focusing weight distribution may make the unimodality of the sharpness evaluation curve less pronounced, or produce double peaks across multiple object distances, which can ultimately lead to an excessively long focusing time or even defocus, degrading both the focusing efficiency and the focusing effect of the focusing lens.
In the embodiment of the invention, the image can be divided into regions according to the object distance of each pixel. Specifically, pixels having the same object distance in an image may be grouped into one region. For example, if the TOF object distance statistics show that the image contains pixel object distances of 1 meter, 2 meters, and 3 meters, the pixels with an object distance of 1 meter may be classified as region one, those with an object distance of 2 meters as region two, and those with an object distance of 3 meters as region three. It should be noted that region one, region two, and region three are parallel designations used only to distinguish the pixel regions corresponding to different object distances; no order is implied.
In this embodiment, the current scene image captured by the monitoring device may be divided into a plurality of regions according to the TOF object distance statistics, and the focusing weight of each region may then be determined according to the size of that region (i.e., how many pixels it contains): the larger the region, the larger its focusing weight, and conversely, the smaller the region, the smaller its focusing weight. Optionally, determining, based on the TOF object distance statistics, the focusing weights of the image of the current scene captured by the monitoring device includes: determining, based on the TOF object distance statistics, a subject object distance and at least one non-subject object distance of the image of the current scene captured by the monitoring device, where the subject object distance is the object distance with the highest pixel proportion in the TOF object distance statistics; acquiring a first focusing weight of the region corresponding to the subject object distance in the image of the current scene captured by the monitoring device; and, for each non-subject object distance, determining, based on the subject object distance, the first focusing weight, and the current non-subject object distance, a second focusing weight of the region corresponding to the current non-subject object distance in that image. It should be noted that the embodiment of the present invention does not limit the specific manner of determining the focusing weights of the image of the current scene captured by the monitoring device based on the TOF object distance statistics.
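Under the optional scheme just described, a minimal sketch might look like this: the subject object distance is the histogram order with the most pixels, its region keeps the first focusing weight, and every other region's weight shrinks with its deviation from the subject order. The shrink rule (inverse of 1 + deviation) and the default first weight are illustrative assumptions, not the patent's formula:

```python
import numpy as np

def focus_weights_from_tof(order_map, first_weight=8.0):
    """Per-pixel focusing weights from TOF object-distance statistics.

    order_map: 2-D integer array giving each pixel's object-distance
    order (histogram bin). The subject object distance is the order
    with the largest pixel count; its region keeps first_weight, and
    each non-subject region gets a smaller weight that decreases as
    its order deviates further from the subject order.
    """
    order_map = np.asarray(order_map)
    counts = np.bincount(order_map.ravel())
    subject = int(np.argmax(counts))         # highest pixel-count order
    deviation = np.abs(order_map - subject)  # 0 for the subject region
    return first_weight / (1.0 + deviation)  # subject keeps first_weight
```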
Fig. 3A is a graph of sharpness evaluation values based on global focusing weights according to an embodiment of the present invention, and Fig. 3B is a graph of sharpness evaluation values based on TOF object distance focusing weights according to an embodiment of the present invention. As shown in Figs. 3A and 3B, the unimodality and sensitivity of the sharpness evaluation curve determined with the TOF object distance focusing weights are significantly better than those of the curve determined with the global focusing weights. Unimodality refers to the curve having only one peak. Sensitivity measures how pronounced that peak is: at higher sensitivity the curve is steep and narrow yet smooth, i.e., the peak is distinct and interference is small, whereas a curve exhibiting multiple peaks has reduced sensitivity. Both properties strongly affect focusing time and focusing accuracy: the better the unimodality of the sharpness evaluation curve, the higher the focusing accuracy, and the higher its sensitivity, the shorter the focusing time.
In the embodiment of the invention, to determine the focusing weights of the image of the current scene captured by the monitoring device based on the TOF object distance statistics, the TOF object distance statistics of the current scene can first be acquired, and the focusing weight of each region in the image can then be determined from them, which effectively overcomes the drawbacks of the fixed patterns.
Step 130, determining, based on the focusing weights, sharpness evaluation values of images of the current scene captured with the focusing lens of the monitoring device at each of a number of positions, and determining, based on the sharpness evaluation values, the in-focus position of the focusing lens from those positions.
The in-focus position refers to the lens position at which the image is sharpest, i.e., the best focus position of the focusing lens. In the embodiment of the invention, the candidate positions can be obtained by moving the focusing lens. Specifically, the focusing lens can be moved in set equal or unequal steps, each movement yielding one position. The step size is not limited and can be set according to the actual object distance and the focal length of the focusing lens.
In the embodiment of the invention, images of the current scene can be captured at each position of the focusing lens, and the sharpness evaluation value of the image at each position can be determined according to the focusing weights. For example, the image captured at each position may be divided into a plurality of regions, the sharpness evaluation value of each region determined according to the focusing weights, and the regional values then summed to obtain the sharpness evaluation value of the whole image. There are various methods for determining the sharpness evaluation value of each region of the image. For example, it may be determined by a high-frequency component method, or from the gray-level distribution, edge sharpness, or image details. Specifically, the high-frequency component method passes the image through a high-pass filter to obtain its high-frequency component, on the principle that the sharper the image, the larger the amplitude of its high-frequency part. Fig. 4 is a graph of sharpness evaluation values for different frequency bands according to an embodiment of the present invention, where low-pass filtering corresponds to low-frequency information and high-pass filtering corresponds to high-frequency information. As shown in Fig. 4, the half-width of the sharpness evaluation curve for the high band is significantly smaller than that for the low band, so the focusing time and focusing accuracy of the high band are significantly better than those of the low band. The half-width can be understood as the width of the curve at half of the maximum sharpness evaluation value.
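The high-frequency-component principle can be demonstrated with a simple squared-difference (high-pass) filter; the specific filter here is an illustrative assumption, not the patent's filter:

```python
import numpy as np

def highfreq_energy(gray):
    """Squared response of a [-1, 1] difference (high-pass) filter.

    A sharper image concentrates more energy in high frequencies,
    so this value grows as the scene comes into focus.
    """
    g = np.asarray(gray, dtype=float)
    return float((np.diff(g, axis=1) ** 2).sum() +
                 (np.diff(g, axis=0) ** 2).sum())
```

A hard black-to-white edge scores higher than the same brightness range spread across a smooth ramp, matching the behavior of the high-band curve in Fig. 4.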
In addition, a sharpness evaluation formula may be used to calculate the sharpness evaluation value of the image. Specifically, the sharpness evaluation value of each region in the image may be calculated with the following formula:

FV_n = x × H_n + (1 − x) × V_n

where FV_n denotes the sharpness evaluation value of the nth region, H_n denotes the horizontal filter statistic of the nth region, V_n denotes the vertical filter statistic of the nth region, and x denotes the weight set for the horizontal filter statistic. Furthermore, the sharpness evaluation value of the whole image can be obtained by weighting the sharpness evaluation values of the regions:

FV = Σ (n = 1 to BLOCKS) weight_n × FV_n

where FV denotes the sharpness evaluation value of the whole image, BLOCKS denotes the total number of regions into which the image is divided (typically configured as 15 × 17 or 8 × 8), and weight_n denotes the focusing weight of the nth region. It should be noted that the embodiment of the present invention does not limit the manner of determining the sharpness evaluation value.
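The two formulas above combine per-region filter statistics into a whole-image score; a direct transcription follows (the parameter names and the example value x = 0.6 are our assumptions):

```python
def region_fv(h_stat, v_stat, x=0.6):
    # FV_n = x * H_n + (1 - x) * V_n, with x the horizontal-statistic weight
    return x * h_stat + (1.0 - x) * v_stat

def image_fv(h_stats, v_stats, weights, x=0.6):
    # FV = sum over all BLOCKS regions of weight_n * FV_n
    return sum(w * region_fv(h, v, x)
               for h, v, w in zip(h_stats, v_stats, weights))
```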
In the embodiment of the present invention, after obtaining the sharpness evaluation values of the captured images of the focus lens at the respective positions, the in-focus position of the focus lens may be determined from the respective positions according to the respective sharpness evaluation values. The focus position is understood to be the position at which the sharpness evaluation value reaches the maximum. The maximum sharpness evaluation value can be found out by comparing the sharpness evaluation values, and the corresponding position is determined as the in-focus position of the focusing lens.
Step 140, automatically focusing the focusing lens to the in-focus position.
In the embodiment of the present invention, after determining the in-focus position of the focus lens from among the respective positions based on the sharpness evaluation values of the focus lens at the respective positions, the focus lens may be moved to the in-focus position to complete the auto-focusing of the focus lens.
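Steps 130 and 140 together amount to scanning the candidate lens positions and moving to the one with the largest sharpness evaluation value. This sketch abstracts lens movement and image scoring behind a callback, which is our own interface assumption:

```python
def find_focus_position(positions, evaluate_fv):
    """Return the in-focus position: the candidate position whose
    captured image has the largest sharpness evaluation value.

    evaluate_fv(pos) stands in for moving the focusing lens to pos,
    capturing an image, and scoring it with the weighted FV.
    """
    return max(positions, key=evaluate_fv)
```

With a unimodal sharpness curve like the one in Fig. 2, this simply lands on the single peak.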
The embodiment of the invention provides an automatic focusing method: acquiring TOF object distance statistics of the current scene captured by the monitoring device; determining, based on the TOF object distance statistics, focusing weights for the image of the current scene captured by the monitoring device, where a focusing weight is the weight of the sharpness evaluation value of a region of that image; determining, based on the focusing weights, sharpness evaluation values of images of the current scene captured with the focusing lens of the monitoring device at each of a number of positions, and determining the in-focus position of the focusing lens from those positions based on the sharpness evaluation values; and automatically focusing the focusing lens to the in-focus position. With this technical scheme, the focusing weights of the captured image can be determined from the TOF object distance statistics, which improves the sensitivity of the sharpness evaluation value, allows the in-focus position of the focusing lens in the monitoring device to be determined accurately from the sharpness evaluation values at the respective lens positions, and can effectively improve both the focusing effect and the focusing efficiency.
Fig. 5 is a flowchart of an auto-focusing method according to an embodiment of the present invention, as shown in fig. 5, the method specifically includes the following steps:
step 510, acquiring TOF object distance statistical information of the current scene shot by the monitoring equipment.
Step 520, determining a subject object distance and at least one non-subject object distance of an image of a current scene captured by a monitoring device based on the TOF object distance statistics; the main object distance is the object distance with the highest pixel point occupation ratio in TOF object distance statistical information.
The subject object distance may be understood as the object distance with the highest pixel point occupation ratio in the TOF object distance statistics, and the non-subject object distance may refer to the object distances of pixels except the subject object distance in the TOF object distance statistics. It is understood that an image may include a subject object distance and at least one non-subject object distance.
In the embodiment of the invention, the subject object distance and at least one non-subject object distance of the image of the current scene captured by the monitoring device can be determined from the TOF object distance statistics. Specifically, the object distances of all pixels can be tallied from the TOF object distance statistics, accumulating pixels with the same object distance to obtain the pixel count for each object distance. The object distance with the largest pixel count is then determined as the subject object distance, and the remaining object distances as non-subject object distances. It can be understood that the more pixels an object distance has, the higher the proportion of the whole image occupied by pixels at that object distance.
In step 530, a first focus weight of a corresponding region of the subject object distance in an image of the current scene captured by the monitoring device is obtained.
The first focusing weight may refer to the focusing weight of the pixels in the region corresponding to the subject object distance in the image of the current scene captured by the monitoring device. In the embodiment of the invention, the first focusing weight can be set manually according to the actual application requirement, or obtained from a functional relationship between the subject object distance and the focusing weight. It should be noted that the embodiment of the present invention does not limit the method of acquiring the first focusing weight.
Step 540, determining a second focusing weight of the region corresponding to the current non-subject object distance in the image of the current scene captured by the monitoring device, based on the subject object distance, the first focusing weight, and the current non-subject object distance; wherein the first focusing weight is greater than the second focusing weight.
The second focusing weight may refer to the focusing weight of the pixels in the region corresponding to a non-subject object distance in the image of the current scene shot by the monitoring device. In the embodiment of the invention, for each non-subject object distance, the second focusing weight of the region corresponding to the current non-subject object distance in the image of the current scene shot by the monitoring device can be determined according to the subject object distance, the first focusing weight, and the current non-subject object distance. For example, the second focusing weight can be obtained from a proportional relationship between the non-subject object distance and the focusing weight, from a second focusing weight formula, or from a second focusing weight determination model. For example, the subject object distance, the first focusing weight, and the current non-subject object distance may be input into a pre-established second focusing weight determination model, with the resulting model output being the second focusing weight. It should be noted that, in the embodiment of the present invention, the first focusing weight is greater than the second focusing weight, but the method for obtaining the second focusing weight is not limited in any way.
Optionally, determining the second focusing weight of the region corresponding to the current non-subject object distance in the image of the current scene captured by the monitoring device based on the subject object distance, the first focusing weight, and the current non-subject object distance includes: determining a distance deviation degree of the current non-subject object distance relative to the subject object distance based on the subject object distance and the current non-subject object distance; and determining, based on the first focusing weight and the distance deviation degree, the second focusing weight of the region corresponding to the current non-subject object distance in the image of the current scene shot by the monitoring device.
The distance deviation degree may refer to the ratio of the difference between the current non-subject object distance and the subject object distance to the subject object distance, and may be used to represent how far the current non-subject object distance deviates from the subject object distance. It will be appreciated that the greater the distance deviation degree, the more the current non-subject object distance deviates from the subject object distance. In the embodiment of the present invention, the second focusing weight may be determined from the product of the first focusing weight and the distance deviation degree, or from a functional relationship among the first focusing weight, the second focusing weight, and the distance deviation degree. For example, the second focusing weight of the region corresponding to the current non-subject object distance in the image of the current scene taken by the monitoring device may be calculated according to the following formula: W_i = W * (1 - Abs(D - D_i) / D); wherein W_i represents the second focusing weight of the region corresponding to the current non-subject object distance in the image of the current scene taken by the monitoring device, W represents the first focusing weight of the subject object distance in the image of the current scene taken by the monitoring device, D represents the subject object distance, and D_i represents the current non-subject object distance.
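The formula W_i = W * (1 - Abs(D - D_i) / D) can be sketched directly (a minimal Python illustration; the function name and the sample distances are assumptions for demonstration):

```python
def second_focus_weight(first_weight, subject_distance, non_subject_distance):
    """W_i = W * (1 - |D - D_i| / D): the further a region's object
    distance D_i deviates from the subject object distance D, the
    smaller its focusing weight becomes."""
    deviation = abs(subject_distance - non_subject_distance) / subject_distance
    return first_weight * (1 - deviation)

# A region at 4 m in a scene whose subject sits at 5 m:
w = second_focus_weight(1.0, 5.0, 4.0)   # 1.0 * (1 - 1/5) = 0.8
```

Note that the weight drops linearly with the relative deviation, and a region at the subject object distance itself keeps the full first focusing weight.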
With this arrangement, the second focusing weight of the region corresponding to the current non-subject object distance in the image of the current scene shot by the monitoring device can be determined rapidly from the first focusing weight and the distance deviation degree of the current non-subject object distance relative to the subject object distance.
In step 550, sharpness evaluation values of images of the current scene are determined for the focus lens of the monitoring device at each position based on the first focusing weight and the second focusing weight, and the in-focus position of the focus lens is determined from those positions based on the sharpness evaluation values.
Step 560, automatically focusing the focusing lens to the in-focus position.
The embodiment of the invention provides an automatic focusing method. Based on the TOF object distance statistical information, the subject object distance and at least one non-subject object distance of the image of the current scene shot by the monitoring device are determined, the subject object distance being the object distance with the largest pixel share in the TOF object distance statistical information. A first focusing weight of the region corresponding to the subject object distance in that image is then obtained, and for each non-subject object distance, a second focusing weight of the region corresponding to the current non-subject object distance is determined based on the subject object distance, the first focusing weight, and the current non-subject object distance. In this way, the focusing weight of each non-subject object distance can be derived from the focusing weight of the subject object distance.
In some embodiments, determining focus positions of a focus lens of a monitoring device at respective positions based on focus weights, capturing sharpness evaluation values of an image of a current scene, and determining in-focus positions of the focus lens from the respective positions based on the respective sharpness evaluation values, includes: based on a hill climbing type searching algorithm, moving a focusing lens of the monitoring equipment to each position, shooting an image of a current scene when the focusing lens of the monitoring equipment is positioned at each position, and determining a definition evaluation value corresponding to the image based on a focusing weight until the definition evaluation value is reduced for the first time; and taking the corresponding position when the definition evaluation value is maximum as the focusing position of the focusing lens.
The hill-climbing search algorithm is a local search method that uses feedback from previous evaluations to guide the next step. Specifically, starting from the current node, its value is compared with the values of the neighboring nodes. If the current node's value is the largest, the current node is returned as the maximum; otherwise, the neighbor with the largest value replaces the current node, and the process repeats until the highest point is reached.
In the embodiment of the invention, the focus lens of the monitoring device can be moved through the positions, an image of the current scene is shot at each position, and the sharpness evaluation value of that image is determined based on the focusing weight. Meanwhile, the hill-climbing search proceeds in a preset search step, checking whether the sharpness evaluation value at the current position is larger than that at the previous position. If so, the sharpness evaluation value of the previous position is replaced with that of the current position, the focus lens keeps moving, and the shoot-evaluate-compare cycle repeats; otherwise the sharpness evaluation value has started to decline, i.e. it declines for the first time, the search stops, and the position with the maximum sharpness evaluation value is taken as the in-focus position of the focus lens. Optionally, after stopping the search, a second pass with a smaller step may be performed between the position of the maximum sharpness evaluation value and the position of the first decline, again taking the position of the maximum sharpness evaluation value as the in-focus position, so that the in-focus position is determined more accurately.
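The single-pass search described above can be sketched as follows (a minimal Python illustration; the optional finer second pass is omitted, and the toy sharpness curve is an assumption for demonstration):

```python
def hill_climb_focus(positions, sharpness_at):
    """Step the focus lens through successive positions, computing the
    weighted sharpness evaluation value at each, and stop at the first
    decline; the best position seen so far is the in-focus position."""
    best_pos, best_score = None, float("-inf")
    for pos in positions:
        score = sharpness_at(pos)
        if score < best_score:      # first decline: stop searching
            break
        best_pos, best_score = pos, score
    return best_pos, best_score

# Toy sharpness curve peaking at lens position 3:
curve = {0: 10, 1: 30, 2: 55, 3: 70, 4: 60, 5: 20}
pos, score = hill_climb_focus(sorted(curve), curve.__getitem__)
# pos == 3, score == 70; positions 4 and 5 are never evaluated past the decline
```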
With this arrangement, the hill-climbing search algorithm can quickly find the maximum sharpness evaluation value using the preset search step and take the corresponding position as the in-focus position of the focus lens, thereby locating the in-focus position rapidly.
In some embodiments, while determining the sharpness evaluation values of images of the current scene for the focus lens of the monitoring device at the respective positions based on the focusing weight, the process further includes: for each position, when the focus lens is at the current position, acquiring the current TOF object distance statistical information corresponding to the current scene shot by the monitoring device; comparing the current TOF object distance statistical information with the previous TOF object distance statistical information, and judging from the difference in the object distance statistics whether the current shooting scene is a motion scene, the previous TOF object distance statistical information being that of the scene shot by the monitoring device when the focus lens was at the previous position; and, when the current shooting scene is determined to be a motion scene, adjusting the focusing weight of the current image shot with the focus lens of the monitoring device at the current position.
In the embodiment of the invention, whether the current shooting scene is a motion scene can be judged from the comparison of the current TOF object distance statistical information with the previous TOF object distance statistical information. Specifically, the judgment may use the difference in object distance of the pixel at the same pixel position between the current and previous TOF object distance statistics moments, or the proportion of pixels whose object distance changed between those moments. The current TOF object distance statistics moment is when the current TOF object distance statistical information is acquired, and the previous statistics moment is when the previous statistical information was acquired. The previous TOF object distance statistical information may be understood as that of the scene shot by the monitoring device when the focus lens was at the previous position. For example, for pixels at the same position, the difference in object distance between the current and previous statistics moments can be calculated and compared with a preset object distance threshold. If the difference exceeds the threshold, the current scene can be judged to be a motion scene; otherwise, a non-motion scene.
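The per-pixel comparison above can be sketched as follows (a minimal Python illustration; the function name, the flattened distance maps, and the threshold values are assumptions for demonstration):

```python
def is_motion_scene(curr_distances, prev_distances,
                    distance_threshold, ratio_threshold):
    """Compare the current and previous per-pixel TOF object distance
    maps (flattened to lists) and flag the scene as a motion scene when
    the proportion of pixels whose distance changed by more than the
    threshold exceeds the ratio threshold."""
    changed = sum(
        1 for c, p in zip(curr_distances, prev_distances)
        if abs(c - p) > distance_threshold
    )
    return changed / len(curr_distances) > ratio_threshold

# Half the pixels jumped from 5 m to 2 m between statistics moments:
moving = is_motion_scene([2, 2, 5, 5], [5, 5, 5, 5], 1.0, 0.25)  # True
```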
In the embodiment of the invention, after the current shooting scene is determined to be a motion scene, the focusing weight of the focusing lens of the monitoring device for shooting the current image of the current scene at the current position can be adjusted, and the definition evaluation value of the focusing lens of the monitoring device for shooting the current image of the current scene at the current position can be determined according to the adjusted focusing weight. For example, the focus weight of the current image may be adjusted according to a preset focus weight adjustment rule, or the focus weight of the current image may be adjusted by a preset focus weight adjustment model. It should be noted that, the embodiment of the present invention does not limit the focusing weight adjustment manner.
Fig. 6A is a graph of a sharpness evaluation value based on global focusing weights in a moving scene according to an embodiment of the present invention, and fig. 6B is a graph of a sharpness evaluation value based on TOF object distance focusing weights in a moving scene according to an embodiment of the present invention. As shown in fig. 6A, the sharpness evaluation value curve based on the global focusing weights has several peaks at once, and the differences between the peaks are small and hard to distinguish, so the unimodality of the curve is poor. As shown in fig. 6B, the curve determined based on the TOF object distance focusing weights in the motion scene has only two peaks, with a large, easily distinguished difference between them, so that curve has much better unimodality.
With this arrangement, the focusing weights of the monitored image in a moving scene can be adjusted adaptively, so that the focusing weights of the image are set reasonably in the moving scene, the focusing precision in the moving scene can be improved, and the focusing effect in the moving scene can be ensured.
Optionally, the motion scene includes a normal motion scene and a tracking scene; correspondingly, comparing the current TOF object distance statistical information with the previous TOF object distance statistical information, and judging whether the current shooting scene is a motion scene according to the change difference value of the object distance statistical information, wherein the method comprises the following steps: comparing the current TOF object distance statistical information with the previous TOF object distance statistical information, and determining the number of target pixel points of which the TOF object distance change difference value is larger than a first preset TOF object distance change difference value; calculating the ratio of the number of target pixels to the number of total pixels corresponding to the current TOF object distance statistical information; when the ratio is larger than the first preset ratio and smaller than the second preset ratio, determining that the current shooting scene is a common motion scene; and when the ratio is larger than a second preset ratio, determining the current shooting scene as a tracking scene.
A normal motion scene may refer to a motion scene in which the current scene picture moves while the monitoring device does not. A tracking scene may refer to a motion scene in which both the current scene picture and the monitoring device may be moving; specifically, the monitoring direction may change as the pan-tilt of the monitoring device rotates, or the monitoring device may zoom in to follow a scene gradually moving away from it. In the embodiment of the invention, the type of motion scene can be judged by comparing the TOF object distance difference of the same pixel between the current and previous TOF object distance statistics moments against a preset TOF object distance difference, where the preset TOF object distance difference may be a preset difference in pixel object distance between two adjacent statistics moments. Specifically, by comparing the current TOF object distance statistical information with the previous TOF object distance statistical information, the number of target pixels whose TOF object distance change exceeds the first preset TOF object distance change difference can be obtained, and the ratio of that number to the total number of pixels in the current TOF object distance statistical information is calculated. This ratio is compared with a first preset ratio and a second preset ratio: if it lies between the two, the current shooting scene can be judged to be a normal motion scene; if it exceeds the second preset ratio, the current shooting scene can be judged to be a tracking scene. A preset ratio may be a preset proportion of target pixels to the total number of pixels in the TOF object distance statistics.
It should be noted that the embodiment of the present invention does not limit the preset ratios, provided that the second preset ratio is larger than the first preset ratio.
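The two-threshold classification above can be sketched as follows (a minimal Python illustration; the function name, scene labels, and ratio values are assumptions for demonstration):

```python
def classify_motion_scene(changed_ratio, first_ratio, second_ratio):
    """Map the share of pixels whose TOF object distance changed onto a
    scene type: a moderate share means the picture moves while the
    camera is static (normal motion); a very large share suggests the
    camera itself is panning or zooming (tracking)."""
    if changed_ratio > second_ratio:
        return "tracking"
    if changed_ratio > first_ratio:
        return "normal motion"
    return "static"

# With illustrative thresholds of 0.1 and 0.6:
assert classify_motion_scene(0.9, 0.1, 0.6) == "tracking"
assert classify_motion_scene(0.3, 0.1, 0.6) == "normal motion"
```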
With this arrangement, the type of motion scene can be judged from the TOF object distance difference of the same pixel between the current and previous TOF object distance statistics moments.
Optionally, comparing the current TOF object distance statistic information with the previous TOF object distance statistic information, and judging whether the current shooting scene is a motion scene according to a variation difference value of the object distance statistic information, including: determining a corresponding current subject object distance based on the current TOF object distance statistics and determining a corresponding last subject object distance based on the last TOF object distance statistics; when the difference value of the current main body object distance and the last main body object distance is larger than the second preset TOF object distance difference value, determining that the current shooting scene is a common motion scene; or determining the number of first pixels occupied by the corresponding current main body object distance based on the current TOF object distance statistical information, and determining the number of second pixels occupied by the corresponding last main body object distance based on the last TOF object distance statistical information; and when the difference value between the number of the first pixel points and the number of the second pixel points is larger than the difference value of the preset pixel points, determining that the current shooting scene is a common motion scene.
In the embodiment of the invention, whether the current motion scene is a normal motion scene can be judged from the difference in subject object distance between the current and previous TOF object distance statistics moments. Specifically, the current subject object distance and the previous subject object distance can be determined from the current and previous TOF object distance statistical information respectively, and their difference calculated and compared with the second preset TOF object distance difference: if the difference is larger, the scene can be judged a normal motion scene; otherwise, a non-normal motion scene. Alternatively, the judgment can be based on the difference in the number of pixels occupied by the subject object distance between the two statistics moments. Specifically, the first pixel number (the pixels occupied by the subject object distance in the current TOF object distance statistical information) and the second pixel number (the pixels occupied by it in the previous TOF object distance statistical information) can be determined, and their difference compared with the preset pixel difference. If the difference is larger than the preset pixel difference, the scene can be judged a normal motion scene; otherwise, a non-normal motion scene.
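The subject-distance-based check above can be sketched as follows (a minimal Python illustration; the function name, the {distance: pixel count} representation of the statistics, and the threshold values are assumptions for demonstration):

```python
def subject_shift_detected(curr_stats, prev_stats,
                           distance_diff_threshold, pixel_diff_threshold):
    """curr_stats / prev_stats map each object distance to its pixel
    count. The scene is flagged as a normal motion scene when either the
    subject object distance itself jumps, or the number of pixels it
    occupies changes by more than a threshold."""
    curr_subject = max(curr_stats, key=curr_stats.get)
    prev_subject = max(prev_stats, key=prev_stats.get)
    if abs(curr_subject - prev_subject) > distance_diff_threshold:
        return True
    return abs(curr_stats[curr_subject] - prev_stats[prev_subject]) > pixel_diff_threshold

# Subject moved from 5 m to 2 m between statistics moments:
assert subject_shift_detected({2: 90, 5: 10}, {5: 80, 2: 20}, 1.0, 200) is True
```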
Optionally, when determining that the current shooting scene is a motion scene, adjusting the focusing weight of the current image of the current shooting scene by the focusing lens of the monitoring device at the current position, including: when the current shooting scene is determined to be a common motion scene, setting the focusing lens of the monitoring device at the current position, and setting the focusing weight of a first target area in a current image of the current scene to zero; maintaining the focusing weight of a second target area in the current image unchanged, or updating the focusing weight of the second target area based on the current TOF object distance statistical information corresponding to the second target area of the current image; the first target area is a moving target area in the current image, and the second target area is a non-moving target area in the current image.
The moving target region may refer to a region where a moving target is located, and the non-moving target region may refer to a region where a non-moving target is located. In the embodiment of the invention, the focusing weight can be correspondingly adjusted according to the type of the motion scene. Specifically, if the motion scene is a normal motion scene, the focusing weight of the first target area in the current image of the current scene shot by the focusing lens of the monitoring device at the current position can be set to zero, and the focusing weight of the second target area in the current image is kept unchanged or updated based on the current TOF object distance statistical information corresponding to the second target area of the current image. Taking a road passing scene as an example, the change of object distance statistical information of the scene shows a fluctuation trend, the fluctuation area is fixed, the percentage of the fluctuation area is smaller than a certain threshold value, the weight of the definition evaluation value of the moving target area (namely the first target area) can be cleared, the definition evaluation value is calculated only according to the non-moving target area (namely the second target area), namely, the road surface and other static objects are taken as focusing weight areas, and therefore the definition evaluation value calculation interference caused by movement can be shielded.
Optionally, when the current shooting scene is determined to be a motion scene, adjusting the focusing weight of the current image of the current shooting scene with the focus lens of the monitoring device at the current position includes: when the current shooting scene is determined to be a tracking scene, with the focus lens of the monitoring device at the current position, setting the focusing weight of the second target area in the current image of the current scene to zero; and keeping the focusing weight of the first target area in the current image unchanged, or updating it based on the current TOF object distance statistical information corresponding to the first target area of the current image; the first target area being the moving target area in the current image and the second target area being the non-moving target area in the current image.
In the embodiment of the invention, if the motion scene is a tracking scene, the focusing weight of the second target area in the current image of the current scene shot with the focus lens of the monitoring device at the current position can be set to zero, while the focusing weight of the first target area in the current image is kept unchanged or updated based on the current TOF object distance statistical information corresponding to the first target area. Since the purpose of a tracking scene is to track, the object distance statistics change markedly; but, unlike the fluctuating change caused by the interference of a moving object, the change does not jump for a few frames and then immediately restore the original object distance information. It should be noted that the focusing weight adjustment strategy for a tracking scene is exactly the opposite of that for a normal motion scene: the tracking scene clears the focusing weight of the area corresponding to the non-subject object distance (i.e. the second target area) and computes, or keeps, the weight only for the area corresponding to the subject object distance (i.e. the first target area).
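The opposite weight adjustment strategies for the two scene types can be sketched together (a minimal Python illustration; the per-region list representation, function name, and sample values are assumptions for demonstration):

```python
def adjust_focus_weights(weights, moving_mask, scene_type):
    """weights: per-region focusing weights; moving_mask: True where a
    region belongs to the moving target. In a normal motion scene the
    moving regions are zeroed (focus on the static background); in a
    tracking scene the static regions are zeroed instead (focus on the
    tracked target)."""
    zero_moving = scene_type == "normal motion"
    return [
        0.0 if (moving == zero_moving) else w
        for w, moving in zip(weights, moving_mask)
    ]

# A passing car on a static road vs. a camera tracking that car:
road = adjust_focus_weights([1.0, 0.8, 0.5], [True, False, False], "normal motion")
# road == [0.0, 0.8, 0.5]   (car region ignored, road surface kept)
chase = adjust_focus_weights([1.0, 0.8, 0.5], [True, False, False], "tracking")
# chase == [1.0, 0.0, 0.0]  (tracked car kept, background ignored)
```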
With this arrangement, the scheme can adaptively adjust the focusing weights for different motion scenes, thereby improving the focusing precision in motion scenes and ensuring the focusing effect in motion scenes.
Fig. 7 is a schematic structural diagram of an autofocus device according to an embodiment of the present invention. As shown in fig. 7, the apparatus includes: an object distance statistics acquisition module 710, a focus weight determination module 720, a focus position determination module 730, and an autofocus module 740. Wherein:
the object distance statistical information acquisition module 710 is configured to acquire TOF object distance statistical information of a current scene shot by the monitoring device;
a focus weight determining module 720, configured to determine a focus weight of an image of the current scene captured by the monitoring device based on the TOF object distance statistics; the focusing weight is the weight of the definition evaluation value of each region of the image of the current scene shot by the monitoring equipment;
a focus position determining module 730, configured to determine, based on the focus weights, positions of a focus lens of the monitoring device, capture sharpness evaluation values of an image of a current scene, and determine, based on the sharpness evaluation values, a focus position of the focus lens from the positions;
An autofocus module 740 for autofocus the focusing lens to the in-focus position.
The embodiment of the invention provides an automatic focusing device, which is used for acquiring TOF object distance statistical information of a current scene shot by monitoring equipment; determining focusing weight of an image of the current scene shot by the monitoring equipment based on TOF object distance statistical information; the focusing weight is the weight of the definition evaluation value of each region of the image of the current scene shot by the monitoring equipment; determining the focus lens of the monitoring device at each position based on the focus weight, shooting the definition evaluation value of the image of the current scene, and determining the focus position of the focus lens from each position based on each definition evaluation value; the focusing lens is automatically focused to an in-focus position. According to the technical scheme provided by the embodiment of the invention, the focusing weight of the shot image can be determined according to the TOF object distance statistical information, so that the sensitivity of the definition evaluation value is improved, the focusing position of the focusing lens in the monitoring equipment can be accurately determined according to the definition evaluation value of the focusing lens in each position area, and the focusing effect and the focusing efficiency can be effectively improved.
Optionally, the focusing weight determining module 720 includes:
An object distance determining unit, configured to determine a subject object distance and at least one non-subject object distance of an image of a current scene captured by the monitoring device based on the TOF object distance statistical information; the main object distance is the object distance with the highest pixel point occupation ratio in the TOF object distance statistical information;
a first focusing weight acquisition unit, configured to acquire a first focusing weight of a corresponding region of the subject object distance in an image of the current scene shot by the monitoring device;
a second dimer Jiao Quanchong determining unit configured to determine a second dimer Jiao Quanchong of a corresponding region of a current non-subject object distance in an image of a current scene captured by the monitoring device, based on the subject object distance, the first focus weight, and the current non-subject object distance; wherein the first focus weight is greater than the second focus weight.
Optionally, the second focusing weight determining unit is configured to:
determining a distance deviation degree of the current non-subject object distance relative to the subject object distance based on the subject object distance and the current non-subject object distance;
and determining a second focusing weight of a corresponding region of the current non-subject object distance in an image of the current scene shot by the monitoring equipment based on the first focusing weight and the distance deviation degree.
Optionally, the focal position determining module 730 is configured to:
based on a hill climbing search algorithm, moving a focusing lens of the monitoring equipment to each position, shooting an image of a current scene when the focusing lens of the monitoring equipment is positioned at each position, and determining a definition evaluation value corresponding to the image based on the focusing weight until the definition evaluation value is reduced for the first time;
and taking the corresponding position when the definition evaluation value is maximum as the focusing position of the focusing lens.
Optionally, the focal position determining module 730 includes:
the object distance statistical information acquisition unit is used for acquiring, for each position, the current TOF object distance statistical information corresponding to the current scene shot by the monitoring device when the focus lens is at the current position, during the process of determining the sharpness evaluation values of images of the current scene for the focus lens of the monitoring device at the respective positions based on the focusing weight;
the motion scene judging unit is used for comparing the current TOF object distance statistical information with the previous TOF object distance statistical information and judging whether the current shooting scene is a motion scene or not according to the change difference value of the object distance statistical information; the monitoring equipment shoots TOF object distance statistical information corresponding to a previous scene when the focusing lens is at a previous position;
the focusing weight adjusting unit is used for adjusting, when the current shooting scene is determined to be a motion scene, the focusing weight of the current image of the current scene shot with the focusing lens of the monitoring equipment at the current position;
accordingly, the focal position determining module 730 is configured to:
and determining, based on the adjusted focusing weight, the definition evaluation value of the current image of the current scene shot with the focusing lens of the monitoring equipment at the current position.
Optionally, the motion scene determining unit includes:
comparing the current TOF object distance statistical information with the previous TOF object distance statistical information, and determining the number of target pixel points of which the TOF object distance change difference value is larger than a first preset TOF object distance change difference value;
calculating the ratio of the number of the target pixel points to the number of the total pixel points corresponding to the current TOF object distance statistical information;
when the ratio is larger than a first preset ratio and smaller than a second preset ratio, determining that the current shooting scene is a common motion scene; and when the ratio is larger than the second preset ratio, determining that the current shooting scene is a tracking scene.
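A rough sketch of this ratio-based classification, comparing two TOF object-distance maps pixel by pixel; all thresholds here are illustrative placeholders, not values from the patent:

```python
import numpy as np

def classify_scene(prev_depth, cur_depth, delta_thresh=0.3,
                   ratio_lo=0.1, ratio_hi=0.5):
    """Hypothetical sketch: count pixels whose TOF object distance
    changed by more than a threshold, then classify the scene by the
    ratio of changed pixels to total pixels."""
    changed = np.abs(cur_depth - prev_depth) > delta_thresh
    ratio = changed.sum() / changed.size
    if ratio > ratio_hi:
        return "tracking"       # most of the frame changed
    if ratio > ratio_lo:
        return "common_motion"  # a moving object in a mostly static frame
    return "static"
```

The intuition: a tracking (panning) camera changes the distance of nearly every pixel, while a moving object in a static scene changes only a bounded region.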
Optionally, the motion scene determining unit includes:
determining a corresponding current subject object distance based on the current TOF object distance statistics and determining a corresponding last subject object distance based on the last TOF object distance statistics; when the difference value of the current subject object distance and the last subject object distance is larger than a second preset TOF object distance difference value, determining that the current shooting scene is a common motion scene; or,
determining the number of first pixels occupied by the corresponding current main object distance based on the current TOF object distance statistical information, and determining the number of second pixels occupied by the corresponding last main object distance based on the last TOF object distance statistical information; and when the difference value between the number of the first pixel points and the number of the second pixel points is larger than the preset pixel point difference value, determining that the current shooting scene is a common motion scene.
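The two alternative subject-based tests above can be sketched together as follows, taking the subject object distance as the histogram peak of the TOF statistics; the thresholds, bin count, and names are assumptions for illustration:

```python
import numpy as np

def subject_moved(prev_depth, cur_depth, dist_thresh=0.5,
                  count_thresh=20, bins=50):
    """Hypothetical sketch: flag a common motion scene if either the
    subject object distance (histogram peak) or the number of pixels
    at that distance changed by more than a preset threshold."""
    def subject(depth):
        counts, edges = np.histogram(depth, bins=bins)
        k = np.argmax(counts)
        return 0.5 * (edges[k] + edges[k + 1]), int(counts[k])

    d_prev, n_prev = subject(prev_depth)
    d_cur, n_cur = subject(cur_depth)
    return bool(abs(d_cur - d_prev) > dist_thresh or
                abs(n_cur - n_prev) > count_thresh)
```

Either condition alone suffices: a distance jump means the subject moved toward or away from the camera; a pixel-count jump means the subject's apparent size changed.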
Optionally, the focus weight adjustment unit is configured to:
when the current shooting scene is determined to be a common motion scene, setting, with the focusing lens of the monitoring equipment at the current position, the focusing weight of a first target area in the current image of the current scene to zero; maintaining the focusing weight of a second target area in the current image unchanged, or updating the focusing weight of the second target area based on the current TOF object distance statistical information corresponding to the second target area of the current image;
The first target area is a moving target area in the current image, and the second target area is a non-moving target area in the current image.
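The masking step described above reduces to zeroing the weights inside the moving target region; a minimal sketch, with all names assumed:

```python
import numpy as np

def adjust_weights_common_motion(weights, moving_mask):
    """Hypothetical sketch: in a common motion scene, zero the focusing
    weight of the moving target area (first target area) so the
    sharpness evaluation is driven only by the static background."""
    adjusted = weights.copy()
    adjusted[moving_mask] = 0.0
    # The non-moving second target area keeps its original weights.
    return adjusted
```

In a tracking scene the roles would simply be swapped: the non-moving background weights are zeroed and the moving target keeps its weights.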
Optionally, the focus weight adjustment unit is configured to:
when the current shooting scene is determined to be a tracking scene, setting, with the focusing lens of the monitoring device at the current position, the focusing weight of a second target area in the current image of the current scene to zero; maintaining the focusing weight of a first target area in the current image unchanged, or updating the focusing weight of the first target area based on the current TOF object distance statistical information corresponding to the first target area of the current image;
the first target area is a moving target area in the current image, and the second target area is a non-moving target area in the current image.
The device can execute the method provided by all the embodiments of the invention, and has the corresponding functional modules and beneficial effects of executing the method. Technical details not described in detail in the embodiments of the present invention can be found in the methods provided in all the foregoing embodiments of the present invention.
Embodiments of the present invention also provide a storage medium containing computer-executable instructions for performing the autofocus methods provided by embodiments of the present invention when executed by a computer processor.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; nonvolatile memory such as flash memory, magnetic media (e.g., a hard disk) or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a second, different computer system connected to the first computer system through a network such as the Internet. The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium containing computer-executable instructions provided in the embodiments of the present invention is not limited to the autofocus operations described above, and may also perform the related operations in the autofocus methods provided in any of the embodiments of the present invention.
The embodiment of the invention provides a monitoring device, and the automatic focusing device provided by the embodiment of the invention can be integrated in the monitoring device. Fig. 8 is a schematic structural diagram of a monitoring device according to an embodiment of the present invention. The monitoring device 800 may include: a memory 801, a processor 802 and a computer program stored on the memory 801 and executable by the processor, the processor 802 implementing the autofocus method according to embodiments of the invention when executing the computer program.
The monitoring equipment provided by the embodiment of the invention acquires TOF object distance statistical information of the current scene shot by the monitoring equipment; determining focusing weight of an image of the current scene shot by the monitoring equipment based on TOF object distance statistical information; the focusing weight is the weight of the definition evaluation value of each region of the image of the current scene shot by the monitoring equipment; determining the focus lens of the monitoring device at each position based on the focus weight, shooting the definition evaluation value of the image of the current scene, and determining the focus position of the focus lens from each position based on each definition evaluation value; the focusing lens is automatically focused to an in-focus position. According to the technical scheme provided by the embodiment of the invention, the focusing weight of the shot image can be determined according to the TOF object distance statistical information, so that the sensitivity of the definition evaluation value is improved, the focusing position of the focusing lens in the monitoring equipment can be accurately determined according to the definition evaluation value of the focusing lens in each position area, and the focusing effect and the focusing efficiency can be effectively improved.
The automatic focusing device, the storage medium and the monitoring device provided in the above embodiments can execute the automatic focusing method provided in any embodiment of the present invention, and have the corresponding functional modules and beneficial effects of executing the method. Technical details not described in detail in the above embodiments may be found in the autofocus method provided by any of the embodiments of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (11)

1. An autofocus method comprising:
acquiring TOF object distance statistical information of a current scene shot by monitoring equipment;
determining focusing weight of an image of the current scene shot by the monitoring equipment based on the TOF object distance statistical information; the focusing weight is the weight of the definition evaluation value of each region of the image of the current scene shot by the monitoring equipment;
determining, based on the focusing weights, the definition evaluation values of images of the current scene shot with the focusing lens of the monitoring device at each position, and determining the in-focus position of the focusing lens from the positions based on the definition evaluation values;
the focusing lens is automatically focused to the in-focus position.
2. The method of claim 1, wherein determining focus weights for images of the current scene taken by the monitoring device based on the TOF object distance statistics comprises:
determining a subject object distance and at least one non-subject object distance of an image of a current scene taken by the monitoring device based on the TOF object distance statistics; the main object distance is the object distance with the highest pixel point occupation ratio in the TOF object distance statistical information;
acquiring a first focusing weight of a corresponding region of the object distance of the main body in an image of the current scene shot by the monitoring equipment;
determining a second focusing weight of a corresponding region of a current non-subject object distance in an image of a current scene captured by the monitoring device based on the subject object distance, the first focusing weight, and the current non-subject object distance; wherein the first focusing weight is greater than the second focusing weight.
3. The method of claim 2, wherein determining a second focusing weight of a corresponding region of the current non-subject object distance in an image of the current scene captured by the monitoring device based on the subject object distance, the first focusing weight, and the current non-subject object distance comprises:
determining a distance deviation degree of the current non-subject object distance relative to the subject object distance based on the subject object distance and the current non-subject object distance;
and determining a second focusing weight of a corresponding region of the current non-subject object distance in an image of the current scene shot by the monitoring equipment based on the first focusing weight and the distance deviation degree.
4. The method according to claim 1, wherein, in the process of determining, based on the focusing weight, the definition evaluation values of images of the current scene shot with the focusing lens of the monitoring device at each position, the method further comprises:
for each position, acquiring the current TOF object distance statistical information corresponding to the current scene shot by the monitoring equipment when the focusing lens is at the current position;
comparing the current TOF object distance statistical information with the previous TOF object distance statistical information, and judging whether the current shooting scene is a motion scene according to the change difference value of the object distance statistical information; wherein the previous TOF object distance statistical information is the TOF object distance statistical information corresponding to the previous scene shot by the monitoring equipment when the focusing lens was at the previous position;
And when the current shooting scene is determined to be a motion scene, adjusting the focusing weight of the current image of the current shooting scene at the current position of the focusing lens of the monitoring equipment.
5. The method according to claim 4, wherein comparing the current TOF object distance statistical information with the previous TOF object distance statistical information, and judging whether the current shooting scene is a motion scene according to the change difference value of the object distance statistical information, comprises:
comparing the current TOF object distance statistical information with the previous TOF object distance statistical information, and determining the number of target pixel points of which the TOF object distance change difference value is larger than a first preset TOF object distance change difference value;
calculating the ratio of the number of the target pixel points to the number of the total pixel points corresponding to the current TOF object distance statistical information;
when the ratio is larger than a first preset ratio and smaller than a second preset ratio, determining that the current shooting scene is a common motion scene; and when the ratio is larger than the second preset ratio, determining that the current shooting scene is a tracking scene.
6. The method of claim 4, wherein comparing the current TOF object distance statistic with a previous TOF object distance statistic, and determining whether the current shooting scene is a motion scene based on a difference in changes in object distance statistic, comprises:
determining a corresponding current subject object distance based on the current TOF object distance statistics and determining a corresponding last subject object distance based on the last TOF object distance statistics; when the difference value of the current subject object distance and the last subject object distance is larger than a second preset TOF object distance difference value, determining that the current shooting scene is a common motion scene; or,
determining the number of first pixels occupied by the corresponding current main object distance based on the current TOF object distance statistical information, and determining the number of second pixels occupied by the corresponding last main object distance based on the last TOF object distance statistical information; and when the difference value between the number of the first pixel points and the number of the second pixel points is larger than the preset pixel point difference value, determining that the current shooting scene is a common motion scene.
7. The method according to claim 5 or 6, wherein when determining that the current shooting scene is a moving scene, adjusting the focus weight of the current image of the current scene, with the focus lens of the monitoring device at the current position, comprises:
when the current shooting scene is determined to be a common motion scene, setting the focusing weight of a first target area in a current image of the current shooting scene to be zero at the current position by a focusing lens of the monitoring equipment; maintaining the focusing weight of a second target area in the current image unchanged, or updating the focusing weight of the second target area based on the current TOF object distance statistical information corresponding to the second target area of the current image;
The first target area is a moving target area in the current image, and the second target area is a non-moving target area in the current image.
8. The method of claim 5, wherein adjusting the focusing weight of the current image of the current scene, with the focusing lens of the monitoring device at the current position, when the current shooting scene is determined to be a motion scene comprises:
when the current shooting scene is determined to be a tracking scene, setting, with the focusing lens of the monitoring device at the current position, the focusing weight of a second target area in the current image of the current scene to zero; maintaining the focusing weight of a first target area in the current image unchanged, or updating the focusing weight of the first target area based on the current TOF object distance statistical information corresponding to the first target area of the current image;
the first target area is a moving target area in the current image, and the second target area is a non-moving target area in the current image.
9. An autofocus device comprising:
the object distance statistical information acquisition module is used for acquiring TOF object distance statistical information of the current scene shot by the monitoring equipment;
The focusing weight determining module is used for determining the focusing weight of the image of the current scene shot by the monitoring equipment based on the TOF object distance statistical information; the focusing weight is the weight of the definition evaluation value of each region of the image of the current scene shot by the monitoring equipment;
a focusing position determining module, configured to determine, based on the focusing weights, the definition evaluation values of images of the current scene shot with the focusing lens of the monitoring device at each position, and to determine the in-focus position of the focusing lens from the positions based on the definition evaluation values;
and the automatic focusing module is used for automatically focusing the focusing lens to the focusing position.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the autofocus method as claimed in any one of claims 1-8.
11. A monitoring device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the auto-focusing method according to any one of claims 1-8 when executing the computer program.
CN202111660291.3A 2021-12-31 2021-12-31 Automatic focusing method and device, storage medium and monitoring equipment Pending CN116437203A (en)

Publication number: CN116437203A (published 2023-07-14); Family ID: 87091225.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination