CN110572577B - Method, device, equipment and medium for tracking and focusing - Google Patents
- Publication number
- CN110572577B (application CN201910905486.6A)
- Authority
- CN
- China
- Prior art keywords
- object distance
- frame
- current frame
- focal length
- difference
- Prior art date
- Legal status
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/671—Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects
Abstract
The invention discloses a method, a device, equipment and a medium for tracking and focusing, which are used for solving the problem of untimely focusing in the tracking process. The method comprises the following steps: receiving the object distance between the image acquisition equipment and a measured object, measured by a radar for the current frame, and determining the object distance difference between every two adjacent frames among the current frame and a preset number of image frames before it; predicting the object distance of the next frame according to these object distance differences, the currently stored parameter value corresponding to each object distance difference, and the linear prediction model corresponding to those parameter values; and adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame. Because the object distance of the next frame is predicted with a linear prediction model from the object distance differences of adjacent frames, the algorithm is simple and involves no excessive computation; the focal length can be adjusted in advance according to the predicted object distance, so the focusing speed meets the real-time requirement and the focusing effect is ensured.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, a device, and a medium for tracking focus.
Background
With the deployment of intelligent solutions in the security field, the requirements on the imaging definition of a tracked object during video tracking are becoming higher and higher.
In the prior art, when focusing on a tracked object, two approaches are commonly adopted: strengthening the FV (focus value) weight of a target area, and establishing a background model. In the first approach, a target area is set during tracking and the FV value of the target area is weighted more heavily than that of non-target areas; when the moving distance of the target exceeds a certain threshold, or the target area becomes blurred, a focusing operation is triggered, the focusing lens is driven to move, a search-end judgment is made, and focusing is finally completed. In the tracking process, this method performs automatic focusing only when the position of the tracked target deviates obviously or the image becomes obviously blurred, so the target cannot remain clear throughout tracking; when the scene becomes complex, the tracked target stays blurred because focusing is untimely or inaccurate.
In the second approach, a modeling method is selected to complete background modeling of the source image, the dynamic target is detected according to the background model, and the tracked target is then focused with a focusing algorithm. However, establishing the model requires a large amount of training data, and the background model must be updated in time, which consumes considerable space and time and easily leads to untimely focusing.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a medium for tracking and focusing, which are used for solving the problem of untimely focusing in the tracking process.
The embodiment of the invention provides a method for tracking and focusing, which comprises the following steps:
receiving the object distance between image acquisition equipment and a measured object measured by a radar for a current frame, and determining the object distance difference between the current frame and every two adjacent frames in a preset number of image frames before the current frame;
predicting the object distance of the next frame according to the object distance difference of every two adjacent frames, each parameter value corresponding to each object distance difference stored currently and the linear prediction model corresponding to each parameter value;
and adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
Further, after predicting the object distance of the next frame, the method further comprises:
updating the prediction times stored in the current iteration period;
judging whether the updated prediction times are larger than a preset number threshold value or not;
if yes, updating the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model.
Further, the updating of the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model includes:
calculating the object distance difference corresponding to each two adjacent image frames;
and updating each currently stored parameter value corresponding to the linear prediction model by a least square method or a gradient descent method, according to the object distance differences adopted when predicting the object distance of each image frame, the difference between the actual object distance of each image frame and the actual object distance of the corresponding previous frame, and the linear prediction model.
Further, after predicting the next frame object distance and before adjusting the focal length of the image acquisition device according to the predicted next frame object distance, the method comprises:
judging whether the difference value of the focal length corresponding to the predicted object distance and the focal length corresponding to the current frame is within the focal depth range corresponding to the current frame;
if yes, adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
Further, if the difference is not within the depth-of-focus range corresponding to the current frame, the method further includes:
determining the range of the adjusted focal length according to the focal length corresponding to the current frame and the focal depth range;
and adjusting the focal length of the image acquisition equipment according to any value in the range.
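A minimal sketch of the depth-of-focus check and range clamping described above (all names are hypothetical; the patent does not give an implementation, permits any value within the admissible range, and the nearest edge of that range is chosen here):

```python
def choose_focus(predicted_focus, current_focus, depth_of_focus):
    """Return the focus value to apply for the next frame.

    If the difference between the focus corresponding to the predicted
    object distance and the current focus lies within the depth-of-focus
    range, the predicted focus is used directly; otherwise a value inside
    the admissible range around the current focus is used (here, the
    nearest edge of that range).
    """
    diff = predicted_focus - current_focus
    if abs(diff) <= depth_of_focus:
        return predicted_focus
    return current_focus + depth_of_focus if diff > 0 else current_focus - depth_of_focus
```

For example, with a depth of focus of 5 units, a prediction of 402 from a current focus of 400 is applied as-is, while a prediction of 420 is clamped to 405.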
Further, predicting the object distance of the next frame according to the object distance difference between every two adjacent frames, the currently stored parameter values corresponding to each object distance difference, and the linear prediction models corresponding to the parameter values includes:
determining the object distance difference between the next frame and the current frame according to the object distance difference between every two adjacent frames, each parameter value corresponding to each object distance difference stored currently and the linear prediction model corresponding to each parameter value;
and determining the object distance of the next frame according to the object distance of the current frame and the object distance difference between the next frame and the current frame.
An embodiment of the present invention further provides a device for tracking and focusing, including:
the statistical data module is used for receiving the object distance between the image acquisition equipment measured by the radar for the current frame and the measured object, and determining the object distance difference between the current frame and every two adjacent frames in the preset number of image frames before the current frame;
the prediction module is used for predicting the object distance of the next frame according to the object distance difference of each two adjacent frames, each parameter value corresponding to each object distance difference stored currently and the linear prediction model corresponding to each parameter value;
and the focusing module is used for adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
Further, the apparatus further comprises:
and the updating module is used for updating the prediction times stored in the current iteration cycle after the prediction module predicts the object distance of the next frame, judging whether the updated prediction times are larger than a preset number threshold value, and if so, updating the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model.
Further, the updating module is specifically configured to calculate the object distance difference corresponding to every two adjacent image frames, and to update each currently stored parameter value corresponding to the linear prediction model by a least square method or a gradient descent method, according to the object distance differences adopted when predicting the object distance of each image frame, the difference between the actual object distance of each image frame and the actual object distance of the corresponding previous frame, and the linear prediction model.
Further, the focusing module is further configured to determine whether a difference between a focal length corresponding to the predicted object distance and a focal length corresponding to the current frame is within a focal depth range corresponding to the current frame; if yes, adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
Further, the focusing module is further configured to determine a range in which the adjusted focal length is located according to the focal length corresponding to the current frame and the focal depth range if the difference is not within the focal depth range corresponding to the current frame; and adjusting the focal length of the image acquisition equipment according to any value in the range.
Further, the prediction module is specifically configured to determine an object distance difference between the next frame and the current frame according to the object distance difference between each two adjacent frames, each currently stored parameter value corresponding to each object distance difference, and a linear prediction model corresponding to each parameter value; and determining the object distance of the next frame according to the object distance of the current frame and the object distance difference between the next frame and the current frame.
An embodiment of the present invention further provides an electronic device, which includes a processor, and the processor is configured to implement the steps of any one of the above methods for tracking and focusing when executing a computer program stored in a memory.
An embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of any one of the above-mentioned methods for tracking focus.
The embodiment of the invention provides a method, a device, equipment and a medium for tracking and focusing, wherein the method obtains the actual object distance of a tracked object through radar ranging, predicts the object distance of the next frame by using a linear prediction model according to the object distance difference of every two adjacent frames in the current frame and the preset number of image frames before the current frame, and adjusts the focal length of image acquisition equipment by combining the predicted object distance of the next frame. According to the embodiment of the invention, the object distance of the next frame is predicted according to the object distance difference of every two adjacent frames in the current frame and the preset number of image frames before the current frame and the linear prediction model stored at present, the algorithm is simple, and the object distance can be obtained in time, so that the timeliness of focusing is ensured, and the whole focusing effect is enabled to be clear.
Drawings
Fig. 1 is a schematic diagram of a tracking process according to an embodiment of the present invention;
FIG. 2 is a schematic view of a zoom following curve at different object distances according to an embodiment of the present invention;
FIG. 3 is a flowchart of a specific tracking focusing method;
FIG. 4 is a schematic structural diagram of an apparatus for tracking focus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to ensure timeliness of focusing in a tracking process, embodiments of the present invention provide a method, an apparatus, a device, and a medium for tracking focusing.
Example 1:
fig. 1 is a schematic diagram of a process of tracking focus according to an embodiment of the present invention, where the process includes the following steps:
s101: and receiving the object distance between the image acquisition equipment and a measured object measured by the radar for the current frame, and determining the object distance difference between every two adjacent frames in the current frame and the preset number of image frames before the current frame.
The method for tracking and focusing provided by the embodiment of the invention is applied to image acquisition equipment on which a radar is installed. The image acquisition equipment may be an all-in-one camera equipped with a telephoto movement and a radar, or a dome camera equipped with a telephoto movement and a radar; this is not specifically limited in the embodiment of the invention.
When the image acquisition equipment tracks an object, the radar installed on it measures, through ultrasonic waves, the object distance between the image acquisition equipment and the measured object, where the measured object is the object being tracked.
In the embodiment of the invention, to facilitate object distance prediction, the object distance measured by the radar is saved for each image frame. To further facilitate prediction, the object distance difference of every two adjacent frames can also be saved, where the object distance difference is the object distance of the later of the two adjacent image frames minus that of the earlier one. Because of the linear prediction model adopted, the object distances of all frames before the current frame may not be needed when predicting; it suffices to obtain, according to the model, the object distance difference between every two adjacent frames within a set number of image frames before the current frame.
For example, if the current frame is the tenth frame and the preset number is 4, the object distance of the current tenth frame and the object distance difference between every two adjacent frames in the four frames before the tenth frame may be obtained, that is, the object distance difference between the tenth frame and the ninth frame, the object distance difference between the ninth frame and the eighth frame, the object distance difference between the eighth frame and the seventh frame, and the object distance difference between the seventh frame and the sixth frame may be obtained respectively.
It should be noted that the preset number is determined by the stored linear prediction model: only as many sets of object distance differences as the linear prediction model requires need to be stored.
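The bookkeeping above can be sketched as follows (an illustrative sketch only; the function name is hypothetical):

```python
def adjacent_differences(distances, preset_n=4):
    """Object-distance differences of every two adjacent frames among the
    current frame and the preset_n frames before it (later frame minus
    earlier frame), computed from per-frame radar measurements given
    oldest first."""
    recent = distances[-(preset_n + 1):]
    return [recent[i + 1] - recent[i] for i in range(len(recent) - 1)]

# Frames 6..10 measured at 6, 7, 8, 9 and 10 metres -> four differences
print(adjacent_differences([6, 7, 8, 9, 10]))  # [1, 1, 1, 1]
```

This matches the tenth-frame example in the text: with a preset number of 4, only the differences between frames 10-9, 9-8, 8-7 and 7-6 are kept.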
S102: and predicting the object distance of the next frame according to the object distance difference of every two adjacent frames, each parameter value corresponding to each object distance difference stored currently and the linear prediction model corresponding to each parameter value.
To ensure the timeliness of focusing, the embodiment of the invention predicts the object distance with a linear prediction model. Since the model is a linear function of the object distance differences, the object distance can be predicted simply, conveniently and in time, thereby ensuring that focusing is timely.
Specifically, in order to predict the object distance, the linear prediction model stored in the embodiment of the present invention includes a plurality of parameters, and currently stores a parameter value of each parameter, and the correspondence between the linear prediction model and the parameters may be represented by the following formula:
y = x0*w0 + x1*w1 + …… + xn*wn + b
where wi is the parameter corresponding to the i-th object distance difference, xi is each object distance difference, b is a further parameter, and y is the predicted object distance difference. Specifically, w0 is the parameter corresponding to the object distance difference between the current frame and the previous frame, w1 is the parameter corresponding to the object distance difference between the previous frame and the frame before it, and so on for the remaining parameters.
For convenience of prediction, the parameter value of the parameter corresponding to each object distance difference is stored in the image acquisition equipment. After the object distance difference between every two adjacent frames among the current frame and the preset number of image frames before it is determined, the object distance difference between the next frame and the current frame can be predicted with the above formula from the currently stored parameter values and the object distance differences; since the object distance of the current frame has been measured, the object distance of the next frame can then be predicted.
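The prediction step can be sketched in a few lines (names are hypothetical; b is the bias parameter of the model):

```python
def predict_next_object_distance(d_current, diffs, weights, b=0.0):
    """Apply the linear prediction model y = sum(w_i * x_i) + b to the
    stored object-distance differences, then add the predicted
    difference y to the measured object distance of the current frame."""
    y = sum(w * x for w, x in zip(weights, diffs)) + b
    return d_current + y

# With weights {1, 0, 0, 0} and a most recent difference of 2,
# a current distance of 10 yields a predicted next-frame distance of 12.
print(predict_next_object_distance(10, [2, 1, 1, 1], [1, 0, 0, 0]))
```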
S103: and adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
When the object distance of the next frame has been predicted, the focal length corresponding to that object distance is determined based on a zoom following curve, and the focal length is adjusted accordingly. A zoom following curve is generally a curve with a certain radian; lens manufacturers provide zoom following curves at different object distances when a lens leaves the factory. Physically, the curve describes how, at a specified object distance, the zoom motor and the focus motor must move along it for the image to remain clear throughout. Fig. 2 is a schematic diagram of zoom following curves at different object distances provided by the embodiment of the present invention; in Fig. 2, the horizontal axis represents the zoom position and the vertical axis represents the focus position.
Specifically, taking Fig. 2 as an example, the zoom following curve at the corresponding object distance can be found according to the predicted object distance. Since the image acquisition equipment employed here does not need to change its zoom position, the value of the zoom position is fixed on the curves in the figure. Therefore, according to the zoom position of the image acquisition equipment, the corresponding focus position is found on the zoom following curve of the corresponding object distance, and the focal length is adjusted according to that focus position.
For example, assuming that the predicted object distance is 5 meters and the zoom position of the current image acquisition equipment is 301, the zoom following curve for an object distance of 5 meters can be found in Fig. 2; on that curve the focus position corresponding to zoom position 301 is 400, so the focus position of the current image acquisition equipment is adjusted to 400.
The zoom following curves given by different lens manufacturers differ; the predicted object distance and the corresponding zoom position can be determined according to actual conditions, and the focal length is determined and adjusted according to the zoom following curve.
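The curve lookup can be sketched as a table query (the table values below are hypothetical, loosely modelled on the 5 m / zoom 301 / focus 400 example; real curves come from the lens manufacturer):

```python
# Hypothetical zoom-follow data: for each object distance (metres), a
# mapping from zoom position to the focus position that keeps the image sharp.
ZOOM_FOLLOW = {
    5.0: {300: 398, 301: 400, 302: 402},
    10.0: {300: 420, 301: 422, 302: 424},
}

def focus_for(object_distance, zoom_position):
    """Pick the curve whose object distance is closest to the predicted
    distance, then read off the focus position for the fixed zoom position."""
    nearest = min(ZOOM_FOLLOW, key=lambda d: abs(d - object_distance))
    return ZOOM_FOLLOW[nearest][zoom_position]

print(focus_for(5.0, 301))  # 400, matching the example in the text
```

In practice one would interpolate between curves rather than snap to the nearest one; the nearest-curve choice here only keeps the sketch short.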
According to the embodiment of the invention, the actual object distance of the tracked object is obtained through radar ranging, the object distance of the next frame is predicted with a linear prediction model from the object distance differences of every two adjacent frames among the current frame and the preset number of image frames before it, and the focal length of the image acquisition equipment is adjusted according to the predicted object distance of the next frame. The algorithm is simple and involves no excessive computation, the object distance of the next frame can be predicted in advance, the focusing speed meets the real-time requirement, and a clear image throughout focusing becomes possible. Moreover, the object distances received in real time are continuously used for prediction, which further improves the focusing accuracy.
Example 2:
to implement object distance prediction, on the basis of the above embodiment, in an embodiment of the present invention, predicting an object distance of a next frame according to the object distance difference between every two adjacent frames, the currently stored parameter values corresponding to each object distance difference, and the linear prediction models corresponding to the parameter values includes:
determining the object distance difference between the next frame and the current frame according to the object distance difference between every two adjacent frames, each parameter value corresponding to each object distance difference stored currently and the linear prediction model corresponding to each parameter value;
and determining the object distance of the next frame according to the object distance of the current frame and the object distance difference between the next frame and the current frame.
The linear prediction model is a multiple linear regression equation, which can calculate the object distance difference of the next frame before the object distance of the tracked object actually changes in the next frame.
The linear prediction model stored in the embodiment of the present invention includes a plurality of parameters, and currently stores a parameter value of each parameter, and the correspondence between the linear prediction model and the parameters may be represented by the following formula:
y = x0*w0 + x1*w1 + …… + xn*wn + b
where wi is the parameter corresponding to the i-th object distance difference, xi is each object distance difference, b is a further parameter, and y is the predicted object distance difference. Specifically, w0 is the parameter corresponding to the object distance difference between the current frame and the previous frame, w1 is the parameter corresponding to the object distance difference between the previous frame and the frame before it, and so on for the remaining parameters.
The above linear prediction model can also be simplified as:
Y = W^T * X + b.
the object distance difference of every two adjacent frames is the difference between the object distance of the next image frame and the object distance of the previous image frame in the two adjacent image frames. When the object distance is predicted, the object distances of all frames before the current frame may not be needed due to the adopted linear prediction model, so that only the object distance difference between every two adjacent frames in the image frames of the set number before the current frame can be obtained according to the adopted linear prediction model.
The object distance differences of every two adjacent frames within the set number of image frames before the current frame are substituted, together with the currently stored parameter value corresponding to each object distance difference, into the linear prediction model, so as to solve for the object distance difference between the next frame and the current frame.
And adding the object distance of the current frame and the solved object distance difference between the next frame and the current frame to obtain the predicted object distance of the next frame.
For example, let the current frame be the i-th frame and let Di be its actual object distance; assume Di = 10 and the preset number is 4. The stored object distance differences between every two adjacent frames among the current frame and the 4 frames before it are acquired as Xi, i.e. the 4 most recent consecutive object distance changes of the object. Assuming W = {1, 0, 0, 0}, the parameter values corresponding to each object distance difference are substituted into the linear prediction model, which yields the object distance difference between the next frame and the current frame, yi = 2. Adding Di and yi gives the predicted object distance of the next frame, Dj = 12.
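The worked example can be checked in code (a sketch; b = 0 and a most recent object-distance difference of 2 are assumptions made so that the arithmetic matches yi = 2 and Dj = 12):

```python
# Worked example: D_i = 10, preset number 4, W = {1, 0, 0, 0}.
W = [1, 0, 0, 0]
X_i = [2, 0, 0, 0]   # only the first difference matters: the other weights are 0
y_i = sum(w * x for w, x in zip(W, X_i))  # predicted difference, next vs current frame
D_j = 10 + y_i                            # predicted object distance of the next frame
print(y_i, D_j)  # 2 12
```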
It should be noted that, the parameter values in the embodiments of the present invention are merely examples, and are only for convenience of explaining a specific implementation process, different parameter values may be set according to different actual situations in a specific operation, and the specific parameter values are not limited to the methods provided in the embodiments of the present invention.
In the embodiment of the invention, the object distance of the next frame can be calculated from the object distance of the current frame, the object distance differences of every two adjacent frames among the preset number of image frames before it, and the currently stored linear prediction model. The algorithm is simple and avoids excessive computation, the object distance of the next frame can be predicted in advance, the focusing speed meets the real-time requirement, and the continuity of tracking and focusing is improved.
Example 3:
according to the above description of the embodiments, it can be known that, in the embodiments of the present invention, the linear prediction model is used to predict the object distance, the parameter values of the parameters in the linear prediction model that are currently stored may be preset initial values or updated values, and in order to predict the object distance more accurately, the parameter values of the parameters in the linear prediction model may be periodically updated in the embodiments of the present invention. On the basis of the foregoing embodiment, in an embodiment of the present invention, after predicting the object distance of the next frame, the method further includes:
updating the prediction times stored in the current iteration period;
judging whether the updated prediction times are larger than a preset number threshold value or not;
if yes, updating the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model.
Specifically, in order to further improve the accuracy of focusing, an iteration cycle is preset, a parameter value corresponding to each parameter of the linear prediction model is fixed in one iteration cycle, a quantity threshold corresponding to the prediction times is preset for each iteration cycle, and when the number of times of predicting the object distance reaches the preset quantity threshold, it is indicated that the parameter value of each parameter of the linear prediction model can be updated.
The number threshold set for each iteration cycle may be the same or different; for convenience of calculation and to reduce configuration workload, the same threshold is used for each cycle, for example 25 or 30. To enable the parameter values to be updated, the number threshold is larger than the number of parameters included in the linear prediction model.
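The per-cycle counting described above can be sketched as a small scheduler (the class and method names are hypothetical):

```python
class UpdateScheduler:
    """Counts predictions within an iteration cycle; once the count
    exceeds the preset number threshold, signals that the model
    parameters should be refit and starts a new cycle."""

    def __init__(self, threshold=25):
        self.threshold = threshold
        self.count = 0

    def record_prediction(self):
        self.count += 1
        if self.count > self.threshold:
            self.count = 0   # start the next iteration cycle
            return True      # time to update the parameter values
        return False
```

For instance, with a threshold of 25, the 26th prediction of a cycle triggers an update and resets the counter.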
In order to further ensure more accurate object distance prediction, the updating of the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model includes:
calculating the object distance difference corresponding to each two adjacent image frames;
and updating each currently stored parameter value corresponding to the linear prediction model by a least square method or a gradient descent method, according to the object distance differences adopted when predicting the object distance of each image frame, the difference between the actual object distance of each image frame and the actual object distance of the corresponding previous frame, and the linear prediction model.
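A sketch of the parameter update using gradient descent on the residual sum of squares (the embodiment allows either least squares or gradient descent; the function name, learning rate and epoch count are illustrative assumptions):

```python
def fit_parameters(X, Y, lr=0.05, epochs=5000):
    """Refit the weights w and bias b of the linear model y = w.x + b by
    gradient descent on the residual sum of squares.

    X: one row of object-distance differences per prediction made in the cycle;
    Y: the actually observed object-distance difference for each of those frames."""
    n, m = len(X[0]), len(X)
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * n, 0.0
        for xi, yi in zip(X, Y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j in range(n):
                grad_w[j] += 2.0 * err * xi[j] / m
            grad_b += 2.0 * err / m
        w = [wj - lr * gj for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b
    return w, b
```

On data generated by a known linear rule, the fit recovers the weights and bias to good accuracy, which is exactly the minimization of S(W) described below.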
The least squares method, also known as the method of least squares, finds the best functional match for the data by minimizing the sum of the squares of the errors. Unknown data can easily be obtained by the least squares method, and the sum of squared errors between the obtained data and the actual data is minimized.
In the embodiment of the present invention, a least square method may be used to find the parameter values of each parameter in the corresponding linear prediction model that can minimize the sum of squares of the error between the predicted object distance and the actual object distance through a residual sum of squares function, and then the linear prediction model may be expressed as:
yi = x0*w0 + x1*w1 + … + xn*wn + b
the meaning of the linear prediction model is explained in the above embodiments, and will not be described herein.
The residual sum of squares function is:

S(w) = Σ_{i=1}^{m} (Yi − yi)²

When this residual sum-of-squares function is employed in embodiments of the present invention, Yi is the difference between the actual object distance of the ith frame measured by the radar in the current iteration period and the actual object distance of the corresponding previous frame, yi is the object distance difference predicted by the linear prediction model, and m is the number of times the object distance is predicted in the current iteration period; the value of m equals the quantity threshold set for the current iteration period.

Substituting the linear prediction model for yi into the residual sum of squares function gives:

S(w) = Σ_{i=1}^{m} (Yi − (x0*w0 + x1*w1 + … + xn*wn + b))²
in order to make the predicted object distance closer to the actual object distance, i.e. to find the parameter values of the parameters of the linear prediction model that minimize the sum of the squares of the residuals, which is at least 0, the parameter values of the parameters of the linear prediction model that can make the sum of the squares of the residuals s (w) at 0 are solved.
Assuming the quantity threshold of the current iteration cycle is m, after one iteration cycle m predicted object distances and the actual object distances corresponding to the predicted image frames are obtained.
After the object distance difference between each image frame predicted in the current iteration period and the actual object distance of the corresponding previous frame, together with the corresponding object distance differences used when predicting, are substituted into the residual square sum function, the parameter values of the parameters of the linear prediction model are determined by the least squares calculation method, thereby updating the parameter values.
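The least-squares update can be sketched as follows, assuming numpy is available; the names X, Y, and `update_parameters_lstsq` are illustrative (X holds the m groups of adjacent-frame object distance differences used for each prediction, Y the actual inter-frame differences measured by the radar):

```python
import numpy as np

def update_parameters_lstsq(X, Y):
    """Closed-form minimisation of S(w) = sum_i (Y_i - (X_i . w + b))^2."""
    m = X.shape[0]
    A = np.hstack([X, np.ones((m, 1))])   # append a column of 1s for the bias b
    sol, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return sol[:-1], sol[-1]              # (w, b)
```

With m larger than the number of parameters, as required above, the system is overdetermined and the returned solution minimizes the residual sum of squares.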
In the embodiment of the present invention, a gradient descent method may be further used to find the parameter values of the parameters in the corresponding linear prediction model that can minimize the sum of squares of the error between the predicted object distance and the actual object distance through a residual sum of squares function, so that the linear prediction model may be represented as:
yi = x0*w0 + x1*w1 + … + xn*wn + b

The residual sum of squares function is:

S(w) = Σ_{i=1}^{m} (Yi − yi)²

The meaning of the linear prediction model and the residual sum-of-squares function has been explained in the above embodiments and is not repeated here.

Substituting the linear prediction model for yi into the residual sum of squares function gives:

S(w) = Σ_{i=1}^{m} (Yi − (x0*w0 + x1*w1 + … + xn*wn + b))²
Substitute into the residual sum of squares function the object distance difference between each image frame predicted in the current iteration period and the actual object distance of the corresponding previous frame, together with the corresponding object distance differences used when predicting. Treating each parameter in the residual sum of squares function as an unknown, compute the gradient; then, starting from the currently stored parameter values of the linear prediction model, iteratively change each parameter by the product of a preset step value and the gradient on the basis of the gradient descent method, and take the parameter values at the point where the residual sum of squares essentially no longer changes, thereby updating the parameter values.
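A corresponding gradient-descent sketch; the step size, tolerance, and iteration cap are assumed illustrative choices, not values from the embodiment:

```python
import numpy as np

def update_parameters_gd(X, Y, w, b, step=0.01, tol=1e-9, max_iter=10000):
    """Iteratively move (w, b) against the gradient of the residual sum of
    squares until S(w) essentially stops changing."""
    for _ in range(max_iter):
        resid = X @ w + b - Y
        s = float(resid @ resid)            # current S(w)
        w = w - step * 2.0 * X.T @ resid    # dS/dw = 2 X^T r
        b = b - step * 2.0 * resid.sum()    # dS/db = 2 sum(r)
        new_resid = X @ w + b - Y
        if abs(s - float(new_resid @ new_resid)) < tol:
            break                           # S(w) essentially unchanged
    return w, b
```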
In the embodiment of the invention, updating the parameter values is achieved by optimizing the linear prediction model in each iteration period. Since the linear prediction model is corrected and improved to a certain extent, the object distance predicted with the updated model in the next iteration period is closer to the actual value, further improving the focusing accuracy and keeping the image clear throughout the tracking process.
Example 4:
to further ensure the accuracy of focusing, on the basis of the above embodiments, in an embodiment of the present invention, after predicting the object distance of the next frame, before adjusting the focal length of the image capturing device according to the predicted object distance of the next frame, the method includes:
judging whether the difference value of the focal length corresponding to the predicted object distance and the focal length corresponding to the current frame is within the focal depth range corresponding to the current frame;
if yes, adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
If the difference value is not in the focus depth range corresponding to the current frame, the method further comprises:
determining the range of the adjusted focal length according to the focal length corresponding to the current frame and the focal depth range;
and adjusting the focal length of the image acquisition equipment according to any value in the range.
When the focal length moves within this range, the user essentially cannot perceive any change in image definition; this range of movement is the focal depth range.
According to the focal depth range formula:

δ = 2λ(f/D)²

where f is the focal length, D is the aperture size, and λ is the spectral wavelength.
Calculate the focal depth range corresponding to the current frame; through the zoom-following curve, find in the object distance curve the focus position values, i.e. the focal length values, corresponding to the predicted object distance and to the object distance of the current frame; calculate the difference between the focal length corresponding to the predicted object distance and the focal length corresponding to the current frame; judge whether the difference is within the focal depth range corresponding to the current frame; and adjust the focal length according to the judgment result.
If the difference between the focal length corresponding to the predicted object distance and the focal length corresponding to the current frame is within the focal depth range corresponding to the current frame, the predicted object distance of the next frame is reliable and its focal length does not deviate too much from that of the current frame; the focal length value corresponding to the predicted object distance can then be found in the object distance curve and the lens focused accordingly.
If the difference is not within the focal depth range corresponding to the current frame, the predicted object distance of the next frame is unreliable: its focal length deviates too much from that of the current frame. In that case the range within which the focal length may be adjusted is determined in the object distance curve from the focal length and focal depth range of the current frame. Within this range the focal length corresponding to the current frame can be adjusted up or down by at most the absolute value of the focal depth range, i.e. by no more than ±δ, and the focal length of the image acquisition device is adjusted to any value in the range.
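The gate described above can be sketched as follows, assuming the common depth-of-focus formula δ = 2λ(f/D)² and using ±δ/2 as the fallback adjustment; the function name and the fallback choice are illustrative, not specified by the embodiment:

```python
def gate_focus(pred_focus, cur_focus, f, D, lam):
    """Use the predicted focus only if it lies within the current frame's
    focal depth range; otherwise move only within +/- delta of it."""
    delta = 2.0 * lam * (f / D) ** 2
    diff = pred_focus - cur_focus
    if abs(diff) <= delta:                # prediction reliable: switch to it
        return pred_focus
    # prediction unreliable: adjust within the focal depth range, here delta/2
    return cur_focus + (delta / 2.0 if diff > 0 else -delta / 2.0)
```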
Therefore, in the embodiment of the invention, whether the focal length is adjusted according to the predicted object distance is gated by the focal depth range, which further improves the accuracy of focal length adjustment to a certain extent and keeps the whole focusing effect clear.
Fig. 3 shows a specific tracking focusing method flow, which includes:
s301: and receiving the object distance between the image acquisition equipment and the measured object, which is returned by the radar and measured for the current frame.
S302: and calculating the difference between the object distances measured in the current frame and the previous frame, and updating the difference into the stored object distance difference.
S303: determining the object distance difference of every two adjacent frames in the current frame and a preset number of image frames before the current frame.
S304: determining the object distance difference between the next frame and the current frame according to the object distance difference between every two adjacent frames, the parameter values corresponding to each object distance difference stored currently and the linear prediction models corresponding to the parameter values.
S305: and determining the object distance of the next frame according to the object distance of the current frame and the object distance difference between the next frame and the current frame.
S309: and judging whether the difference value of the focal length corresponding to the predicted object distance and the focal length corresponding to the current frame is in the focal depth range corresponding to the current frame, if so, performing S310, and if not, performing S311.
S310: and finding out a focal length value corresponding to the predicted object distance in the object distance curve according to the predicted object distance of the next frame, and focusing the lens.
S311: determining the range of the adjusted focal length in the object distance curve according to the focal length and the focal depth range corresponding to the current frame, and then executing S312.
S312: and adjusting the focal length of the image acquisition equipment according to any value in the range.
S306: and updating the prediction times stored in the current iteration period.
S307: and judging whether the updated prediction times are larger than a preset number threshold, if so, executing the step S308, otherwise, ending the process.
S308: and updating the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration period and the linear prediction model.
The above-described steps S306 to S308 may be performed at any time after the step S305.
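The prediction part of the flow (S301–S305) can be sketched as follows; the helper name and the deque-based storage of differences are assumptions for illustration:

```python
from collections import deque

def track_focus_step(dist, prev_dist, diffs, w, b, n=4):
    """S302-S305: update the stored differences with the newest one and
    predict the object distance of the next frame."""
    diffs.appendleft(dist - prev_dist)    # S302: newest difference first
    while len(diffs) > n:                 # S303: keep only the last n differences
        diffs.pop()
    # S304: predicted difference between the next frame and the current frame
    y = sum(x * wi for x, wi in zip(diffs, w)) + b
    return dist + y                       # S305: predicted next object distance
```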
The following describes the object distance tracking and focusing process provided by the embodiment of the present invention in detail through specific embodiments.
Initializing the parameter values of each parameter W of the linear prediction model. For convenience of flow explanation, it is assumed here that the preset number n is 4, the quantity threshold of an iteration cycle is m, and the parameter values after initialization are W = {1, 0, 0, 0}.
First, let Di be the object distance of the current frame measured by the radar; assume Di = 10. Calculate the object distance difference xi between Di and the object distance measured by the radar in the previous frame; assume xi = 2. Let Xi denote the object distance differences of every two adjacent frames among the current frame and the 4 frames before it, and update xi into Xi; suppose Xi = {2, 2, 1, -1}. The object distance change of the next state calculated through the linear model is then yi = 2. In the initial stage, the assignment of W is largely random and the object distance obtained from the linear prediction model is unreliable, so the focus is switched only when the difference between the focal length corresponding to the predicted object distance and that corresponding to the current object distance lies within the focal depth range; otherwise only a focus position within the focal depth range is issued. Assume the current focal depth range is δ. If the focus value corresponding to the predicted object distance Di + yi meets the switching condition (within the focal depth range), switch; otherwise determine, in the object distance curve, the range within which the focal length may be adjusted according to the focal length of the current frame and the focal depth range. This range allows the focal length corresponding to the current frame to be adjusted up or down by at most the absolute value of the focal depth range, i.e. by no more than ±δ; adjust the focal length of the image acquisition device to any value in this range. Assuming the chosen adjustment is ±δ/2, the focal length is adjusted to the value corresponding to the current frame's focal length ±δ/2.
Secondly, obtain the object distance Dj measured by the radar in the next frame; assume Dj = 13. Compute the difference Yi between Dj and Di, i.e. Yi = 3, and update the difference into the stored set: Xj = {Yi, Xi2, Xi3, Xi4}.
Thirdly, repeat the first and second steps m times according to the quantity threshold m set for the iteration cycle to obtain a data set of length m, containing the m predicted object distances of the current iteration cycle, the m actual object distances corresponding to the predicted image frames, and the m groups of object distance differences used when predicting. Solve for the optimal parameters W and b by gradient descent or least squares, and enter the next cycle.
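The numbers in the worked example above check out as follows (all values taken from the assumptions stated in the example):

```python
# Worked example: W = {1, 0, 0, 0}, b = 0, Xi = {2, 2, 1, -1}, Di = 10, Dj = 13
W, b = [1, 0, 0, 0], 0
Xi = [2, 2, 1, -1]
yi = sum(x * w for x, w in zip(Xi, W)) + b   # predicted object distance change
Di, Dj = 10, 13
pred_next = Di + yi                          # predicted object distance of next frame
Yi = Dj - Di                                 # actual difference measured next frame
Xj = [Yi] + Xi[:3]                           # updated difference set for next step
```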
It should be noted that, each parameter in the foregoing embodiments of the invention is only an example, and is only for convenience of explaining a specific implementation process, different parameters may be set according to different actual situations in a specific operation, and the specific parameters are not limited to the method provided in the embodiments of the invention.
Example 5:
fig. 4 is a schematic structural diagram of a device for tracking and focusing according to an embodiment of the present invention, where the device for tracking and focusing according to the embodiment of the present invention includes:
the statistical data module 401 is configured to receive an object distance, measured for a current frame, from an image acquisition device and a measured object, where the object distance is returned by a radar, and determine an object distance difference between each two adjacent frames in the current frame and a preset number of image frames before the current frame;
a predicting module 402, configured to predict an object distance of a next frame according to the object distance difference between every two adjacent frames, each currently stored parameter value corresponding to each object distance difference, and a linear prediction model corresponding to each parameter value;
a focusing module 403, configured to adjust a focal length of the image capturing device according to the predicted object distance of the next frame.
The apparatus further includes:
an updating module 404, configured to update the prediction times stored in the current iteration cycle after the prediction module predicts the object distance of the next frame, determine whether the updated prediction times is greater than a preset number threshold, and if so, update each currently stored parameter value according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model.
Further, the updating module 404 is specifically configured to calculate an object distance difference corresponding to each two adjacent image frames; and updating each parameter value corresponding to the linear prediction model stored currently by adopting a least square method or a gradient descent method according to the object distance difference adopted when predicting the object distance of each image frame, the object distance difference between the predicted actual object distance of each image frame and the actual object distance of the corresponding previous frame and the linear prediction model.
Specifically, the focusing module 403 is further configured to determine whether a difference between a focal length corresponding to the predicted object distance and a focal length corresponding to the current frame is within a focal depth range corresponding to the current frame; if yes, adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
Correspondingly, the focusing module 403 is further configured to determine a range of the adjusted focal length according to the focal length corresponding to the current frame and the focal depth range if the difference is not within the focal depth range corresponding to the current frame; and adjusting the focal length of the image acquisition equipment according to any value in the range.
Preferably, the predicting module 402 is specifically configured to determine an object distance difference between the next frame and the current frame according to the object distance difference between each two adjacent frames, each currently stored parameter value corresponding to each object distance difference, and a linear prediction model corresponding to each parameter value; and determining the object distance of the next frame according to the object distance of the current frame and the object distance difference between the next frame and the current frame.
According to the embodiment of the invention, the actual object distance of the tracked object is obtained through radar ranging, the object distance difference of the next frame is predicted with a linear prediction model from the object distance differences of every two adjacent frames among the current frame and a preset number of image frames before it, and the focal length of the image acquisition device is adjusted using the predicted object distance of the next frame. The tracking and focusing method provided by the embodiment of the invention can calculate the object distance of the next frame using only these object distance differences, the currently stored linear prediction model and the object distance of the current frame. The algorithm is simple and involves no excessive computation, the object distance of the next frame can be predicted in advance, the focusing speed meets the real-time requirement, and the focusing effect remains clear throughout. Furthermore, by continuously receiving object distances in real time and using the object distance differences for prediction, the focusing accuracy is further improved.
Example 6:
as shown in fig. 5, a schematic structural diagram of an electronic device according to an embodiment of the present invention is further provided, and on the basis of the foregoing embodiments, an electronic device according to an embodiment of the present invention further includes a processor 51 and a memory 52;
the processor 51 is adapted to carry out the steps of the above-described method of tracking focus when executing a computer program stored in the memory 52.
Alternatively, the processor 51 may be a CPU (central processing unit), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a CPLD (Complex Programmable Logic Device).
The processor 51 is configured to execute the following steps when executing the computer program stored in the memory 52:
receiving the object distance between image acquisition equipment and a measured object measured by a radar for a current frame, and determining the object distance difference between the current frame and every two adjacent frames in a preset number of image frames before the current frame;
predicting the object distance of the next frame according to the object distance difference of every two adjacent frames, each parameter value corresponding to each object distance difference stored currently and the linear prediction model corresponding to each parameter value;
and adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
After predicting the object distance of the next frame, the method further comprises:
updating the prediction times stored in the current iteration period;
judging whether the updated prediction times are larger than a preset number threshold value or not;
if yes, updating the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model.
Further, the updating of the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model includes:
calculating the object distance difference corresponding to each two adjacent image frames;
and updating each parameter value corresponding to the linear prediction model stored currently by adopting a least square method or a gradient descent method according to the object distance difference adopted when predicting the object distance of each image frame, the object distance difference between the predicted actual object distance of each image frame and the actual object distance of the corresponding previous frame and the linear prediction model.
Specifically, after predicting the object distance of the next frame and before adjusting the focal length of the image capturing device according to the predicted object distance of the next frame, the method includes:
judging whether the difference value of the focal length corresponding to the predicted object distance and the focal length corresponding to the current frame is within the focal depth range corresponding to the current frame;
if yes, adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
Correspondingly, if the difference is not in the depth-of-focus range corresponding to the current frame, the method further includes:
determining the range of the adjusted focal length according to the focal length corresponding to the current frame and the focal depth range;
and adjusting the focal length of the image acquisition equipment according to any value in the range.
In addition, the predicting the object distance of the next frame according to the object distance difference between every two adjacent frames, the currently stored parameter values corresponding to each object distance difference and the linear prediction models corresponding to the parameter values includes:
determining the object distance difference between the next frame and the current frame according to the object distance difference between every two adjacent frames, each parameter value corresponding to each object distance difference stored currently and the linear prediction model corresponding to each parameter value;
and determining the object distance of the next frame according to the object distance of the current frame and the object distance difference between the next frame and the current frame.
The processor in the electronic device provided by the embodiment of the invention obtains the actual object distance of the tracked object through radar ranging according to the computer program stored in the readable storage medium, predicts the object distance difference of the next frame by using a linear prediction model according to the object distance difference of the current frame and every two adjacent frames in the preset number of image frames before the current frame, and adjusts the focal length of the image acquisition device by combining the predicted object distance of the next frame. And the object distance received in real time is continuously used for prediction, so that the focusing accuracy is further improved.
Example 7:
on the basis of the foregoing embodiments, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program executable by an electronic device is stored, and when the program is run on the electronic device, the electronic device is caused to execute the following steps:
receiving the object distance between image acquisition equipment and a measured object measured by a radar for a current frame, and determining the object distance difference between the current frame and every two adjacent frames in a preset number of image frames before the current frame;
predicting the object distance of the next frame according to the object distance difference of every two adjacent frames, each parameter value corresponding to each object distance difference stored currently and the linear prediction model corresponding to each parameter value;
and adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
After predicting the object distance of the next frame, the method further comprises:
updating the prediction times stored in the current iteration period;
judging whether the updated prediction times are larger than a preset number threshold value or not;
if yes, updating the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model.
Further, the updating of the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model includes:
calculating the object distance difference corresponding to each two adjacent image frames;
and updating each parameter value corresponding to the linear prediction model stored currently by adopting a least square method or a gradient descent method according to the object distance difference adopted when predicting the object distance of each image frame, the object distance difference between the predicted actual object distance of each image frame and the actual object distance of the corresponding previous frame and the linear prediction model.
Specifically, after predicting the object distance of the next frame and before adjusting the focal length of the image capturing device according to the predicted object distance of the next frame, the method includes:
judging whether the difference value of the focal length corresponding to the predicted object distance and the focal length corresponding to the current frame is within the focal depth range corresponding to the current frame;
if yes, adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
Correspondingly, if the difference is not in the depth-of-focus range corresponding to the current frame, the method further includes:
determining the range of the adjusted focal length according to the focal length corresponding to the current frame and the focal depth range;
and adjusting the focal length of the image acquisition equipment according to any value in the range.
In addition, the predicting the object distance of the next frame according to the object distance difference between every two adjacent frames, the currently stored parameter values corresponding to each object distance difference and the linear prediction models corresponding to the parameter values includes:
determining the object distance difference between the next frame and the current frame according to the object distance difference between every two adjacent frames, each parameter value corresponding to each object distance difference stored currently and the linear prediction model corresponding to each parameter value;
and determining the object distance of the next frame according to the object distance of the current frame and the object distance difference between the next frame and the current frame.
The computer readable storage medium may be any available medium or data storage device that can be accessed by a processor in an electronic device, including but not limited to magnetic memory such as floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc., optical memory such as CDs, DVDs, BDs, HVDs, etc., and semiconductor memory such as ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs), etc.
The computer program stored in the readable storage medium provided by the embodiment of the invention is executed by the processor, the processor obtains the actual object distance of the tracked object through radar ranging, predicts the object distance difference of the next frame by using a linear prediction model according to the object distance difference of the current frame and every two adjacent frames in a preset number of image frames before the current frame, and adjusts the focal length of the image acquisition equipment by combining the predicted object distance of the next frame. And the object distance received in real time is continuously used for prediction, so that the focusing accuracy is further improved.
For the system/apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (12)
1. A method of tracking focus, applied to an image pickup apparatus mounted with a radar, comprising:
receiving the object distance, measured by the radar for the current frame, between the image acquisition device and a measured object, and determining the object distance difference of every two adjacent frames among the current frame and a preset number of image frames preceding the current frame;
predicting the object distance of the next frame according to the object distance difference of every two adjacent frames, each currently stored parameter value corresponding to each object distance difference and a linear prediction model containing each parameter value;
adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame;
wherein after predicting the object distance of the next frame, the method further comprises:
updating the prediction times stored in the current iteration period;
judging whether the updated prediction times are larger than a preset number threshold value or not;
if yes, updating the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model.
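The prediction step in claim 1 can be illustrated with a minimal sketch. This is a hypothetical interpretation, not the patented implementation: the function name, the weight values, and the assumption that the linear model is a weighted sum of adjacent-frame differences are all illustrative.

```python
# Hypothetical sketch of claim 1's prediction step: estimate the
# next-frame object distance from the differences between consecutive
# recent radar measurements, using one stored weight per difference.
def predict_next_object_distance(distances, weights):
    """distances: radar object distances for the preceding frames and the
    current frame, oldest first; weights: the currently stored parameter
    values of the linear prediction model, one per adjacent-frame
    difference."""
    # Object distance difference of every two adjacent frames.
    diffs = [b - a for a, b in zip(distances, distances[1:])]
    if len(diffs) != len(weights):
        raise ValueError("need one weight per adjacent-frame difference")
    # Linear model: predicted next difference is a weighted sum of the
    # observed differences.
    predicted_diff = sum(w * d for w, d in zip(weights, diffs))
    # Object distance of the next frame = current object distance plus
    # the predicted difference (claim 5's decomposition).
    return distances[-1] + predicted_diff
```

For an object approaching at a steady 0.5 m per frame with equal weights of 0.5, the sketch extrapolates the trend one frame ahead, which is what lets the focal length be adjusted before the frame arrives.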
2. The method of claim 1, wherein the updating the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model comprises:
calculating the object distance difference corresponding to each two adjacent image frames;
and updating each parameter value of the currently stored linear prediction model by a least square method or a gradient descent method, according to the object distance differences used when predicting the object distance of each image frame, the difference between the actual object distance of each image frame and the actual object distance of the corresponding previous frame, and the linear prediction model.
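The gradient-descent variant of claim 2's parameter update can be sketched as follows. This is a plain stochastic-gradient least-squares fit under assumed names and hyperparameters; the patent does not specify the learning rate, iteration count, or loss.

```python
# Hypothetical sketch of claim 2's update: fit the model weights so the
# weighted sum of the input differences matches the difference actually
# observed between each frame and its previous frame.
def update_weights(weights, samples, lr=0.01, epochs=100):
    """samples: (diffs, target_diff) pairs gathered over the iteration
    cycle, where diffs are the adjacent-frame differences used for a
    prediction and target_diff is the actually observed difference
    (actual object distance minus the previous frame's actual distance)."""
    w = list(weights)
    for _ in range(epochs):
        for diffs, target in samples:
            pred = sum(wi * di for wi, di in zip(w, diffs))
            err = pred - target
            # Squared-error gradient step for each weight.
            w = [wi - lr * err * di for wi, di in zip(w, diffs)]
    return w
```

Because the loss is convex in the weights, either this gradient descent or a closed-form least-squares solve reaches the same fit; the claim allows both.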
3. The method of claim 1, wherein after predicting the next frame object distance and before adjusting the focal length of the image acquisition device according to the predicted next frame object distance, the method further comprises:
judging whether the difference value of the focal length corresponding to the predicted object distance and the focal length corresponding to the current frame is within the focal depth range corresponding to the current frame;
if yes, adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
4. The method of claim 3, wherein if the difference is not within the depth of focus range corresponding to the current frame, the method further comprises:
determining the range of the adjusted focal length according to the focal length corresponding to the current frame and the focal depth range;
and adjusting the focal length of the image acquisition equipment according to any value in the range.
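The gating in claims 3 and 4 can be sketched as a clamp on the focus move. This is an assumed reading: the function name, the symmetric half-depth interpretation of the depth of focus, and clamping to the range edge (one of the "any value in the range" choices claim 4 permits) are all illustrative.

```python
# Hypothetical sketch of claims 3-4: adjust directly to the predicted
# focal length while it stays within the current depth of focus, and
# otherwise clamp the move to the edge of the allowed range.
def next_focus_position(current_focus, predicted_focus, depth_of_focus):
    half = depth_of_focus / 2.0
    if abs(predicted_focus - current_focus) <= half:
        # Within the depth of focus: follow the prediction (claim 3).
        return predicted_focus
    # Outside: pick a value from the range determined by the current
    # focal length and the depth of focus (claim 4); here, the edge
    # nearest the prediction, so the image stays acceptably sharp.
    lo, hi = current_focus - half, current_focus + half
    return max(lo, min(hi, predicted_focus))
```

Clamping rather than jumping keeps each per-frame adjustment inside the depth of focus, which is what prevents visible defocus while the lens catches up to a fast-moving target.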
5. The method of claim 1, wherein predicting the object distance of the next frame according to the object distance difference between every two adjacent frames, the currently stored parameter values corresponding to each object distance difference, and the linear prediction model containing the parameter values comprises:
determining the object distance difference between the next frame and the current frame according to the object distance difference between every two adjacent frames, each parameter value corresponding to each object distance difference stored currently and a linear prediction model containing each parameter value;
and determining the object distance of the next frame according to the object distance of the current frame and the object distance difference between the next frame and the current frame.
6. An apparatus for tracking focus, comprising:
the statistical data module is used for receiving the object distance, measured by the radar for the current frame, between the image acquisition device and the measured object, and determining the object distance difference of every two adjacent frames among the current frame and a preset number of image frames preceding the current frame;
the prediction module is used for predicting the object distance of the next frame according to the object distance difference of each two adjacent frames, each currently stored parameter value corresponding to each object distance difference and a linear prediction model containing each parameter value;
the focusing module is used for adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame;
wherein the apparatus further comprises:
and the updating module is used for updating the prediction times stored in the current iteration cycle after the prediction module predicts the object distance of the next frame, judging whether the updated prediction times are larger than a preset number threshold value, and if so, updating the currently stored parameter values according to the object distance corresponding to each image frame acquired in the current iteration cycle and the linear prediction model.
7. The apparatus according to claim 6, wherein the updating module is specifically configured to calculate the object distance difference corresponding to each two adjacent image frames, and to update each parameter value of the currently stored linear prediction model by a least square method or a gradient descent method, according to the object distance differences used when predicting the object distance of each image frame, the difference between the actual object distance of each image frame and the actual object distance of the corresponding previous frame, and the linear prediction model.
8. The apparatus of claim 6, wherein the focusing module is further configured to determine whether a difference between a focal length corresponding to the predicted object distance and a focal length corresponding to the current frame is within a focal depth range corresponding to the current frame; if yes, adjusting the focal length of the image acquisition equipment according to the predicted object distance of the next frame.
9. The apparatus of claim 8, wherein the focusing module is further configured to determine a range of the adjusted focal length according to the focal length corresponding to the current frame and the focal depth range if the difference is not within the focal depth range corresponding to the current frame; and adjusting the focal length of the image acquisition equipment according to any value in the range.
10. The apparatus according to claim 6, wherein the prediction module is specifically configured to determine the object distance difference between the next frame and the current frame according to the object distance difference between each two adjacent frames, the parameter values corresponding to each object distance difference currently stored, and a linear prediction model including the parameter values; and determining the object distance of the next frame according to the object distance of the current frame and the object distance difference between the next frame and the current frame.
11. An electronic device, characterized in that the electronic device comprises a processor for implementing the steps of the method according to any of claims 1-5 when executing a computer program stored in a memory.
12. A computer-readable storage medium, characterized in that it stores a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910905486.6A CN110572577B (en) | 2019-09-24 | 2019-09-24 | Method, device, equipment and medium for tracking and focusing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110572577A CN110572577A (en) | 2019-12-13 |
CN110572577B true CN110572577B (en) | 2021-04-16 |
Family
ID=68782344
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910905486.6A Active CN110572577B (en) | 2019-09-24 | 2019-09-24 | Method, device, equipment and medium for tracking and focusing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110572577B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112135055B (en) * | 2020-09-27 | 2022-03-15 | 苏州科达科技股份有限公司 | Zoom tracking method, device, equipment and storage medium |
CN112235509B (en) * | 2020-10-15 | 2022-04-01 | 北京小米移动软件有限公司 | Focal length adjusting method and device, mobile terminal and storage medium |
CN113905173B (en) * | 2021-08-30 | 2023-04-07 | 浙江大华技术股份有限公司 | Focusing method, focusing apparatus, and computer-readable storage medium |
CN114187328B (en) * | 2022-02-15 | 2022-07-05 | 智道网联科技(北京)有限公司 | Object detection method and device and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006065355A (en) * | 2005-10-31 | 2006-03-09 | Fuji Photo Film Co Ltd | Apparatus and method for automatic focusing |
CN101470324A (en) * | 2007-09-03 | 2009-07-01 | 三星电子株式会社 | Auto-focusing apparatus and method for camera |
CN104079832A (en) * | 2014-06-30 | 2014-10-01 | 苏州科达科技股份有限公司 | Automatic tracking and focusing method and system for integrated camera |
CN104202518A (en) * | 2014-08-25 | 2014-12-10 | 深圳市菲特数码技术有限公司 | Zooming method and system of integrated video camera |
CN105827961A (en) * | 2016-03-22 | 2016-08-03 | 努比亚技术有限公司 | Mobile terminal and focusing method |
CN109005347A (en) * | 2018-08-14 | 2018-12-14 | 高新兴科技集团股份有限公司 | A kind of assisted focused method of overlength zoom lens |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100589021C (en) * | 2008-03-13 | 2010-02-10 | 北京中星微电子有限公司 | Automatic focusing method and image collecting device |
CN103458159A (en) * | 2012-05-31 | 2013-12-18 | 鸿富锦精密工业(深圳)有限公司 | Electronic device with distance measurement function and distance measurement method |
US9804265B2 (en) * | 2012-08-08 | 2017-10-31 | Mitsubushi Electric Corporation | Radar device |
CN104506776A (en) * | 2015-01-09 | 2015-04-08 | 成都新舟锐视科技有限公司 | Automatic focusing system for real-time balling machine tracking |
CN106454123B (en) * | 2016-11-25 | 2019-02-22 | 盐城丝凯文化传播有限公司 | A kind of method and mobile terminal of focusing of taking pictures |
CN106598253B (en) * | 2016-12-23 | 2019-12-10 | 北京搜狐新媒体信息技术有限公司 | Data prediction method and device |
CN108427698A (en) * | 2017-08-29 | 2018-08-21 | 平安科技(深圳)有限公司 | Updating device, method and the computer readable storage medium of prediction model |
JP2019114902A (en) * | 2017-12-22 | 2019-07-11 | ルネサスエレクトロニクス株式会社 | Semiconductor device, imaging system, and program |
CN108259703B (en) * | 2017-12-31 | 2021-06-01 | 深圳市越疆科技有限公司 | Pan-tilt and pan-tilt tracking control method and device and pan-tilt |
CN109887040B (en) * | 2019-02-18 | 2020-04-14 | 北京航空航天大学 | Moving target active sensing method and system for video monitoring |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110572577B (en) | Method, device, equipment and medium for tracking and focusing | |
US9811732B2 (en) | Systems and methods for object tracking | |
JP7042905B2 (en) | Methods and devices for generating inverse sensor models, as well as methods for detecting obstacles | |
US9646389B2 (en) | Systems and methods for image scanning | |
CN114092820A (en) | Target detection method and moving target tracking method applying same | |
KR102068419B1 (en) | Method, apparatus and computer readable medium for adjusting point cloud data collection trajectory | |
CN108898624B (en) | Moving object tracking method and device, electronic equipment and storage medium | |
CN110278383A (en) | Focus method, device and electronic equipment, storage medium | |
JP7150841B2 (en) | Computer-implemented method, computer program product and apparatus | |
CN105472250A (en) | Automatic focusing method and device | |
CN111986512B (en) | Target distance determination method and device | |
CN109895100B (en) | Navigation map generation method and device and robot | |
Filipiak et al. | NSGA-II based auto-calibration of automatic number plate recognition camera for vehicle speed measurement | |
US10268188B2 (en) | Active camera movement determination for object position and extent in three-dimensional space | |
CN112949519B (en) | Target detection method, device, equipment and storage medium | |
CN115063454B (en) | Multi-target tracking matching method, device, terminal and storage medium | |
CN111985300A (en) | Automatic driving dynamic target positioning method and device, electronic equipment and storage medium | |
CN111784730A (en) | Object tracking method and device, electronic equipment and storage medium | |
Mussone et al. | An innovative method for the analysis of vehicle movements in roundabouts based on image processing | |
CN106507102A (en) | A kind of lens correction method and device | |
CN106056586B (en) | A kind of sub-pixel positioning method and device | |
WO2022099526A1 (en) | Method for training lane change prediction regression model, and lane change predicton method and apparatus | |
CN114627174A (en) | Depth map generation system and method and autonomous mobile device | |
CN112464815A (en) | Video multi-target tracking method, device and equipment | |
CN112633352A (en) | Target detection method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||