CN111770271B - Automatic focusing method based on image processing - Google Patents

Automatic focusing method based on image processing


Publication number
CN111770271B
CN111770271B (application CN202010595340.9A)
Authority
CN
China
Prior art keywords
image
focusing
search
value
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010595340.9A
Other languages
Chinese (zh)
Other versions
CN111770271A (en)
Inventor
张喜林
孙治雷
郭金家
刘明
聂明照
翟滨
王利波
曹红
耿威
张现荣
徐翠玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Original Assignee
Ocean University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China
Priority to CN202010595340.9A
Publication of CN111770271A
Application granted
Publication of CN111770271B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals

Abstract

The invention discloses an automatic focusing method based on image processing. During the hill-climbing search, the sharpness evaluation values of every n position points are treated as a local curve segment to be "fitted", and a fine search is realized by judging whether the curve contains a slope peak larger than a set threshold: if it does, the small range containing the peak is re-divided into n points and searched again with a reduced step length; if the local curve contains no slope peak larger than the set threshold, the search continues to move n position points along the original direction and the operation is repeated until the optimal focusing position is found. The method uses an improved Robert gray function as the image sharpness evaluation function and embeds the idea of curve fitting into the local search of the traditional hill-climbing algorithm, so it inherits the simple reliability of the traditional hill-climbing algorithm while effectively escaping the interference of pseudo focal peaks, avoids the complex and time-consuming problem of becoming trapped in local peaks during the focusing search, and can be widely applied to various imaging systems such as cameras and microscopes.

Description

Automatic focusing method based on image processing
Technical Field
The invention belongs to the technical field of image processing and automatic focusing, and particularly relates to an automatic focusing method based on image processing.
Background
With the rapid development of intelligent and automated imaging devices, automatic focusing has become one of the key technologies in imaging systems and computer vision. Autofocus refers to the process of obtaining a sharp image on an image detector (e.g., a CMOS or CCD detector) by adjusting the position of a lens during image acquisition. By focusing principle, autofocus is classified into active focusing and passive focusing. Active focusing is mainly based on distance measurement: the distance between the target and the lens is measured, the accurately focused position is calculated from that distance, and the motion device is driven directly to that position to complete focusing. Passive focusing emits no energy or information toward the target; the focusing state is analyzed only from the received light and the formed image, and the motion device is controlled to adjust its position repeatedly according to a search strategy until the sharpest image is obtained. Autofocus based on image processing is currently a popular type of passive focusing.
With the development of digital image processing technology, more and more autofocus schemes adopt image processing. In image-processing-based automatic focusing, the system camera is aimed at the target scene, a stepping motor or other motion device is driven to control the focusing position, a sharpness value is calculated for the image acquired by the camera's image sensor, and the moving direction and step length of the motion device are adjusted in a feedback loop according to the image sharpness until the position of maximum sharpness, which is the accurately focused position, is found and focusing is finished. This approach is independent of any auxiliary equipment and is based entirely on digital image processing and analysis, which favors system integration and miniaturization and reduces system cost, so it has become increasingly popular in imaging applications such as cameras and microscopes.
Two key issues are involved in implementing autofocus based on image processing: the sharpness evaluation function and the focus search strategy. The sharpness evaluation function, also called the focusing evaluation function, is the numerical reflection of the system's defocused or in-focus state; the accuracy of this evaluation determines the precision of automatic focusing and is one of its key factors. The sharpness evaluation function has been studied extensively, usually by computing the sharpness (gradient) features of an image. The focus search strategy is equally important: how the focusing process is controlled, so that the most accurate focus is reached with faster and fewer step movements, determines the speed of autofocus. Compared with research on sharpness evaluation functions, many problems of the focusing search strategy remain poorly solved.
Many focusing search strategies have been proposed; among them the most widely used and most highly automated is the hill-climbing algorithm. The hill-climbing algorithm relies on the unimodal property of the sharpness evaluation function. Its basic principle is to locate the slope peak through the rise and fall of image sharpness at successive positions during focusing: starting from an initial position with a suitable fixed step length, when positions with significantly increasing sharpness evaluation values are encountered, the algorithm records that it is climbing; when the evaluation value begins to fall continuously, it judges that the peak has been passed, turns around, and climbs again from the opposite direction with a smaller step length, repeatedly locating the peak until the change in the maximum sharpness evaluation value is small enough, thereby reaching an optimal focusing position that meets the precision requirement. The hill-climbing algorithm is stable, highly automated, and can reach high search precision, so it is widely applied.
However, since target scenes are diverse and lighting conditions vary, the sharpness evaluation function curve is in practice rarely a monotonic, smooth single-peak curve; instead, multiple pseudo focal peaks and local oscillations appear on it (fig. 1 compares the sharpness evaluation function curve in the ideal state with an actual curve). The focusing search therefore easily falls into a local peak, making focusing too slow or even causing it to fail. For these problems many researchers have proposed improvements, such as curve-fitting search, Fibonacci search, and slope-based adaptive-step search, but these algorithms suffer from low stability, low adaptability, poor noise resistance, and other problems, and are difficult to apply widely. For example, in the slope-based adaptive-step search, the slope cannot be defined effectively from the sharpness evaluation values of only two adjacent points, so the relationship between slope and step length near a local peak has no strong correspondence and the search failure rate remains high. Curve-fitting search works well near the extremum but depends heavily on the data there; once an individual data point carries a large error, the overall fitting result is strongly affected and the extreme point shifts.
Therefore, how to implement a better focus search strategy to avoid trapping in local peak search, improve focusing accuracy and shorten focus search time is a problem to be solved.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an automatic focusing method based on image processing, and the focusing search strategy can effectively improve focusing accuracy, avoid trapping in local peak search and quickly find the optimal focusing position.
The invention is realized by adopting the following technical scheme: an auto-focusing method based on image processing, comprising the steps of:
Step A, setting a forward limit marker L1 and a reverse limit marker L2 of the system motion device and an initial large step length D for the focusing search, and setting a discrimination factor threshold Gm for the local curve;
Step B, driving the motion device forward, acquiring a digital image through the image sensor and calculating the sharpness evaluation value F(z) at each position point while advancing, moving n position points in total;
Step C, calculating the curve discrimination factor G of the local area where the n advancing position points are located;
Step D, judging whether G is smaller than the discrimination factor threshold Gm:
If G is not less than Gm, the range of the n position points necessarily contains the optimal focusing position; return to the point among the n position points immediately preceding the optimal focusing position point, reduce the step length to 2/n of the original, and repeatedly execute steps B to D, stopping when the movement step length reaches the focusing precision required by the system, at which point automatic focusing is finished;
if G is less than GmIf the optimal focusing position does not exist in the range, continuing to move 5 position points along the original direction, and re-executing the steps B to D for searching;
If the system cannot focus anywhere within the forward and reverse limits of the whole path, the best focusing position cannot be found at all; return to the initial position and end automatic focusing.
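The search procedure of steps A to D can be sketched in code. The sketch below is a simplified illustration under stated assumptions: `eval_at(z)` stands in for the whole move/capture/evaluate chain (which is hardware dependent), the discrimination factor is taken to be the standard deviation divided by the arithmetic mean of the n values, and the embodiment defaults n = 5 and Gm = 0.25 are used as placeholders.

```python
# Sketch of the fitting-embedded hill-climbing search (steps A to D).
# eval_at(z), the limit value, and the parameter defaults are illustrative
# assumptions; a real system evaluates sharpness from captured images.
import statistics

def autofocus(eval_at, start, d, g_m=0.25, n=5, precision=1.0, limit=100.0):
    """Return the best-focus position found, or None if the limit is hit."""
    pos, step = start, d
    while step >= precision:
        zs = [pos + step * (k + 1) for k in range(n)]    # n points ahead
        if zs[-1] > limit:
            return None                                  # forward limit hit: focus failed
        fs = [eval_at(z) for z in zs]                    # sharpness at each point
        g = statistics.pstdev(fs) / statistics.fmean(fs) # discrimination factor
        if g >= g_m:                                     # peak lies among these points
            best = max(range(n), key=lambda i: fs[i])
            pos = zs[max(best - 1, 0)]                   # back to point before the peak
            step *= 2 / n                                # shrink step to 2/n of old
        else:
            pos = zs[-1]                                 # keep moving the same way
    return pos
```

Only a single search direction is shown; the full embodiment adds a reverse pass between the limit markers L1 and L2 when the forward limit is reached without focusing.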
Further, in step B, the image sharpness evaluation value F(z) is calculated as follows:
(1) convert the digital image into a corresponding gray image, where f(i,j) is the gray value at point (i,j) in the gray image; the gradient value I(i,j) at point (i,j) is calculated by the Robert gray operator according to the following formula:
I(i,j) = |f(i,j) − f(i+1,j+1)| + |f(i+1,j) − f(i,j+1)|
(2) the sharper the image, the more obvious its edge detail and the larger the gradient change; the blurrier the image, the smaller the gradient change. To retain gradient-mutation points during focusing, ignore the gradient values of blurred image regions, and highlight only the gradient values of the relatively sharp parts, a threshold Ts is introduced: if I(i,j) > Ts, the value at that point is retained; otherwise I(i,j) is set to 0, so that the gradient values of locally blurred regions are ignored. Finally the gradient values of each point are accumulated to obtain the overall image sharpness evaluation value F(z):
F(z) = Σᵢ Σⱼ I(i,j)
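As a concrete illustration, the thresholded Robert evaluation above can be written as follows (NumPy and the vectorized layout are assumptions made for illustration; the threshold value Ts is system dependent and the default here is only a placeholder):

```python
# Sketch of the improved Robert sharpness evaluation F(z): the sum of
# thresholded Roberts-cross gradient magnitudes over the gray image.
import numpy as np

def sharpness_robert(gray: np.ndarray, ts: float = 5.0) -> float:
    g = gray.astype(np.float64)
    # I(i,j) = |f(i,j) - f(i+1,j+1)| + |f(i+1,j) - f(i,j+1)|
    grad = (np.abs(g[:-1, :-1] - g[1:, 1:]) +
            np.abs(g[1:, :-1] - g[:-1, 1:]))
    grad[grad <= ts] = 0.0    # drop gradient values from blurred regions
    return float(grad.sum())  # F(z): accumulate the remaining values
```

A perfectly flat image yields F(z) = 0, while a sharp edge contributes large gradient values that survive the threshold.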
Further, in step C, the curve discrimination factor is calculated as:
G = σ / F̄
where σ is the standard deviation of the sharpness evaluation values of the n position points and F̄ is the arithmetic mean of the sharpness evaluation values of the n position points.
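Interpreted this way, G is the coefficient of variation of the n evaluation values; a minimal sketch in plain Python (the population standard deviation is an assumption, since the text does not specify the estimator):

```python
# Discrimination factor G = sigma / mean over the n sharpness values.
import statistics

def discrimination_factor(f_values):
    return statistics.pstdev(f_values) / statistics.fmean(f_values)
```

A flat local curve gives G = 0, while a pronounced peak among the n points drives G above the threshold Gm.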
Further, the image region used in step B for calculating the image sharpness evaluation value is a region of interest in the image obtained by a specific windowing mode; windowing modes include, but are not limited to, central windowing, inverted-T windowing, and non-uniform-sampling windowing, and the region of interest may at most be the entire image.
Further, the value range of n in the step B is 5-7, so as to obtain a finer focusing search range.
Further, in the step a, the initial large step D is smaller than the width of the steep area of the system lens.
Compared with the prior art, the invention has the advantages and positive effects that:
the method is based on the improved Robert gray function as an image definition evaluation function, and the idea of a curve fitting method is embedded into the local search of the traditional hill climbing algorithm, so that the method not only inherits the simple reliability of the traditional hill climbing algorithm, but also has the advantages of simple calculation, stable performance, high automation degree and good universality; meanwhile, the interference of a pseudo focal peak can be effectively avoided, the problem of complexity and time consumption caused by trapping into a local peak value in the focusing search process is avoided, and the balance between the focusing efficiency and the accuracy is stabilized.
Drawings
FIG. 1 is a diagram comparing the sharpness evaluation function curve in the ideal state with an actual curve;
FIG. 2 is a schematic diagram illustrating a focus search strategy according to an embodiment of the present invention;
fig. 3 is a basic flowchart framework diagram of an auto-focusing method based on image processing according to an embodiment of the present invention.
Detailed Description
In order to make the above objects, features and advantages of the present invention more clearly understood, the present invention will be further described with reference to the accompanying drawings and examples. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and thus, the present invention is not limited to the specific embodiments disclosed below.
This scheme provides an automatic focusing method based on image processing. It uses an improved Robert gray function as the image sharpness evaluation function and, as its key novelty, embeds the idea of curve fitting into the local search of the traditional hill-climbing algorithm, improving the focusing search strategy. Its basic principle is as follows:
in the hill climbing search process, the definition evaluation function value of each n position points is used as a local curve to perform 'fitting', whether the curve contains a slope peak larger than a set threshold value is judged, if yes, the n position points are indicated to contain an optimal focusing position (single peak) in the range, at this time, a front point which is closest to the optimal focusing position point (maximum value point) in the n position points is returned, the front point is used as a starting point, a rear point of the optimal focusing position point is used as an end point, the range is divided into n point reduction step lengths again to perform repeated search, the required focusing precision is finally achieved, if no slope peak larger than the set threshold value exists in the curve, the n position points are continuously moved along the original direction, the focusing operation is repeated to search for the optimal focusing position, and in the embodiment, the value range of n is 5-7, preferably 5.
The focusing search strategy requires that the initial step length of the search be close to, or slightly smaller than, the width of the steep zone of the system lens. It filters pseudo focal peaks and local oscillations during the local search by setting a discrimination factor threshold, and it ensures that the same threshold works uniformly across images of different target scenes by scaling or normalizing the local curve. The choice of focusing window (ROI) is unrestricted: different windowing modes such as central windowing, inverted-T windowing, and non-uniform-sampling windowing can be combined.
As shown in fig. 3, this embodiment takes n = 5 as an example and describes the detailed implementation of the image-processing-based automatic focusing method:
1) setting an initial large step D of focusing search, wherein the step D cannot be too large and should be similar to or slightly smaller than the width of a steep area of a system lens, and setting a moving point count to be 0;
2) driving the motion device to travel 1 position point in a step D in the forward direction;
3) acquiring a digital image through an image sensor, converting the digital image into a corresponding gray scale image, and recording f (i, j) as a gray scale value at a (i, j) point in the gray scale image;
4) selecting a focusing window (ROI) in the gray level image, namely taking a window, wherein the window taking mode can be different modes such as a central window taking mode, an inverted T-shaped window taking mode, a non-uniform sampling window taking mode and the like;
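For illustration, the central windowing mentioned in step 4) might look like the sketch below (the half-size crop fraction and the use of NumPy are assumptions; inverted-T or non-uniform-sampling windows would be constructed analogously):

```python
# Central focusing window (ROI): crop a centered fraction of the gray image.
import numpy as np

def central_window(gray: np.ndarray, frac: float = 0.5) -> np.ndarray:
    h, w = gray.shape
    dh, dw = int(h * frac / 2), int(w * frac / 2)
    return gray[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]
```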
5) for the processed image, calculate the gradient value at each pixel in parallel; the gradient value at point (i,j), denoted I(i,j), is calculated by the Robert operator according to the following formula:
I(i,j) = |f(i,j) − f(i+1,j+1)| + |f(i+1,j) − f(i,j+1)|
6) the sharper the image, the more obvious its edge detail and the larger the gradient change; the blurrier the image, the smaller the gradient change. To retain gradient-mutation points during focusing and ignore the gradient values of locally blurred regions, judge in parallel, for each point of the processed image, whether the gradient value I(i,j) is larger than the set threshold Ts: if I(i,j) > Ts, retain the value at that point; otherwise set I(i,j) to 0;
7) calculate and cache the sharpness evaluation value F(z) of the processed image by the following formula:
F(z) = Σᵢ Σⱼ I(i,j)
8) judging whether the moving point count reaches 5 points:
if not, returning to the step 2) for execution;
if yes, calculate the discrimination factor G of the local curve as:
G = σ / F̄
where σ is the standard deviation of the sharpness evaluation values of the 5 position points and F̄ is the arithmetic mean of the sharpness evaluation values of the 5 position points.
The implementation of the curve discrimination factor G is not limited to the one above. Any implementation suffices that can detect the presence of a peak in a local-area curve and ensures that the set threshold works uniformly across images of different target scenes (i.e., in the relevant target scene, the threshold on the discrimination factor can discriminate whether the local extremum is the global maximum, that is, the optimal focusing position; the designed factor must be highly sensitive to data deviation and independent of the magnitude of the data range). This embodiment imposes no specific limitation here;
9) judge whether G is smaller than the discrimination factor threshold Gm, as illustrated in fig. 2:
if G is less than GmIf the optimal focusing position does not exist in the range, jumping to the step 13) for execution;
if G is not less than GmIf it is determined that the 5 position point ranges necessarily include the best focus position (single peak), step 10 is performed:
The thresholds Gm and Ts in this embodiment are both related to the hardware implementation of the specific system; their suitable ranges differ between systems and must be determined by repeated tests on the actual hardware. For example, for a microscope equipped with a 10× objective and a 4× eyepiece, a typical setting may be Gm = 0.25, Ts = 5.
10) Drive the motion device back to the point among the 5 position points immediately preceding the optimal focusing position point (the position point with the maximum evaluation value);
11) set the focus search step length D to 2/5 of the original;
12) judging whether the focusing search step length D reaches the focusing positioning precision required by the system:
if yes, indicating that the focusing is successful, ending the automatic focusing process, and entering step 16) to execute;
if not, setting the moving point count to be 0, and returning to the step 2) for execution;
13) judging whether the moving device reaches a positive limit mark L1:
if not, setting the moving point count to be 0, and returning to the step 2) for execution;
if yes, go to step 14) to execute.
14) Driving the motion device to return to the initial moving position and setting the moving direction to be reverse;
15) determine whether the motion device has reached the reverse limit marker L2:
if not, setting the moving point count to be 0, and returning to the step 2) for execution;
if yes, the focusing is failed, the automatic focusing process is finished, and the step 16) is executed;
16) the moving means is driven to return to the initial moving position and the moving direction is set to the forward direction.
In practical application tests, completing a focusing run requires acquiring and analyzing 15 to 25 images in most cases (depending on the initial position). Acquisition and analysis of a single image averages 80 milliseconds (of which the sharpness evaluation takes less than 20 milliseconds and image acquisition about 50 milliseconds), so a focusing run averages about 2 seconds, excluding the movement time of the motion device. In an example of 100 focusing tests on points of a continuous non-planar surface, only 8 points failed to focus (returning to the initial position) and 1 point was trapped by a pseudo focal peak; even in those cases only about 35 to 40 images were collected, and the search never lingered in repeated scanning of a local slope peak. The proposed method is therefore computationally simple, reliable, and efficient: it effectively escapes the interference of pseudo focal peaks, avoids the complex and time-consuming trapping in local peaks during the focusing search, and stabilizes the balance between focusing efficiency and accuracy.
The above description is only a preferred embodiment of the present invention and is not intended to limit the invention to other forms. Any person skilled in the art may, using the technical content disclosed above, modify or change it into an equivalent embodiment; any simple modification, equivalent change, or alteration made to the above embodiments according to the technical spirit of the present invention, without departing from that spirit, still falls within the protection scope of the present invention.

Claims (5)

1. An auto-focusing method based on image processing, characterized by comprising the steps of:
Step A, setting a forward limit marker L1 and a reverse limit marker L2 of the system motion device and an initial large step length D for the focusing search, and setting a discrimination factor threshold Gm for the local curve;
Step B, driving the motion device forward, acquiring a digital image through the image sensor and calculating the sharpness evaluation value F(z) at each position point while advancing, n position points in total;
Step C, calculating the curve discrimination factor G of the local area where the n advancing position points are located, as:
G = σ / F̄
where σ is the standard deviation of the sharpness evaluation values of the n position points and F̄ is the arithmetic mean of the sharpness evaluation values of the n position points;
Step D, judging whether G is smaller than the discrimination factor threshold Gm:
If G is not less than Gm, the range of the n position points necessarily contains the optimal focusing position; return to the point among the n position points immediately preceding the optimal focusing position point, reduce the step length to 2/n of the original, and repeatedly execute steps B to D, stopping when the movement step length reaches the focusing precision required by the system, at which point automatic focusing is finished;
if G is less than GmIf the optimal focusing position does not exist in the range, continuing to move 5 position points along the original direction, and re-executing the steps B to D for searching;
If the system cannot focus anywhere within the forward and reverse limits of the whole path, the best focusing position cannot be found at all; return to the initial position and end automatic focusing.
2. The image-processing-based autofocus method of claim 1, wherein in step B the image sharpness evaluation value F(z) is calculated as follows:
(1) converting the digital image into a corresponding gray image, where f(i,j) is the gray value at point (i,j) in the gray image, and calculating the gradient value I(i,j) at point (i,j) with the Robert gray operator according to the following formula:
I(i,j) = |f(i,j) − f(i+1,j+1)| + |f(i+1,j) − f(i,j+1)|
(2) introducing a threshold Ts: if I(i,j) > Ts, the value at that point is retained; otherwise I(i,j) is set to 0, ignoring the gradient values of locally blurred image regions; finally the gradient values of each point are accumulated to obtain the overall image sharpness evaluation value F(z):
F(z) = Σᵢ Σⱼ I(i,j)
3. The image-processing-based autofocus method of claim 1, wherein: the image region used in step B for calculating the image sharpness evaluation value is a region of interest in the image obtained by a specific windowing mode; windowing modes include, but are not limited to, central windowing, inverted-T windowing, and non-uniform-sampling windowing, and the region of interest may at most be the entire image.
4. The image-processing-based autofocus method of claim 1, wherein: and the value range of n in the step B is 5-7.
5. The image-processing-based autofocus method of claim 1, wherein: in the step A, the initial large step D is smaller than the width of a steep area of the system lens.
CN202010595340.9A 2020-06-28 2020-06-28 Automatic focusing method based on image processing Active CN111770271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010595340.9A CN111770271B (en) 2020-06-28 2020-06-28 Automatic focusing method based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010595340.9A CN111770271B (en) 2020-06-28 2020-06-28 Automatic focusing method based on image processing

Publications (2)

Publication Number Publication Date
CN111770271A CN111770271A (en) 2020-10-13
CN111770271B true CN111770271B (en) 2021-07-27

Family

ID=72722082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010595340.9A Active CN111770271B (en) 2020-06-28 2020-06-28 Automatic focusing method based on image processing

Country Status (1)

Country Link
CN (1) CN111770271B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697531A (en) * 2020-12-30 2022-07-01 深圳中科飞测科技股份有限公司 Focusing method and system, equipment and storage medium
CN113792708B (en) * 2021-11-10 2022-03-18 湖南高至科技有限公司 ARM-based remote target clear imaging system and method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101706609A (en) * 2009-11-23 2010-05-12 常州达奇信息科技有限公司 Image processing based fast automatic focusing method of microscope
CN106249325A (en) * 2016-10-14 2016-12-21 北京信息科技大学 A kind of bionical quick focus adjustment method of vision based on liquid lens
CN109005340A (en) * 2018-07-27 2018-12-14 武汉精测电子集团股份有限公司 The Atomatic focusing method and device of image definition curve matching are carried out based on least square method
CN109698901A (en) * 2017-10-23 2019-04-30 广东顺德工业设计研究院(广东顺德创新设计研究院) Atomatic focusing method, device, storage medium and computer equipment

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN101706609A (en) * 2009-11-23 2010-05-12 常州达奇信息科技有限公司 Image processing based fast automatic focusing method of microscope
CN106249325A (en) * 2016-10-14 2016-12-21 北京信息科技大学 A kind of bionical quick focus adjustment method of vision based on liquid lens
CN109698901A (en) * 2017-10-23 2019-04-30 广东顺德工业设计研究院(广东顺德创新设计研究院) Atomatic focusing method, device, storage medium and computer equipment
CN109005340A (en) * 2018-07-27 2018-12-14 武汉精测电子集团股份有限公司 The Atomatic focusing method and device of image definition curve matching are carried out based on least square method

Non-Patent Citations (1)

Title
An automatic focusing algorithm for tongue-image acquisition based on image processing; Wei Yuke et al.; Journal of Shandong University (Engineering Science); 2011-08-16; full text *

Also Published As

Publication number Publication date
CN111770271A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN103155537B (en) Based on face detection and the continuous autofocus of tracking
EP2357788B1 (en) Autofocus with confidence measure
US8213782B2 (en) Predictive autofocusing system
CN109451244B (en) Automatic focusing method and system based on liquid lens
TWI324015B (en) Autofocus searching method
US7702229B2 (en) Lens array assisted focus detection
CN102591098B (en) Automatic focusing system and comprise its lens devices and image picking system
CN111770271B (en) Automatic focusing method based on image processing
US20220124252A1 (en) Methods and apparatus for defocus reduction using laser autofocus
CN107664899B (en) Automatic focusing method, device and system
CN109521547A (en) A kind of automatic focusing method and system of variable step
US20110217030A1 (en) Method to determine auto focus of a digital camera
CN107509023A (en) A kind of auto-focusing searching algorithm
CN110324536B (en) Image change automatic sensing focusing method for microscope camera
CN104683693A (en) Automatic focusing method
JP3226861B2 (en) Automatic focusing method
CN114500859A (en) Automatic focusing method, photographing apparatus, and storage medium
JP3891224B2 (en) Automatic focusing device
CN103529544A (en) Nano membrane thickness measuring instrument capable of automatically positioning and focusing
JP2013085089A (en) Object tracking device and camera
CN105607218A (en) Image auto-focusing method measurement data transmission device and method based on fuzzy entropy
CN106210551A (en) Picture pick-up device and control method thereof
US7884876B2 (en) Image-pickup apparatus, method of determining attachment of accessory device and method of distinguishing attached accessory device
Liu et al. A fast auto-focusing technique for multi-objective situation
CN104683694A (en) Terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant