CN112040136B - Automatic focusing optimization method based on clear domain and scotopic vision - Google Patents
- Publication number
- CN112040136B (application CN202011011291.6A)
- Authority
- CN
- China
- Prior art keywords
- value
- image
- scanning
- definition
- expressed
- Prior art date
- Legal status (assumption; not a legal conclusion)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Automatic Focus Adjustment (AREA)
Abstract
The invention relates to the fields of computer vision and automatic focusing, and in particular to an automatic focusing optimization method based on a clear domain and scotopic vision. The method evaluates the image with the TEN method and performs a first scan with the ADAM algorithm, which adapts the step length to the current gradient. When the motor reverses direction for the first time, the image is evaluated by the TEN method and the resulting sharpness sequence is high-pass filtered; a second scan is then performed, with a second high-pass filtering when the motor reverses for the second time. After these two filterings, scanning continues until the clear-domain range has been covered, scanning stops, and the peak is fitted by curve fitting to obtain a focused image. Compared with conventional focusing algorithms, the method adds a high-pass filtering step, which reduces the chance of being misled by a secondary peak during the search and damps derivative fluctuations outside the peak region, thereby reducing the influence of step-length noise on the adaptive search algorithm.
Description
Technical Field
The invention relates to the fields of computer vision and automatic focusing, and in particular to an automatic focusing optimization method based on a clear domain and scotopic vision.
Background
Autofocus systems are now widely used in many fields of human society, including biomedicine, public safety, and industrial production. Focusing methods for moving objects are correspondingly important, and there is a continuing effort to obtain faster and more stable focusing. Dynamic focusing can be achieved by active or passive methods. Active methods include ultrasonic, infrared, and time-of-flight types, in which the camera emits a beam or sound wave and receives the reflection to estimate the distance between the camera and the subject. Passive methods are divided into phase-detection and contrast-detection autofocus: the former measures the phase difference between two captured images to estimate the focus position, while the latter measures the sharpness of each frame to find the best focus position.
Existing dynamic focusing methods work well in many respects but still have shortcomings: in passive focusing, the search that determines the peak is tedious, and the search may fail by locking onto a secondary peak. For scotopic (low-light) images, important information is often hidden in the background of the image, and the human visual response to changes in target brightness depends not on the absolute brightness of the target but on its local brightness change relative to the background. The main purpose of image enhancement is therefore to brighten dark areas and improve contrast, so that the image is clear and suitable for viewing with the naked eye. Important information in the image is highlighted while unneeded information is weakened or removed, differences between object features in the original image are increased, and the visual effect is improved, which facilitates subsequent image processing.
Disclosure of Invention
Aiming at the secondary-peak problem of passive focusing in the prior art, the invention provides an automatic focusing optimization method based on a clear domain and scotopic vision, as shown in fig. 1, which specifically comprises the following steps:
evaluating the image by using a TEN method, and controlling the step length by using an ADAM algorithm to perform first scanning;
when the motor reverses direction for the first time, high-pass filtering is performed on the sharpness values obtained from the current image sharpness function TEN: the average of the current sharpness values is taken as a threshold; values below the average are suppressed, and values above the average are kept unchanged;
carrying out a second scan, and performing a second high-pass filtering when the motor reverses for the second time, reusing the threshold of the first filtering to emphasize high sharpness values;
and continuously scanning until the clear domain range is scanned, stopping scanning, and fitting a peak value by using a curve fitting mode to obtain a focused image.
Further, the raw data type of the image is RGB, and the pixel values of the image are normalized to between [0,1] before evaluation.
Further, the evaluation of the image using the TEN method is expressed as:

F_Ten = Σ_{i=1}^{M} Σ_{j=1}^{N} [S_x(i, j)² + S_y(i, j)²]

where F_Ten is a gradient-based focus measure; S_x(i, j) and S_y(i, j) are the results of convolving the image f(i, j) with the Sobel operator in the horizontal and vertical directions, respectively; and M and N are the height and width of the image.
Further, the step length is controlled using the Adam algorithm to perform the first scan, comprising the following steps:

S1: t denotes the number of search steps, with initial value 0;

S2: a random parameter is introduced to jump out of local peaks, and the t-th gradient-bearing parameter is obtained, expressed as: g_t ← ∇_θ f_t(θ_{t−1});

S3: the exponential moving average m_t limiting the step-length update is expressed as: m_t ← β_1·m_{t−1} + (1 − β_1)·g_t;

S7: the step length of the first scan is updated and scanning proceeds by that step length; the step length is expressed as: θ_t ← θ_{t−1} − α·m̂_t / (√v̂_t + ε), where m̂_t and v̂_t are the bias-corrected first and second moments;

where f_t is the first derivative of the image sharpness function at the t-th position; θ_{t−1} is the movement position computed at the (t−1)-th step; g_t is the t-th gradient-bearing parameter; ∇_θ is the t-th random parameter; β_1 and β_2 are constants; α is the learning rate; ε is the minimum-value compensation; and ← is the assignment symbol.
Further, when the motor reverses for the first time, the current image sharpness values are high-pass filtered, i.e., high values are cut and low values are compensated, expressed as:

F(x) = f(x) − |f(x) − ave_12|·c_11, for f(x) < ave_12 < ave_11
F(x) = f(x) + |f(x) − ave_12|·c_12, for ave_12 < f(x) < ave_11

where F(x) is the corrected sharpness value at x; f(x) is the sharpness value of the x-th picture; ave_11 is the average sharpness obtained in the first scan; ave_12 is the average of the sharpness values below ave_11; and c_11, c_12 are the compensation coefficients of the first scan.
Further, a second scan is performed; when the motor reverses for the second time, a second high-pass filtering is applied with the same threshold rule as the first filtering to emphasize high sharpness values, i.e., high values are cut and low values are compensated, expressed as:

F(x) = f(x) − |f(x) − ave_22|·c_21, for f(x) < ave_22 < ave_21
F(x) = f(x) + |f(x) − ave_22|·c_22, for ave_22 < f(x) < ave_21

where F(x) is the corrected sharpness value at x; f(x) is the sharpness value of the x-th picture; ave_21 is the average sharpness obtained in the second scan; ave_22 is the average of the sharpness values below ave_21; and c_21, c_22 are the compensation coefficients of the second scan.
Further, the solution process of the clear domain includes:

letting the height of the object M be h, the distance moved by the prism be p, the object distance at the first focusing be u_1 + p, the height of the image projected on the sensor be m, the focal length of the camera be f, and the distance between the prism and the CMOS sensor be v_1;

after the prism moves, the second object distance is u_2 and the distance between the prism and the CMOS sensor is v_2; the top of the second projection on the CMOS is denoted m_1 and the bottom m_2, so the radius of the circle of confusion is m_1m_2 and the focus offset is mm_1;

letting N be an object that comes into focus after the prism moves, focused at point m_1, where the prism travel p obtained from the clear-domain condition is expressed as: p ≤ (u_2 − u_1)·v_1·f / u_2^2;
The invention targets secondary-peak interference in automatic focusing, which enhances interference resistance, and redefines the range of clear images, which accelerates the search and removes redundant searching. Compared with conventional focusing algorithms, a high-pass filtering step is added, reducing the chance of being misled by a secondary peak during the search while damping derivative fluctuations outside the peak region, so step-length noise has less influence on the adaptive search algorithm. Existing adaptive search algorithms find the neighborhood of the peak quickly but confirm the peak slowly, because they scan back and forth near the peak to determine the optimal solution, increasing the number of searches; introducing the definition of the clear domain reduces the number of searches and speeds up the search.
Drawings
FIG. 1 is a schematic flow chart of a clear domain based autofocus optimization system according to the present invention;
FIG. 2 is a logic diagram of a clear domain based autofocus optimization system provided by the present invention;
FIG. 3 is a schematic diagram of the structure of a clear domain of the present invention;
fig. 4 is a schematic diagram of the effect of the clear field of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides an automatic focusing optimization method based on a clear domain and scotopic vision, as shown in figure 1, which specifically comprises the following steps:
evaluating the image by using a TEN method, controlling the step length by using an ADAM algorithm to carry out first scanning, namely converting a numerical value obtained by evaluating the image by using the TEN method into the step length by using the ADAM algorithm, and scanning by using the step length;
when the motor reverses direction for the first time, performing high-pass filtering on the current image sharpness values, taking the current average sharpness value as the threshold to emphasize high sharpness values;
carrying out a second scan, and performing a second high-pass filtering when the motor reverses for the second time, reusing the first filtering threshold to emphasize the peak;
and continuously scanning until the clear domain range is scanned, stopping scanning, and fitting a peak value by using a curve fitting mode to obtain a focused image.
The high-pass filtering mechanism is used twice in this process because existing adaptive search algorithms search coarsely in the early stage and finely in the later stage: the ADAM algorithm starts with a large step length, few early searches, and a wide span, while the small learning rate needed for peak accuracy produces a large amount of fine searching when the highest peak is confirmed late in the search. Therefore, to help the search algorithm find the highest peak more quickly, the sharpness values are adjusted only during the early search; this eliminates the influence of small oscillations and secondary peaks on the search without affecting the true peak data.
The TEN method is Tenengrad, a gradient-based focus measure that has been widely used for autofocus and can be expressed as:

F_Ten = Σ_{i=1}^{M} Σ_{j=1}^{N} [S_x(i, j)² + S_y(i, j)²]

where F_Ten is a gradient-based focus measure; S_x(i, j) and S_y(i, j) are the results of convolving the image f(i, j) with the Sobel operator in the horizontal and vertical directions, respectively; and M and N are the height and width of the image.
Wherein, the horizontal direction of the Sobel third-order operator is expressed as:
the vertical direction of the Sobel third-order operator is expressed as:
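As a concrete illustration, the Tenengrad measure with the Sobel operators above can be sketched in Python. The helper and function names are our own, and boundary handling (a "valid"-region correlation) is an assumption the patent does not specify:

```python
import numpy as np

# Sobel third-order kernels as given in the text: horizontal and vertical.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def _corr2_valid(img, k):
    """2-D cross-correlation over the 'valid' region only. The sign
    convention (correlation vs. flipped convolution) is irrelevant here
    because Tenengrad squares the responses."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + oh, j:j + ow]
    return out

def tenengrad(gray):
    """F_Ten = sum over the image of S_x(i,j)^2 + S_y(i,j)^2."""
    g = np.asarray(gray, dtype=float)
    sx = _corr2_valid(g, SOBEL_X)
    sy = _corr2_valid(g, SOBEL_Y)
    return float(np.sum(sx ** 2 + sy ** 2))
```

A perfectly flat image scores zero, and any edge content raises the score, which is why the measure peaks at best focus.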
The ADAM algorithm is an optimization algorithm for stochastic objective functions based on first-order gradients with adaptive estimates of lower-order moments. It combines the advantages of two recently popular methods: AdaGrad's ability to handle sparse gradients and RMSProp's ability to handle non-stationary objectives. It is easy to implement, requires little memory, and its convergence rate on convex problems has been analyzed experimentally. Adam is strong at peak search and handles many non-convex optimization problems well. The update rule used in the invention is as follows:
S1: t denotes the number of search steps, initially 0:

t ← t + 1;

S2: f_t is the first derivative of the image sharpness function at the current position, θ_{t−1} is the movement position computed last time, and g_t is obtained as:

g_t ← ∇_θ f_t(θ_{t−1});

S3: m_t is the exponential moving average limiting the step-length update, with β_1 a constant, preferably 0.9:

m_t ← β_1·m_{t−1} + (1 − β_1)·g_t;

S4: v_t is the exponential moving average of the squared gradient, also limiting the step-length update, with β_2 a constant, preferably 0.999:

v_t ← β_2·v_{t−1} + (1 − β_2)·g_t²;

S5–S6: the bias-corrected moments are m̂_t = m_t / (1 − β_1^t) and v̂_t = v_t / (1 − β_2^t);

S7: θ_t is the updated step length, α is the learning rate, which determines how quickly the step length adapts, preferably 0.001, and ε is the minimum-value compensation, preferably 0.00000001:

θ_t ← θ_{t−1} − α·m̂_t / (√v̂_t + ε).
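The S1–S7 update above can be sketched as a scalar Adam step controller in Python. The function name and state tuple are our own, and the explicit bias-correction lines follow the standard Adam formulation with the preferred constants β_1 = 0.9, β_2 = 0.999, α = 0.001, ε = 1e-8:

```python
import math

def adam_step(g, state, beta1=0.9, beta2=0.999, alpha=0.001, eps=1e-8):
    """One Adam update for a scalar gradient g.

    state is (t, m, v): step count, first-moment EMA, second-moment EMA.
    Returns (step, new_state), where step is the signed movement amount.
    """
    t, m, v = state
    t += 1                                   # S1: advance the step counter
    m = beta1 * m + (1 - beta1) * g          # S3: first-moment EMA
    v = beta2 * v + (1 - beta2) * g * g      # S4: squared-gradient EMA
    m_hat = m / (1 - beta1 ** t)             # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)             # bias-corrected second moment
    step = alpha * m_hat / (math.sqrt(v_hat) + eps)
    return step, (t, m, v)
```

On the first call with a unit gradient, both bias-corrected moments equal 1, so the step is very close to α; as the sharpness gradient flattens near the peak, the step shrinks, which is the adaptive coarse-to-fine behavior the text relies on.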
When the motor reverses for the first time, the current image sharpness values are high-pass filtered: the current average sharpness value is used as the threshold, high sharpness values are emphasized, high values are cut, and low values are compensated, as given below:
F(x) = f(x) − |f(x) − ave_12|·c_11, for f(x) < ave_12 < ave_11
F(x) = f(x) + |f(x) − ave_12|·c_12, for ave_12 < f(x) < ave_11
where F(x) is the corrected sharpness value at x, f(x) is the sharpness value of the x-th picture, ave_11 is the average obtained in the first scan, and ave_12 is the average of the sharpness values below ave_11. In this embodiment, the parameter c_11 preferably takes the value 0.4 and the parameter c_12 the value 0.8.
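A minimal sketch of this first-pass filtering, following the piecewise formulas of this embodiment with the preferred coefficients c_11 = 0.4 and c_12 = 0.8 (the function and variable names are our own; `ave1` and `ave2` stand for ave_11 and ave_12):

```python
def filter_sharpness(values, c_low=0.4, c_high=0.8):
    """High-pass filter a sharpness sequence per the piecewise rule:

    ave1: mean of all values; ave2: mean of the values below ave1.
    Values below ave2 are pulled further down, values between ave2 and
    ave1 are pushed up, and values at or above ave1 are left unchanged.
    """
    ave1 = sum(values) / len(values)
    low = [x for x in values if x < ave1]
    ave2 = sum(low) / len(low) if low else ave1
    out = []
    for x in values:
        if x < ave2:
            out.append(x - abs(x - ave2) * c_low)    # suppress low values
        elif x < ave1:
            out.append(x + abs(x - ave2) * c_high)   # compensate mid values
        else:
            out.append(x)                            # keep high values
    return out
```

The second filtering pass would reuse the same rule with the second-scan coefficients (0.6 and 1.2 in this embodiment), passed in through `c_low` and `c_high`.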
A second scan is performed; when the motor reverses for the second time, a second high-pass filtering is applied with the same threshold rule as the first, emphasizing high sharpness values: high values are cut, and low values are compensated, as given below:
F(x) = f(x) − |f(x) − ave_22|·c_21, for f(x) < ave_22 < ave_21
F(x) = f(x) + |f(x) − ave_22|·c_22, for ave_22 < f(x) < ave_21
where F(x) is the corrected sharpness value at x, f(x) is the sharpness value of the x-th picture, ave_21 is the average obtained in the second scan, and ave_22 is the average of the sharpness values below ave_21. In this embodiment, the parameter c_21 preferably takes the value 0.6 and the parameter c_22 the value 1.2.
Scanning continues until the clear-domain range has been covered; scanning then stops, and the peak is fitted by curve fitting to obtain a focused image. The solution flow for the clear-domain width is as follows:
As shown in FIG. 3, the prism combination is simplified into a convex lens. Let the height of the object M be h, the prism travel be p, the object distance at the first focusing be u_1 + p, the height of the image projected on the sensor be m, the focal length of the camera be f, and the distance between the prism and the CMOS sensor be v_1. After the prism moves, the second object distance is u_2 and the distance between the prism and the CMOS sensor is v_2; the top of the second projection on the CMOS is denoted m_1 and the bottom m_2, so the radius of the circle of confusion is m_1m_2 and the focus offset is mm_1. Let N be an object that comes into focus after the prism moves, focused at point m_1.
As can be seen from fig. 3, the angle α can be represented by f and h, i.e.:
tan(α)=f/h;
Since the prism is assumed to move horizontally, the focusing angle remains α after the move. By the parallelogram principle, mm_1 equals the length of the marked segment in FIG. 3, so mm_1 is expressed as:

mm_1 = p·tan(α);
Also from FIG. 3, the angle β can be expressed through h and u_2, and the triangles Nmx and xAm_1 are similar, so the expression for m_1m_2 is obtained:

tan(β) = h/u_2;

Am_1 = (u_2 − u_1)·v_2/u_2;

m_1m_2 = Am_1·tan(β);
With m_1m_2 and mm_1 obtained, the following range follows from the clear-domain condition:

p ≤ (u_2 − u_1)·v_1·f / u_2^2;
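The bound on the prism travel can be computed directly from this inequality. The following sketch uses our own function name, and the example values in the test are hypothetical distances chosen only to exercise the formula:

```python
def clear_domain_bound(u1, u2, v1, f):
    """Maximum prism travel p that keeps the image within the clear
    domain: p <= (u2 - u1) * v1 * f / u2**2.

    u1, u2: object distances before/after the prism move;
    v1: prism-to-sensor distance at the first focusing;
    f: focal length. All quantities in consistent length units.
    """
    return (u2 - u1) * v1 * f / (u2 ** 2)
```

For a given camera (fixed v1 and f), this width is a constant, which is why the text notes it only needs to be computed once per camera.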
Setting the peak point as p_1, the clear domain can be represented as a range around the clearest image position p_1;
For a given camera, the width of the clear domain is a fixed value that needs to be computed only once per camera; the peak point p_1 is a position. The size of the clear domain is therefore calculable in advance, while the location of p_1 is uncertain.
Figure 4 shows the variation of the step length, with the horizontal axis representing the number of iterations and the vertical axis the step length. As can be seen, the step length is larger at the beginning and decreases as the iterations progress. In FIG. 4, the area between the red lines is the clear-domain range; the iteration has fully entered the clear domain by iteration 63, whereas without the clear domain the search would continue up to 192 iterations. When the search function is near the peak, the step length iterates over a very small range, and it is this large number of iterations that makes the search inefficient. The discussion of FIG. 4 shows that the concept of the clear domain presented here is sound and does help improve search efficiency.
The flow in the actual use process is shown in fig. 2, and the following operations are performed after the focusing is started:
acquiring an image, evaluating the image according to a TEN method, and calculating to obtain a definition function value;
judging whether the image is scanned for the first two rounds, namely whether the image is scanned for the first time or the second time, if the image is scanned for the first time, evaluating the image according to a TEN method when the motor steering occurs for the first time, and performing high-pass filtering on the obtained definition sequence value, namely, taking the average value of the definition function values of the current image as a threshold value, inhibiting the definition value lower than the average value, and keeping the definition value higher than the average value unchanged; if the scanning is the second scanning, when the motor turns for the second time, second high-pass filtering is carried out, namely the definition value lower than the average value is restrained by using a threshold value obtained by the first filtering, and the definition value higher than the average value is kept unchanged;
judging whether the highest peak has been found, namely whether the step length is smaller than the width of the clear domain and the current value is larger than every peak scanned historically; if so, the highest peak is considered scanned and focusing ends;
if not, the driving motor moves the prism, and judges whether the step length meets the clear domain, namely whether the step length is smaller than the clear domain, if so, the focusing is finished, otherwise, the first step is returned, the image is obtained again, and the clear domain is calculated.
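The patent fits the final peak "by curve fitting" without specifying the model; a common choice, sketched below as an assumption, is a parabolic fit through the three samples bracketing the maximum of the sharpness sequence:

```python
def fit_peak(positions, scores):
    """Interpolate the peak position by fitting a parabola through the
    three samples around the maximum (parabolic/quadratic fit).

    positions: motor positions; scores: sharpness values at them.
    Returns the x-coordinate of the fitted parabola's vertex.
    """
    i = max(range(len(scores)), key=scores.__getitem__)
    i = min(max(i, 1), len(scores) - 2)  # keep a full 3-point neighbourhood
    x0, x1, x2 = positions[i - 1], positions[i], positions[i + 1]
    y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
    # Coefficients of y = a*x^2 + b*x + c through the three points
    # (Lagrange form; uniform spacing is not required).
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2 ** 2 * (y0 - y1) + x1 ** 2 * (y2 - y0) + x0 ** 2 * (y1 - y2)) / denom
    return -b / (2 * a)
```

Because the vertex is computed analytically, the motor can be driven straight to the interpolated position instead of oscillating around the peak, which matches the text's goal of reducing fine back-and-forth searches.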
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. An automatic focusing optimization method based on a clear area and scotopic vision is characterized by comprising the following steps:
evaluating the image by using a TEN method, and performing first scanning by using an ADAM algorithm according to the gradient self-adaptive control step length of the current image pixel;
when motor steering occurs for the first time, evaluating the image according to a TEN method, and performing high-pass filtering on the obtained definition sequence value, namely, taking the average value of the definition function values of the current image as a threshold value, inhibiting the definition value lower than the average value, and keeping the definition value higher than the average value unchanged;
carrying out second scanning, and carrying out second high-pass filtering when the motor turns for the second time, namely, using a threshold value obtained by the first filtering to inhibit the definition value lower than the average value and keeping the definition value higher than the average value unchanged;
and after the two previous times of filtering, continuously scanning until the clear domain range is scanned, stopping scanning, and fitting a peak value by using a curve fitting mode to obtain a focused image.
2. The method of claim 1, wherein the image is of RGB type and the pixel values of the image are normalized to [0,1] before evaluation.
3. The method of claim 1, wherein evaluating the image using the TEN method is expressed as:

F_Ten = Σ_{i=1}^{M} Σ_{j=1}^{N} [S_x(i, j)² + S_y(i, j)²]

where F_Ten is a gradient-based focus measure; S_x(i, j) and S_y(i, j) are the results of convolving the image f(i, j) with the Sobel operator in the horizontal and vertical directions, respectively; and M and N are the height and width of the image.
4. The method of claim 1, wherein performing the first scan according to the gradient adaptive control step of the current image pixel by using an ADAM algorithm comprises:
S1: t denotes the number of search steps, with initial value 0;

S2: a random parameter is introduced to jump out of local peaks, and the t-th gradient-bearing parameter is obtained, expressed as: g_t ← ∇_θ f_t(θ_{t−1});

S3: the exponential moving average m_t limiting the step-length update is expressed as: m_t ← β_1·m_{t−1} + (1 − β_1)·g_t;

S7: the step length of the first scan is updated and scanning proceeds by that step length; the step length is expressed as: θ_t ← θ_{t−1} − α·m̂_t / (√v̂_t + ε), where m̂_t and v̂_t are the bias-corrected first and second moments;

where f_t is the first derivative of the image sharpness function at the t-th position, θ_{t−1} is the movement position computed at the (t−1)-th step, g_t is the t-th gradient-bearing parameter, ∇_θ is the t-th random parameter, β_1 and β_2 are constants, α is the learning rate, ε is the minimum-value compensation, and ← is the assignment symbol.
5. The method of claim 1, wherein when the motor reverses for the first time, the current image sharpness values are high-pass filtered, i.e., high values are cut and low values are compensated, expressed as:

F(x) = f(x) − |f(x) − ave_12|·c_11, for f(x) < ave_12 < ave_11
F(x) = f(x) + |f(x) − ave_12|·c_12, for ave_12 < f(x) < ave_11

where F(x) is the corrected sharpness value at x, f(x) is the sharpness value of the x-th picture, ave_11 is the average sharpness obtained in the first scan, ave_12 is the average of the sharpness values below ave_11, and c_11, c_12 are the compensation coefficients of the first scan.
6. The method of claim 1, wherein a second scan is performed; when the motor reverses for the second time, a second high-pass filtering is applied with the same threshold rule as the first filtering, emphasizing high sharpness values, i.e., high values are cut and low values are compensated, expressed as:

F(x) = f(x) − |f(x) − ave_22|·c_21, for f(x) < ave_22 < ave_21
F(x) = f(x) + |f(x) − ave_22|·c_22, for ave_22 < f(x) < ave_21

where F(x) is the corrected sharpness value at x, f(x) is the sharpness value of the x-th picture, ave_21 is the average sharpness obtained in the second scan, ave_22 is the average of the sharpness values below ave_21, and c_21, c_22 are the compensation coefficients of the second scan.
7. The method of claim 1, wherein the process of solving for the width of the clear region comprises:
letting the height of the object M be h, the distance moved by the prism be p, the object distance at the first focusing be u_1 + p, the height of the image projected on the sensor be m, the focal length of the camera be f, and the distance between the prism and the CMOS sensor be v_1;

after the prism moves, the second object distance is u_2 and the distance between the prism and the CMOS sensor is v_2; the top of the second projection on the CMOS is denoted m_1 and the bottom m_2, so the radius of the circle of confusion is m_1m_2 and the focus offset is mm_1;

letting N be an object that comes into focus after the prism moves, focused at point m_1, where the prism travel p obtained from the clear-domain condition is expressed as: p ≤ (u_2 − u_1)·v_1·f / u_2^2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011011291.6A CN112040136B (en) | 2020-09-23 | 2020-09-23 | Automatic focusing optimization method based on clear domain and scotopic vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112040136A CN112040136A (en) | 2020-12-04 |
CN112040136B true CN112040136B (en) | 2021-08-10 |
Family
ID=73575148
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011011291.6A Active CN112040136B (en) | 2020-09-23 | 2020-09-23 | Automatic focusing optimization method based on clear domain and scotopic vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112040136B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115334244B (en) * | 2022-08-30 | 2024-05-03 | 浙江优亿医疗器械股份有限公司 | Endoscope automatic focusing search algorithm |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4141032A (en) * | 1976-12-08 | 1979-02-20 | Ernst Leitz Wetzlar Gmbh | Method of and apparatus for the expansion of the range of the depth of focus beyond the limit given by conventional images |
CN101494737A (en) * | 2009-03-09 | 2009-07-29 | 杭州海康威视数字技术股份有限公司 | Integrated camera device and self-adapting automatic focus method |
CN101840055A (en) * | 2010-05-28 | 2010-09-22 | 浙江工业大学 | Video auto-focusing system based on embedded media processor |
CN105578029A (en) * | 2015-09-01 | 2016-05-11 | 闽南师范大学 | Multi-scale variable-step autofocusing searching algorithm data transmission device and method |
CN108156371A (en) * | 2017-12-08 | 2018-06-12 | 北京航天计量测试技术研究所 | A kind of infrared auto-focusing method for fast searching |
CN110646933A (en) * | 2019-09-17 | 2020-01-03 | 苏州睿仟科技有限公司 | Automatic focusing system and method based on multi-depth plane microscope |
CN111105346A (en) * | 2019-11-08 | 2020-05-05 | 同济大学 | Full-scanning microscopic image splicing method based on peak value search and gray template registration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||