CN110428463B - Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element - Google Patents

Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element

Info

Publication number
CN110428463B
Authority
CN
China
Prior art keywords
image
mean
center
roi
value
Prior art date
Legal status
Active
Application number
CN201910481302.8A
Other languages
Chinese (zh)
Other versions
CN110428463A (en)
Inventor
杨甬英
王凡祎
楼伟民
白剑
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910481302.8A priority Critical patent/CN110428463B/en
Publication of CN110428463A publication Critical patent/CN110428463A/en
Application granted granted Critical
Publication of CN110428463B publication Critical patent/CN110428463B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The invention discloses a method for automatically extracting the image center in defocus-blur centering of an aspheric optical element. Using the gray-variance (SMD2) sharpness evaluation function as the metric, the method first locates the sharpest region of interest in a coarse-focus image, i.e. the region containing the center of the cross reticle. From the odd number of images acquired at equal steps along the optical axis, it then selects the image whose ROI has the highest SMD2 value. If that value falls below a prior threshold, the best-focus image is deblurred with either a trained IRCNN deep-learning model or an improved dark-channel-prior algorithm. Adaptive-threshold binarization and morphological operations then yield the connected domains of the cross reticle, and the maximum inscribed circle is computed; its center approximates the center of the cross reticle. The invention solves the problem that the center of the centering cross-reticle image of an aspheric optical element is difficult to extract when the aspheric normal aberration exceeds the depth of field of the system and produces defocus blur.

Description

Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element
Technical Field
The invention belongs to the technical field of machine-vision detection, and particularly relates to a method for automatically extracting the image center during defocus-blur centering of rotationally symmetric aspheric optical elements.
Background
Aspheric optical elements are widely used in systems such as large-aperture space telescopes, inertial confinement fusion (ICF) facilities, and high-energy lasers. Owing to uncontrollable factors in production and manufacturing, their surfaces inevitably carry defects, which not only degrade the imaging quality of an optical system but also cause energy loss and, through unwanted scattering and diffraction in high-energy laser systems, possibly secondary damage. The surface defects of such elements must therefore be detected and evaluated, and before detection the aspheric element must be centered so that its optical axis is as collinear as possible with that of the detection system. Centering adjusts the axis system according to the position of the center point of the centering cross-reticle image. Unlike a spherical optical element, an aspheric optical element has normal aberration along the optical axis, which grows with the asphericity of the sample and the vertex sphere radius, whereas for a given front objective the depth of field of the centering system is fixed. When the normal aberration of the element exceeds the depth of field, the image formed on the CCD exceeds the resolving capability of adjacent pixels and becomes defocus-blurred; because the energy diverges, the brightness of the centering cross-reticle image falls while the background gray level rises, which hinders extraction of the image center. With the system depth of field fixed, the larger the normal aberration, the lower the gray-variance (SMD2) sharpness evaluation value of the centered cross-reticle image collected by the CCD; once this value falls below a certain threshold, the center coordinates of the reticle image can no longer be extracted by traditional image-processing algorithms. To extend the measurement range of the system for aspheric optical elements as far as possible, a flexible method that can be chosen according to specific requirements is proposed for automatically extracting the image center in defocus-blur centering, based on an IRCNN deep-learning deblurring model and an improved dark-channel-prior defogging algorithm.
Disclosure of Invention
The invention aims to solve the problem that the center of the centering cross-reticle image of a rotationally symmetric aspheric optical element is difficult to extract because the aspheric normal aberration exceeds the depth of field of the system and produces defocus blur. A method for extracting the image center in defocus-blur centering of rotationally symmetric aspheric optical elements is provided.
The technical scheme adopted by the invention to solve this technical problem comprises the following steps:
step 1, obtaining the ROI (region of interest) of the best-focus image from centering-system theory and judging the value of its gray-variance SMD2 sharpness evaluation function: if the SMD2 sharpness value of the ROI of the input image is greater than or equal to a prior sharpness threshold, entering step 2; if it is smaller than the prior sharpness threshold, entering step 3;
step 2, extracting the center of the cross reticle with an image processing algorithm, and ending the algorithm;
step 3, selecting an image deblurring method according to actual requirements, then calculating the SMD2 sharpness value of the ROI of the deblurred image; if it is still smaller than the prior sharpness threshold, exiting the program and outputting a warning, otherwise returning to step 2.
The specific operations of step 1 are as follows:
1-1, adjusting the X (S6) and Y (S7) axes to move the rotationally symmetric aspheric optical element (S4) to the initial position, adjusting the Z guide rail (S10) according to the focal length f of the replacement objective (S3) so that the upper surface of the element is approximately in focus, and then moving the Z axis by the vertex sphere radius r of the aspheric element, upward (when the concave surface faces up) or downward (when the convex surface faces up), so that light is focused on the vertex sphere center of the element's upper surface (S5);
1-2, after the adjustment of step 1-1, an image of the cross reticle reflected from the vertex sphere center of the upper surface of the aspheric optical element appears on the CCD, whether blurred or sharp. Finely adjust the X, Y axes so that the image lies as close to the center of the field of view as possible, then acquire a coarse-focus cross-reticle image;
1-3, starting two threads. One thread obtains the ROI: using the SMD2 sharpness evaluation function as the metric, it finds the highest-scoring 200 × 200 ROI on the coarse-focus image, i.e. the region where the center of the centering cross reticle is located. The other thread searches for the best-focus position of the cross reticle: starting from the coarse-focus position, it drives the Z guide rail to acquire 1 image every 10 µm, 13 images upward and 13 images downward;
1-4, calculating the SMD2 sharpness evaluation value of the ROI of each image, comparing the ROIs' SMD2 values, taking the maximum and comparing it with the prior sharpness threshold; if the maximum is greater than the prior sharpness threshold, go to step 2, otherwise go to step 3;
the ROIs in question are those of the 27 images acquired in steps 1-2 and 1-3, or the ROI of the image deblurred in step 3;
in step 1-3, the highest-scoring 200 × 200 ROI on the coarse-focus image, i.e. the region where the center of the cross reticle is located, is obtained under parallel conditions with the SMD2 sharpness evaluation function as the metric. The specific operations are as follows:
firstly, create a structure data structure storing the upper-left coordinates, width, and height of the ROI, and create a map data structure whose value is the created structure and whose key is the SMD2 sharpness evaluation value of the corresponding ROI. The expression for SMD2 is:
SMD2 = ∑_x ∑_y |f(x,y) − f(x+1,y)| × |f(x,y) − f(x,y+1)|    (1)
f(x, y) is the image gray value at pixel coordinates (x, y).
Set the sub-region width and height and the search step on the input image. With these settings, compute in parallel the SMD2 values of the sub-regions of the specified width (200 pixels) and height (200 pixels) at the given step (50 pixels) on the input image, and store them in the map data structure. Sort the map by key in descending order; the sub-region at the head of the map queue is the region with the highest sharpness score on the image (the ROI), i.e. the region of the center of the cross reticle.
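A minimal Python sketch of this sliding-window SMD2 search follows; the 200-pixel window and 50-pixel step come from the description above, while the thread-pool parallelism and all function names are illustrative assumptions rather than the patent's actual implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def smd2(roi: np.ndarray) -> float:
    """Gray-variance (SMD2) sharpness score of a region, per Eq. (1)."""
    f = roi.astype(np.float64)
    dx = np.abs(f[:-1, :-1] - f[:-1, 1:])  # |f(x,y) - f(x+1,y)|
    dy = np.abs(f[:-1, :-1] - f[1:, :-1])  # |f(x,y) - f(x,y+1)|
    return float(np.sum(dx * dy))

def find_best_roi(img: np.ndarray, win: int = 200, step: int = 50):
    """Return (x, y, win, win) of the sharpest window, i.e. the reticle-center region."""
    candidates = [(x, y)
                  for y in range(0, img.shape[0] - win + 1, step)
                  for x in range(0, img.shape[1] - win + 1, step)]
    def score(xy):
        x, y = xy
        return smd2(img[y:y + win, x:x + win]), (x, y, win, win)
    with ThreadPoolExecutor() as pool:  # parallel scoring of sub-regions
        scored = list(pool.map(score, candidates))
    return max(scored)[1]  # highest SMD2 key = head of the sorted map
```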
Because the image acquired in step 1-2 is only a coarse-focus result, the sharpest image of the reticle-center region must be found by comparing it with neighboring images; thread two in step 1-3 therefore acquires 13 images upward and 13 downward at a 10 µm step, which together with the coarse-focus image from step 1-2 gives 27 images in total.
Step 2 extracts the center of the cross reticle with an image processing algorithm. The specific operations are as follows:
2-1, obtaining a binary image from the ROI with an adaptive-threshold binarization algorithm, specifically as follows:
set the block size ksize of a single binarization region to 17 and compute the Gaussian weight T(x, y) of each pixel in the block:
T(x,y) = α · exp[ −( i(x,y) − (ksize−1)/2 )² / (2·θ)² ]    (2)
where i(x, y) is the pixel coordinate value taken with the center of the block as the origin, θ = 0.3·[(ksize−1)·0.5 − 1] + 0.8, and α is a normalization constant satisfying ∑ T(x, y) = 1;
the binarization rule is:
dst(x,y) = 255 if src(x,y) > ∑_(i,j) T(i,j)·src(x+i, y+j), and dst(x,y) = 0 otherwise    (3)
dst(x, y) is the target binary image and src(x, y) is the original ROI image.
2-2, because the ROI image is only 200 × 200, apply a morphological erosion with a small 5 × 5 structuring element to the binary image to remove small connected-domain interference. Both operations are sketched below.
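A minimal OpenCV sketch of steps 2-1 and 2-2, assuming the 200 × 200 grayscale ROI is already available as a uint8 array `roi`; ksize = 17 and the 5 × 5 erosion follow the description, while the offset constant C = 0 is an assumption (the patent does not state it):

```python
import cv2

# Step 2-1: adaptive Gaussian thresholding, Eqs. (2)-(3); block size ksize = 17.
binary = cv2.adaptiveThreshold(
    roi, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # Gaussian-weighted local threshold T(x, y)
    cv2.THRESH_BINARY,
    blockSize=17,
    C=0)                             # offset from the local mean; assumed value

# Step 2-2: 5x5 erosion removes small connected-domain interference.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
binary = cv2.erode(binary, kernel)
```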
2-3, finding the maximum inscribed circle of all connected domains in the ROI image and taking its center, specifically as follows:
firstly, extract all connected domains in the ROI image and compute the maximum inscribed circle of each, storing the center coordinates and radius of each inscribed circle; sort the results by radius in descending order. The circle with the largest radius is the inscribed circle of the cross-reticle center, and its center coordinates approximate the center coordinates of the cross reticle; experiments verify that the Euclidean distance between this inscribed-circle center and the cross-reticle center is within 4 pixels.
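A sketch of step 2-3 under one common reading of "maximum inscribed circle": the center of a connected domain's inscribed circle is the interior point farthest from the background, i.e. the peak of the distance transform. The function name and this interpretation are assumptions:

```python
import cv2
import numpy as np

def reticle_center(binary: np.ndarray):
    """Center of the largest inscribed circle over all connected domains."""
    n, labels = cv2.connectedComponents(binary)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    best_r, best_c = 0.0, (0, 0)
    for lbl in range(1, n):                     # label 0 is the background
        d = np.where(labels == lbl, dist, 0.0)  # distances inside this domain
        r = float(d.max())                      # inscribed-circle radius
        if r > best_r:
            y, x = np.unravel_index(d.argmax(), d.shape)
            best_r, best_c = r, (int(x), int(y))
    return best_c                               # approximates the reticle center
```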
In step 3 an image deblurring algorithm is selected according to actual requirements. The specific operations are as follows:
3-1, determining how many times the deblurring algorithm has been entered. On the 1st entry, jump directly to step 3-2 or step 3-3 for deblurring. If the count is greater than 1, the ROI sharpness after the 1st deblurring pass still failed to reach the prior sharpness threshold; for both deblurring algorithms, repeating the operation cannot effectively improve sharpness and only consumes unnecessary time, so the program ends and outputs a warning that the sample under test exceeds the detection limit of the centering system;
step 3-2 is fast deblurring and is relatively quick; step 3-3 is relatively more accurate;
3-2, deblurring the whole image with the highest SMD2 value using a trained IRCNN deep-learning model, as follows:
under a replacement objective with a focal length of 100 mm, 27 images of each of 7 rotationally symmetric aspheric optical elements are collected following steps 1-1 to 1-3, 7 × 27 = 189 images in total. To expand the training sample set, each captured image undergoes 5 brightness changes within a limited range and 6 rotation changes within a limited range, so the final training set contains 7 × 27 × (1 + 5 × 6) = 5859 samples. The deep-learning model, an Image Restoration Convolutional Neural Network (IRCNN), is then trained for 160000 iterations and the final model is stored.
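An illustrative sketch of the augmentation arithmetic: 5 brightness variants × 6 rotation variants plus the original give 31 samples per image (189 × 31 = 5859). The gain and angle ranges below are assumptions; the patent only says the changes stay within a limited range:

```python
import cv2
import numpy as np

def augment(img: np.ndarray) -> list:
    """Return the original image plus 5 x 6 = 30 brightness/rotation variants."""
    h, w = img.shape[:2]
    out = [img]
    for gain in np.linspace(0.8, 1.2, 5):        # 5 brightness changes (assumed range)
        for angle in np.linspace(-3.0, 3.0, 6):  # 6 small rotations in degrees (assumed)
            M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            rot = cv2.warpAffine(img, M, (w, h))
            out.append(np.clip(rot.astype(np.float32) * gain, 0, 255).astype(np.uint8))
    return out                                   # 31 images per capture
```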
The defocus degradation model of an image can be expressed as Y = H·X + V, where H is the blur operator, V is additive noise, X is the ideal in-focus image, and Y is the defocused degraded image. By Bayesian probability analysis, the estimate X̂ of X can be obtained by solving a maximum a posteriori (MAP) estimation problem:
X̂ = arg min_X ½‖Y − H·X‖² + λ·Φ(X)    (4)
The first half of equation (4) is the fidelity term and the second half is the regularization term. It is generally solved either by an optimization method based on the image degradation model (Y = H·X + V) or by a discriminative learning method (deep learning), whose purpose is to obtain the prior parameters Θ through iterative training. IRCNN combines the advantages of both methods by means of half-quadratic splitting (HQS), determining the objective function as:
L_μ(X, Z) = ½‖Y − H·X‖² + λ·Φ(Z) + (μ/2)‖Z − X‖²
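For reference, a sketch of the alternating updates that HQS yields for this objective, following the cited IRCNN paper (Zhang et al.); the penalty weight μ and iteration index k are notation assumed from that paper:

```latex
\begin{aligned}
X_{k+1} &= \arg\min_{X}\; \lVert Y - HX \rVert^{2} + \mu \lVert X - Z_{k} \rVert^{2}
  && \text{(closed-form fidelity update)} \\
Z_{k+1} &= \arg\min_{Z}\; \tfrac{\mu}{2} \lVert Z - X_{k+1} \rVert^{2} + \lambda \Phi(Z)
  && \text{(prior step, solved by the CNN deblurrer)}
\end{aligned}
```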
Aiming at training a deblurrer, the model comprises seven convolutional layers with the following structure:
layer 1: 3 × 3 dilated convolution with dilation 1 + rectified linear activation function (ReLU);
layers 2 to 6: 3 × 3 dilated convolutions with dilations 2, 3, 4, 3, and 2 respectively + batch normalization + ReLU;
layer 7: 3 × 3 dilated convolution with dilation 1;
where ReLU has the functional form f(x) = max(0, x).
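A minimal PyTorch sketch of the seven-layer dilated-convolution deblurrer just described; the dilation sequence 1-2-3-4-3-2-1 and the BN/ReLU placement follow the text, while the 64-channel width and single-channel grayscale I/O are assumptions consistent with the published IRCNN design:

```python
import torch.nn as nn

def ircnn(channels: int = 1, width: int = 64) -> nn.Sequential:
    """Seven-layer dilated CNN: conv+ReLU / 5x(conv+BN+ReLU) / conv."""
    dilations = [1, 2, 3, 4, 3, 2, 1]
    layers = []
    last = len(dilations) - 1
    for i, d in enumerate(dilations):
        c_in = channels if i == 0 else width
        c_out = channels if i == last else width
        layers.append(nn.Conv2d(c_in, c_out, 3, dilation=d, padding=d))
        if 0 < i < last:
            layers.append(nn.BatchNorm2d(width))   # BN only on layers 2-6
        if i < last:
            layers.append(nn.ReLU(inplace=True))   # ReLU on layers 1-6
    return nn.Sequential(*layers)
```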
the test result shows that the numerical value of the sharpness evaluation function SMD2 of the deblurred image is improved by nearly one order of magnitude, and the processing speed reaches 20fps on a double-core four-thread Interi3-7100cpu with the main frequency of 3.9GHz, namely, only 50ms is needed for deblurring a single ROI image.
3-3. deblurring the whole image with the highest SMD2 by using a modified 'dark channel prior based defogging algorithm', which is as follows:
the original dark channel defogging algorithm is established on the basis of a three-channel RGB image, and is improved, so that the gray image defogging algorithm has a relatively excellent defogging effect on the gray image. The fast guiding filtering algorithm with lower time complexity is used for replacing the guiding filtering algorithm in the original algorithm, and the mathematical expressions of the guiding filtering algorithm are shown in the following (5) to (8):
mean_I = f_mean(I), mean_p = f_mean(p), corr_I = f_mean(I.*I), corr_Ip = f_mean(I.*p)    (5)
I is the guide image, p is the filter input image, and f_mean is a mean filter with window radius r (r is 8 in the program); mean_I and mean_p are the mean-filtered results of I and p respectively, corr_I is the autocorrelation of I, and corr_Ip is the cross-correlation of I and p.
var_I = corr_I − mean_I.*mean_I, cov_Ip = corr_Ip − mean_I.*mean_p    (6)
var_I is the variance of I, and cov_Ip is the covariance of I and p.
a = cov_Ip./(var_I + δ), b = mean_p − a.*mean_I    (7)
δ is the regularization parameter.
mean_a = f_mean(a), mean_b = f_mean(b), q = mean_a.*I + mean_b    (8)
q is the output of guided filtering. In the guided filtering algorithm f_mean has time complexity O(N) and must be computed several times, so the algorithm is relatively slow; fast guided filtering with time complexity O(N/s²) is therefore used, where s is the image scaling factor (s is 4 in the program). The mathematical expressions of fast guided filtering are given in (9) to (13) below:
I' = f_subsample(I, s), p' = f_subsample(p, s), r' = r/s    (9)
I' is the downsampled (scaled) guide image, p' is the downsampled (scaled) filter input image, and r' is the mean-filter window radius equivalent to r after downsampling.
mean_I' = f_mean(I', r'), mean_p' = f_mean(p', r'), corr_I' = f_mean(I'.*I', r'), corr_I'p' = f_mean(I'.*p', r')    (10)
mean_I' and mean_p' are the mean-filtered results of I' and p' respectively, corr_I' is the autocorrelation of I', and corr_I'p' is the cross-correlation of I' and p'.
var_I' = corr_I' − mean_I'.*mean_I', cov_I'p' = corr_I'p' − mean_I'.*mean_p'    (11)
var_I' is the variance of I', and cov_I'p' is the covariance of I' and p'.
a = cov_I'p'./(var_I' + δ), b = mean_p' − a.*mean_I'    (12)
δ is the regularization parameter.
mean_a = f_mean(a, r'), mean_b = f_mean(b, r'), q' = f_upsample(mean_a, s).*I + f_upsample(mean_b, s)    (13)
q' is the output of fast guided filtering. The essence of fast guided filtering is to reduce the number of pixels operated on by downsampling, thereby reducing the time complexity; mean_a and mean_b are computed at low resolution and then upsampled to restore the image size. Processing a single image (resolution 3296 × 2472) with guided filtering takes 2.6 s on average, versus 1.0 s with fast guided filtering. In most cases the SMD2 value of the image ROI obtained by the dark-channel-prior defogging algorithm is higher than that obtained by the IRCNN deep-learning model, but the processing time is an order of magnitude longer, so this method is generally used only for severe defocus blur.
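A NumPy/OpenCV sketch of the fast guided filter of Eqs. (5)-(13), assuming a single-channel float image; r = 8 and s = 4 follow the description, while δ = 1e-3 and the resize-based subsampling are assumptions:

```python
import cv2
import numpy as np

def fast_guided_filter(I, p, r=8, s=4, delta=1e-3):
    """Fast guided filter; I is the guide image, p the filter input (float32/64)."""
    f_mean = lambda img, rad: cv2.blur(img, (2 * rad + 1, 2 * rad + 1))
    h, w = I.shape[:2]
    I_s = cv2.resize(I, (w // s, h // s), interpolation=cv2.INTER_NEAREST)  # Eq. (9)
    p_s = cv2.resize(p, (w // s, h // s), interpolation=cv2.INTER_NEAREST)
    r_s = max(r // s, 1)                                  # equivalent window radius r'
    mean_I, mean_p = f_mean(I_s, r_s), f_mean(p_s, r_s)   # Eq. (10)
    corr_I, corr_Ip = f_mean(I_s * I_s, r_s), f_mean(I_s * p_s, r_s)
    var_I = corr_I - mean_I * mean_I                      # Eq. (11)
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + delta)                          # Eq. (12)
    b = mean_p - a * mean_I
    mean_a = cv2.resize(f_mean(a, r_s), (w, h), interpolation=cv2.INTER_LINEAR)
    mean_b = cv2.resize(f_mean(b, r_s), (w, h), interpolation=cv2.INTER_LINEAR)
    return mean_a * I + mean_b                            # Eq. (13)
```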
The invention has the following beneficial effects:
the method for extracting the center of the image in the out-of-focus blur centering of the rotationally symmetric aspheric optical element mainly solves the problem that the center of a cross reticle image is difficult to extract due to out-of-focus blur of a centering cross reticle image caused by the fact that the normal aberration of the aspheric optical element is larger than the depth of field of a centering system when the rotationally symmetric aspheric optical element is centered. The measurable range of the centering system for the rotationally symmetric aspheric optical element is greatly improved by using an IRCNN-based deep learning deblurring model or an improved dark channel prior defogging algorithm. As the algorithm applies multithreading and parallel technology in a large number, fragmentary time slices are fully utilized, the ROI is extracted in advance, the size of an image needing to be processed is reduced, only 200X 200 ROI needs to be operated and processed, and the centering efficiency is remarkably improved.
Drawings
Fig. 1 shows a schematic configuration of the centering system.
FIG. 2A shows the centered cross-reticle image of a rotationally symmetric ellipsoid (conic coefficient −1) with a vertex sphere radius of 18.281 mm, under a replacement objective with a focal length of 100 mm, in the algorithm's best-focus state.
FIG. 2B shows the centered cross-reticle image of a spherical optical element with a vertex sphere radius of 38.79 mm, under a replacement objective with a focal length of 100 mm, in the algorithm's best-focus state.
FIG. 3 shows the relationship between vertex sphere radius and normal aberration for rotationally symmetric ellipsoidal quadric optical elements, and the relationship between the depth of field of the centering system and the normal aberration for a fixed replacement-objective focal length.
FIG. 4 is a flowchart of an image extraction center algorithm in the defocus blur centering of the aspheric optical element.
FIG. 5 shows the SMD2 sharpness evaluation curves for ROIs of 27 images acquired through step 1-2 and step 1-3 for a rotationally symmetric ellipsoidal quadric optical element with a vertex sphere radius of 8.8 mm.
Fig. 6A shows the 3 × 3 dilated convolution kernel with dilation 1.
Fig. 6B shows the 3 × 3 dilated convolution kernel with dilation 2.
FIG. 7A shows the image whose ROI has the highest SMD2 sharpness evaluation value among the 27 images.
FIG. 7B shows the result of deblurring the image of FIG. 7A with the IRCNN deep-learning model.
FIG. 7C shows the result of deblurring the image of FIG. 7A with the improved dark-channel defogging algorithm.
FIG. 8A shows the result of step 2-1 adaptive thresholding.
FIG. 8B shows the result of extracting the center of the cross reticle in step 2-3.
Detailed Description
The invention is further illustrated by the following figures and examples.
Example 1
Fig. 1 shows a schematic configuration of the centering system, in which the Y axis (S7) is placed on a marble table (S8) and the replacement objective (S3) and CCD (S1) are screwed to the centering unit (S2). During initialization the X axis (S6) and Y axis (S7) are driven to move the rotationally symmetric aspheric element (S4) directly below the replacement objective (S3); the Z axis (S10) is then adjusted so that the distance d between the replacement objective (S3) and the element (S4) equals the focal length f of the objective, at which point the cross reticle is imaged by the element's upper surface. The Z axis (S10) is then adjusted again according to the vertex sphere radius r of the element (taken positive when the convex surface faces upward and negative when the concave surface faces upward) so that d = f − r, which yields the cross-reticle image (S9) formed by reflection at the vertex sphere center (S5).
Unlike spherical optical elements, aspheric optical elements exhibit normal aberration along the depth-of-field direction. When the normal aberration exceeds the system's depth of field, the image of the centering cross reticle is defocus-blurred on the CCD imaging surface; because the energy diverges, the brightness of the reticle image drops while the background gray level rises, which is very unfavorable for extracting the image center, as shown in Fig. 2A. A spherical element has no normal aberration, so in theory a sharp in-focus image always exists, as shown in Fig. 2B.
FIG. 3 shows the relationship between vertex sphere radius and normal aberration for rotationally symmetric ellipsoidal quadric optical elements, and the relationship between the depth of field of the centering system and the normal aberration for a fixed replacement-objective focal length. As can be seen from Fig. 3, with replacement objectives of focal length 50 mm and 100 mm the defocus-free imaging range of such elements is very narrow: the corresponding defocus-free limit vertex sphere radii are 1.9996 mm and 5.9986 mm respectively. The defocus becomes more pronounced as the vertex sphere radius increases, and there is a defocus-blur limit beyond which traditional machine-vision algorithms fail. The focal length of the replacement objective is inversely proportional to the system magnification: the smaller the focal length, the larger the magnification and the more favorable for high-precision centering. One objective of the invention is to raise this limit so that the centering system can measure a wider range of aspheric elements.
FIG. 4 is a flowchart of the image-center extraction algorithm for defocus-blur centering of rotationally symmetric aspheric optical elements. First the aspheric centering unit is initialized and a replacement-objective focal length is chosen according to the vertex sphere radius of the upper surface of the element under test; the X, Y axes and the Z axis are driven so that the vertex sphere center of the upper surface is approximately in focus, and a coarse-focus centering image is acquired. After acquisition the software automatically starts two threads. One thread, taking the SMD2 sharpness evaluation function as the metric, extracts the sharpest 200 × 200 ROI in the image, i.e. the region of the cross-reticle center, and stores its coordinates; this search runs in parallel in software, which greatly improves efficiency. Because the acquired image does not necessarily correspond to the system's best-focus state, the second thread drives the Z guide rail to acquire 13 images upward and 13 downward at a 10 µm step. Then, in a parallel environment, the SMD2 sharpness of the same ROI is evaluated on the 26 newly acquired images, and the image with the maximum of the 27 sharpness values (1 coarse-focus image + 26 subsequently acquired images) is taken as the best-focus image under the constrained conditions. FIG. 5 shows the SMD2 sharpness evaluation curves of the ROIs of the 27 images acquired through steps 1-2 and 1-3 for a rotationally symmetric ellipsoidal quadric optical element with a vertex sphere radius of 8.8 mm.
The maximum sharpness value is compared with the preset prior sharpness threshold of 0.35. If it is below the threshold, deblurring is performed with the IRCNN deep-learning model or with the improved dark-channel-prior defogging algorithm, and the SMD2 value is recomputed on the deblurred image. Figs. 6A and 6B show the 3 × 3 dilated convolution kernels with dilations 1 and 2 respectively. The decimal numbers displayed in Figs. 7A, 7B, and 7C are the SMD2 value of the original image's ROI, the SMD2 value after IRCNN deep-learning deblurring, and the SMD2 value after the improved dark-channel defogging algorithm; the deblurred SMD2 values improve by an order of magnitude over the original. The four integers in parentheses are the column of the ROI's upper-left corner, the row of the upper-left corner, the ROI width, and the ROI height. If the deblurred SMD2 sharpness value is still below the threshold, a warning is output that the sample under test exceeds the detection limit of the centering system; if the maximum sharpness value, or the deblurred sharpness value, is greater than or equal to the threshold, the ROI is binarized. Because the background and foreground gray levels of the ROI are nonuniform (the closer to the center of the cross-reticle image, the higher the gray level of both foreground and background, and the farther from the center, the lower), the background gray near the center may exceed the foreground gray near the boundary, so the binarization threshold must be chosen region by region, i.e. adaptive-threshold binarization. Since the ROI is only 200 × 200, small connected domains are removed with a 5 × 5 small-structuring-element morphological erosion. Finally the maximum inscribed circle of all connected domains is computed (small connected domains may remain after erosion); the inscribed circle with the largest radius is that of the cross-reticle center, and its center is taken as the reticle center. Experimental verification shows that the Euclidean distance between the software-extracted center and the actual reticle center does not exceed 4 pixels. Figs. 8A and 8B show the adaptive-threshold binarization result of step 2-1 and the reticle-center extraction result of step 2-3, respectively.
The extensive use of multithreading and parallel techniques in the algorithm greatly optimizes the management and use of time slices, allowing the algorithm to extract the center point of the defocus-blurred centering cross-reticle image automatically, accurately, and quickly.

Claims (7)

1. The method for automatically extracting the center of the image in the out-of-focus fuzzy centering of the aspheric optical element, characterized by comprising the following steps:
step 1, obtaining the ROI (region of interest) of the best-focus image from centering-system theory and judging the value of its gray-variance SMD2 sharpness evaluation function: if the SMD2 sharpness value of the ROI of the input image is greater than or equal to a prior sharpness threshold, entering step 2; if it is smaller than the prior sharpness threshold, entering step 3;
step 2, extracting the center of the cross reticle with an image processing algorithm, and ending;
step 3, selecting an image deblurring algorithm according to requirements, then calculating the SMD2 sharpness value of the ROI of the deblurred image; if it is still smaller than the prior sharpness threshold, quitting and outputting warning information, otherwise returning to step 2;
the specific operations of step 1 being as follows:
1-1, adjusting the X (S6) and Y (S7) axes to move the rotationally symmetric aspheric optical element (S4) to the initial position, adjusting the Z guide rail (S10) according to the focal length f of the replacement objective (S3) so that the upper surface of the element is approximately in focus, and then moving the Z axis upward or downward by the vertex sphere radius r of the aspheric element, so that light is focused on the vertex sphere center of the element's upper surface (S5);
1-2, after the adjustment of step 1-1, an image of the cross reticle reflected from the vertex sphere center of the upper surface of the aspheric optical element appears on the CCD, whether blurred or sharp; finely adjusting the X, Y axes so that the image lies as close to the center of the field of view as possible, then acquiring a coarse-focus cross-reticle image;
1-3, starting two threads, one thread obtaining the ROI: using the SMD2 sharpness evaluation function as the metric, finding the highest-scoring 200 × 200 ROI on the coarse-focus image, i.e. the region where the center of the centering cross reticle is located; the other thread searching for the best-focus position of the cross reticle: starting from the coarse-focus position, driving the Z guide rail to acquire 1 image every 10 µm, 13 images upward and 13 images downward;
1-4, calculating the SMD2 sharpness evaluation value of the ROI of each image, comparing the ROIs' SMD2 values, taking the maximum and comparing it with the prior sharpness threshold; if it is greater than the prior sharpness threshold, entering step 2, otherwise entering step 3.
2. The method for automatically extracting the center of an image in the out-of-focus blur centering of an aspheric optical element as claimed in claim 1, wherein the ROIs of step 1-4 are those of the 27 images acquired in steps 1-2 and 1-3 or the ROI of the image deblurred in step 3.
3. The method for automatically extracting the center of an image in an out-of-focus blur centering of an aspheric optical element as claimed in claim 1 or 2, wherein in step 1-3 the highest-scoring 200 × 200 ROI on the coarse-focus image, i.e. the region where the center of the centering cross reticle is located, is obtained under parallel conditions using the SMD2 sharpness evaluation function as the metric, as follows:
firstly, creating a structure data structure storing the upper-left coordinates, width, and height of the ROI, and creating a map data structure whose value is the created structure and whose key is the SMD2 sharpness evaluation value of the corresponding ROI, the expression of SMD2 being:
SMD2 = ∑_x ∑_y |f(x,y) − f(x+1,y)| × |f(x,y) − f(x,y+1)|    (1)
f(x, y) is the image gray value at pixel coordinates (x, y);
setting the sub-region width and height and the search step on the input image; with these settings, computing in parallel the SMD2 values of the sub-regions of the specified width and height at the given step on the input image, and storing them in the map data structure; sorting the map by key in descending order, the sub-region at the head of the map queue being the region ROI with the highest sharpness score on the image, i.e. the region of the center of the cross reticle.
4. The method for automatically extracting the center of an image in the defocus-blur centering of an aspheric optical element according to claim 3, wherein step 2 extracts the center of the cross reticle with an image processing algorithm, specifically as follows:
2-1, obtaining a binary image from the ROI with an adaptive-threshold binarization algorithm, specifically as follows:
setting the block size ksize of a single binarization region to 17 and computing the Gaussian weight T(x, y) of each pixel in the block:
T(x,y) = α · exp[ −( i(x,y) − (ksize−1)/2 )² / (2·θ)² ]    (2)
where i(x, y) is the pixel coordinate value taken with the center of the block as the origin, θ = 0.3·[(ksize−1)·0.5 − 1] + 0.8, and α is a normalization constant satisfying ∑ T(x, y) = 1;
the binarization rule is:
dst(x,y) = 255 if src(x,y) > ∑_(i,j) T(i,j)·src(x+i, y+j), and dst(x,y) = 0 otherwise    (3)
dst(x, y) is the target binary image, src(x, y) is the original ROI image;
2-2, because the ROI image is only 200 × 200, applying a morphological erosion with a small 5 × 5 structuring element to the binary image to remove small connected-domain interference;
2-3, finding the maximum inscribed circle of all connected domains in the ROI image and taking its center, specifically as follows:
firstly, extracting all connected domains in the ROI image and computing the maximum inscribed circle of each, storing the center coordinates and radius of each inscribed circle; sorting the results by radius in descending order, the circle with the largest radius being the inscribed circle of the cross-reticle center, whose center coordinates approximate the center coordinates of the cross reticle; experiments verify that the Euclidean distance between this inscribed-circle center and the cross-reticle center is within 4 pixels.
5. The method for automatically extracting the center of an image in the out-of-focus blur centering of an aspheric optical element according to claim 4, wherein step 3 selects an image deblurring algorithm according to actual requirements, specifically as follows:
3-1, determining how many times the deblurring algorithm has been entered; if this is the 1st entry, performing the deblurring processing directly; if the count is greater than 1, ending and outputting a warning that the sample under test exceeds the detection limit of the centering system.
6. The method for automatically extracting the center of an image in an out-of-focus blur centering of an aspheric optical element as claimed in claim 5, wherein the deblurring processing of step 3-1 deblurs the whole image with the highest SMD2 value using a trained IRCNN deep-learning model, specifically as follows:
under a replacement objective with a focal length of 100 mm, 27 images of each of 7 rotationally symmetric aspheric optical elements are collected following steps 1-1 to 1-3, 7 × 27 = 189 images in total; to expand the training sample set, each captured image undergoes 5 brightness changes within a limited range and 6 rotation changes within a limited range, the final training set containing 7 × 27 × (1 + 5 × 6) = 5859 samples; the deep-learning model, an image restoration convolutional neural network, is then trained for 160000 iterations and the final model is stored;
the defocus degradation model of an image can be expressed as Y = H·X + V, where H is the blur operator, V is additive noise, X is the ideal in-focus image, and Y is the defocused degraded image; by Bayesian probability analysis, the estimate X̂ of X is obtained by solving the maximum a posteriori estimation problem:
X̂ = arg min_X ½‖Y − H·X‖² + λ·Φ(X)    (4)
the first half of equation (4) is the fidelity term and the second half is the regularization term; it is solved either by an optimization method based on the image degradation model Y = H·X + V or by a discriminative learning method, whose purpose is to obtain the prior parameters Θ through iterative training; IRCNN combines the advantages of both methods by means of half-quadratic splitting, determining the objective function as:
L_μ(X, Z) = ½‖Y − H·X‖² + λ·Φ(Z) + (μ/2)‖Z − X‖²
which is minimized by alternating between X_(k+1) = arg min_X ‖Y − H·X‖² + μ‖X − Z_k‖² and Z_(k+1) = arg min_Z (μ/2)‖Z − X_(k+1)‖² + λ·Φ(Z);
aiming at training a deblurrer, the model comprises seven convolutional layers with the following structure:
layer 1: 3 × 3 dilated convolution with dilation 1 + rectified linear activation function (ReLU);
layers 2 to 6: 3 × 3 dilated convolutions with dilations 2, 3, 4, 3, and 2 respectively + batch normalization + ReLU;
layer 7: 3 × 3 dilated convolution with dilation 1;
where ReLU has the functional form f(x) = max(0, x);
test results show that the SMD2 sharpness evaluation value of the deblurred image improves by nearly an order of magnitude, and the processing speed reaches 20 fps on a dual-core four-thread Intel i3-7100 CPU at 3.9 GHz, i.e. deblurring a single ROI image takes only 50 ms.
7. The method for automatically extracting the center of an image in the out-of-focus blur centering of an aspheric optical element as claimed in claim 5, wherein the deblurring processing of step 3-1 deblurs the whole image with the highest SMD2 value using a modified dark-channel-prior defogging algorithm, specifically as follows:
a fast guided filtering algorithm with lower time complexity replaces the guided filtering algorithm of the dark-channel-prior defogging algorithm, the mathematical expressions of guided filtering being given in (5) to (8) below:
mean_I = f_mean(I), mean_p = f_mean(p), corr_I = f_mean(I.*I), corr_Ip = f_mean(I.*p)    (5)
I is the guide image, p is the filter input image, and f_mean is a mean filter with window radius r; mean_I and mean_p are the mean-filtered results of I and p respectively, corr_I is the autocorrelation of I, and corr_Ip is the cross-correlation of I and p;
var_I = corr_I − mean_I.*mean_I, cov_Ip = corr_Ip − mean_I.*mean_p    (6)
var_I is the variance of I, and cov_Ip is the covariance of I and p;
a = cov_Ip./(var_I + δ), b = mean_p − a.*mean_I    (7)
δ is a regularization parameter;
mean_a = f_mean(a), mean_b = f_mean(b), q = mean_a.*I + mean_b    (8)
q is the output of guided filtering; in the guided filtering algorithm f_mean has time complexity O(N) and must be computed several times, so the algorithm is relatively slow; fast guided filtering with time complexity O(N/s²) is therefore used, where s is the image scaling factor (s is 4 in the program), the mathematical expressions of fast guided filtering being given in (9) to (13) below:
I' = f_subsample(I, s), p' = f_subsample(p, s), r' = r/s    (9)
I' is the downsampled guide image, p' is the downsampled filter input image, and r' is the mean-filter window radius equivalent to r after downsampling;
mean_I' = f_mean(I', r'), mean_p' = f_mean(p', r'), corr_I' = f_mean(I'.*I', r'), corr_I'p' = f_mean(I'.*p', r')    (10)
mean_I' and mean_p' are the mean-filtered results of I' and p' respectively, corr_I' is the autocorrelation of I', and corr_I'p' is the cross-correlation of I' and p';
var_I' = corr_I' − mean_I'.*mean_I', cov_I'p' = corr_I'p' − mean_I'.*mean_p'    (11)
var_I' is the variance of I', and cov_I'p' is the covariance of I' and p';
a = cov_I'p'./(var_I' + δ), b = mean_p' − a.*mean_I'    (12)
δ is a regularization parameter;
mean_a = f_mean(a, r'), mean_b = f_mean(b, r'), q' = f_upsample(mean_a, s).*I + f_upsample(mean_b, s)    (13)
q' is the output of fast guided filtering; the essence of fast guided filtering is to reduce the number of pixels operated on by downsampling, thereby reducing the time complexity: mean_a and mean_b are computed and then upsampled to restore the image size.
CN201910481302.8A 2019-06-04 2019-06-04 Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element Active CN110428463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910481302.8A CN110428463B (en) 2019-06-04 2019-06-04 Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910481302.8A CN110428463B (en) 2019-06-04 2019-06-04 Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element

Publications (2)

Publication Number Publication Date
CN110428463A CN110428463A (en) 2019-11-08
CN110428463B true CN110428463B (en) 2021-09-14

Family

ID=68408426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910481302.8A Active CN110428463B (en) 2019-06-04 2019-06-04 Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element

Country Status (1)

Country Link
CN (1) CN110428463B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429422A (en) * 2020-03-19 2020-07-17 中国工程物理研究院激光聚变研究中心 Laser near-field state analysis method and device based on deep learning
CN112285876A (en) * 2020-11-04 2021-01-29 邱妙娜 Camera automatic focusing method based on image processing and bubble detection
CN114422660A (en) * 2021-12-06 2022-04-29 江苏航天大为科技股份有限公司 Imaging focusing system of designated monitoring area
CN114581857B (en) * 2022-05-06 2022-07-05 武汉兑鑫科技实业有限公司 Intelligent crown block control method based on image analysis
CN115393352A (en) * 2022-10-27 2022-11-25 浙江托普云农科技股份有限公司 Crop included angle measuring method based on image recognition and application thereof
CN116071657B (en) * 2023-03-07 2023-07-25 青岛旭华建设集团有限公司 Intelligent early warning system for building construction video monitoring big data

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1468314A2 (en) * 2001-12-18 2004-10-20 University Of Rochester Imaging using a multifocal aspheric lens to obtain extended depth of field
TW200844869A (en) * 2007-05-04 2008-11-16 Univ Nat Central A fabrication method and the structure of prism lens capable of adjusting optical path
CN103776389A (en) * 2014-01-10 2014-05-07 浙江大学 High-precision aspheric combined interference detection device and high-precision aspheric combined interference detection method
CN108009999A (en) * 2017-11-30 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN109544462A (en) * 2018-09-28 2019-03-29 北京交通大学 License plate image deblurring method based on adaptively selected fuzzy core
CN109632264A (en) * 2018-12-29 2019-04-16 中国科学院西安光学精密机械研究所 A kind of detection device and method of photographic device environmental test stability
CN109669264A (en) * 2019-01-08 2019-04-23 哈尔滨理工大学 Self-adapting automatic focus method based on shade of gray value

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9912848B2 (en) * 2015-01-21 2018-03-06 Siemens Energy, Inc. Method and apparatus for turbine internal visual inspection with foveated optical head and dual image display
US9772465B2 (en) * 2015-06-04 2017-09-26 Qualcomm Incorporated Methods and devices for thin camera focusing alignment
CN105389820A (en) * 2015-11-18 2016-03-09 成都中昊英孚科技有限公司 Infrared image definition evaluating method based on cepstrum
CN107339955B (en) * 2017-01-07 2020-11-13 深圳市灿锐科技有限公司 High-precision lens center deviation detection instrument and measurement method thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1468314A2 (en) * 2001-12-18 2004-10-20 University Of Rochester Imaging using a multifocal aspheric lens to obtain extended depth of field
TW200844869A (en) * 2007-05-04 2008-11-16 Univ Nat Central A fabrication method and the structure of prism lens capable of adjusting optical path
CN103776389A (en) * 2014-01-10 2014-05-07 浙江大学 High-precision aspheric combined interference detection device and high-precision aspheric combined interference detection method
CN108009999A (en) * 2017-11-30 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN109544462A (en) * 2018-09-28 2019-03-29 北京交通大学 License plate image deblurring method based on adaptively selected fuzzy core
CN109632264A (en) * 2018-12-29 2019-04-16 中国科学院西安光学精密机械研究所 A kind of detection device and method of photographic device environmental test stability
CN109669264A (en) * 2019-01-08 2019-04-23 哈尔滨理工大学 Self-adapting automatic focus method based on shade of gray value

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Complicated intermittent scratches detection research on surface of optical components based on adaptive sector scanning algorithm cascading mean variance threshold algorithm; Fanyi Wang; 10th International Symposium on Precision Engineering Measurements and Instrumentation (ISPEMI 2018); 2019-03-07; pp. 110531L-1 to 110531L-8 *
Learning Deep CNN Denoiser Prior for Image Restoration; Kai Zhang et al.; arXiv; 2017-04-11; pp. 1-11 *
Research on center-location methods for retroreflective targets based on microscopic vision; Li Yan et al.; Metrology & Measurement Technology; 2018-10-28; Vol. 38, No. 5; pp. 48-55 *
Research and implementation of a dark-channel-prior defogging algorithm using fast guided filtering; Wang Haoran; Digital Technology & Application; 2015-12-31; No. 11; pp. 122-123, 127 *

Also Published As

Publication number Publication date
CN110428463A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
CN110428463B (en) Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element
CN109785245B (en) Light spot image trimming method
CN111462075B (en) Rapid refocusing method and system for full-slice digital pathological image fuzzy region
CN102780847A (en) Camera automatic focusing control method focused on moving target
EP2584310A1 (en) Image processing device, and image processing method
CN116990323B (en) High-precision printing plate visual detection system
Pertuz et al. Reliability measure for shape-from-focus
CN109064505A (en) A kind of depth estimation method extracted based on sliding window tensor
CN102542535B (en) Method for deblurring iris image
CN110427979A (en) Road puddle recognition methods based on K-Means clustering algorithm
Malik et al. A Fuzzy-Neural approach for estimation of depth map using focus
CN113781413B (en) Electrolytic capacitor positioning method based on Hough gradient method
CN110648302A (en) Light field full-focus image fusion method based on edge enhancement guide filtering
CN117197108A (en) Optical zoom image quality evaluation method, system, computer device and medium
TWI521295B (en) Bevel-axial auto-focus microscopic system and method thereof
CN105913418B (en) A kind of Pupil Segmentation method based on multi-threshold
CN116579958A (en) Multi-focus image fusion method of depth neural network guided by regional difference priori
CN108830804B (en) Virtual-real fusion fuzzy consistency processing method based on line spread function standard deviation
CN112508828A (en) Multi-focus image fusion method based on sparse representation and guided filtering
CN116363064A (en) Defect identification method and device integrating target detection model and image segmentation model
Karakaya et al. An iris segmentation algorithm based on edge orientation for off-angle iris recognition
CN115760893A (en) Single droplet particle size and speed measuring method based on nuclear correlation filtering algorithm
Ho et al. AF-Net: A convolutional neural network approach to phase detection autofocus
Wang et al. Shape-from-focus reconstruction using block processing followed by local heat-diffusion-based refinement
Hui et al. An improved focusing algorithm based on image definition evaluation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant