CN110428463A - Method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element - Google Patents

Method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element

Info

Publication number
CN110428463A
CN110428463A
Authority
CN
China
Prior art keywords
image
mean
roi
smd2
cross
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910481302.8A
Other languages
Chinese (zh)
Other versions
CN110428463B (en)
Inventor
杨甬英
王凡祎
楼伟民
白剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910481302.8A priority Critical patent/CN110428463B/en
Publication of CN110428463A publication Critical patent/CN110428463A/en
Application granted granted Critical
Publication of CN110428463B publication Critical patent/CN110428463B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses a method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element. The method uses the gray-variance (SMD2) sharpness evaluation function as the metric. It first locates the region of interest (ROI) with the highest sharpness in a coarsely focused image, i.e. the region containing the center of the cross reticle. From the odd number of images acquired at equal steps along the optical axis it then selects the one whose ROI has the highest SMD2 value; if that value is below the prior threshold, the best-focused image is deblurred either with a trained IRCNN deep-learning model or with an improved dark-channel-prior algorithm. Adaptive-threshold binarization and morphological operations are then applied to obtain the connected domain of the cross reticle, its maximum inscribed circle is computed, and the center of the inscribed circle is taken as the center of the cross reticle. The invention solves the problem that, because the normal aberration of the aspheric surface exceeds the depth of field of the system, the cross-reticle centering image of an aspheric optical element becomes defocus-blurred and its center is difficult to extract.

Description

Method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element
Technical field
The invention belongs to the field of machine-vision inspection, and in particular relates to a method for automatically extracting the image center during defocus-blurred centering of rotationally symmetric aspheric optical elements.
Background art
Aspheric optical elements are widely used in systems such as large-aperture space telescopes, inertial confinement fusion (ICF) facilities and high-power lasers. Because the manufacturing process cannot be controlled perfectly, defects inevitably exist on the element surface. These defects not only degrade the imaging quality of the optical system; in high-energy laser systems they also produce unwanted scattering and diffraction, causing energy loss and possibly even secondary damage. Surface defects therefore have to be detected and assessed, and before detection the aspheric element must be centered so that its optical axis is as collinear as possible with the optical axis of the detection system. During centering, the axis is adjusted according to the center position of the cross-reticle centering image. Unlike spherical optical elements, aspheric optical elements exhibit a normal aberration along the optical axis, and this aberration increases with the asphericity of the sample and with the vertex radius of curvature. For a given replacement objective, the depth of field of the centering system is fixed; when the normal aberration of the aspheric optical element exceeds the depth of field of the centering system, the image exceeds the resolving capability of adjacent pixels on the CCD and becomes defocus-blurred. Because the energy is spread out, the brightness of the cross-reticle centering image decreases while the background gray level increases, which is very unfavorable for extracting the center of the cross-reticle centering image. With the system depth of field fixed, the larger the normal aberration, the lower the gray-variance (SMD2) sharpness value of the cross-reticle centering image collected by the CCD; once this value falls below a certain threshold, conventional image-processing algorithms can no longer extract the center coordinates of the cross-reticle image. To extend the measurable range of the system for aspheric optical elements as far as possible, a flexible method that can be chosen according to specific requirements, based on an "IRCNN deep-learning deblurring model" and an "improved dark-channel-prior dehazing algorithm", is proposed for automatically extracting the image center during defocus-blurred centering of aspheric optical elements.
Summary of the invention
The invention aims to solve the problem that the cross-reticle image used for centering a rotationally symmetric aspheric optical element becomes defocus-blurred, so that its center is difficult to extract, when the normal aberration of the aspheric surface exceeds the depth of field of the system. To this end, a method for extracting the image center during defocus-blurred centering of rotationally symmetric aspheric optical elements is proposed.
The technical solution adopted by the invention comprises the following steps:
Step 1. Obtain the region of interest (ROI) of the theoretically best-focused image of the centering system and evaluate the gray-variance (SMD2) sharpness value of the ROI. If the SMD2 sharpness value of the ROI of the input image is greater than or equal to the prior sharpness threshold, go to step 2; if it is smaller than the prior sharpness threshold, go to step 3;
Step 2. Extract the center of the cross reticle with an image-processing algorithm; the algorithm then terminates;
Step 3. Select an image-deblurring method according to actual needs, then recompute the SMD2 sharpness value of the ROI of the deblurred image. If it is still below the prior sharpness threshold, exit the program and output a warning message; otherwise return to step 2.
Step 1 is carried out as follows:
1-1. Adjust the X (S6) and Y (S7) axes to move the rotationally symmetric aspheric optical element (S4) to the initial position, and adjust the Z-axis guide rail (S10) according to the focal length f of the replacement objective (S3) so that the upper surface of the element is roughly in focus. According to the vertex radius of curvature r of the aspheric element, move the Z axis upward (concave surface facing up) or downward (convex surface facing up) by the distance r, so that the light is focused at the center of curvature (S5) of the vertex sphere of the element's upper surface;
1-2. After the adjustment of step 1-1, the CCD shows an image of the cross reticle, blurred or sharp, reflected by the center of curvature of the vertex sphere of the upper surface of the aspheric optical element. Fine-tune the X and Y axes so that this image lies as close to the center of the field of view as possible, then acquire one coarsely focused cross-reticle image;
1-3. Open two threads. One thread obtains the ROI: using the SMD2 sharpness evaluation function as the metric, it finds the 200×200 ROI with the highest score in the coarsely focused image, i.e. the region containing the center of the cross reticle. The other thread searches for the best focus position of the cross reticle: starting from the coarse focus position, it drives the Z-axis guide rail upward and downward, acquiring one image every 10 µm, 13 images in each direction;
1-4. Compute the SMD2 sharpness value of the ROI of each image and compare them. Take the maximum and compare it with the prior sharpness threshold; if it is greater than the prior sharpness threshold, go to step 2, otherwise go to step 3;
The ROIs referred to here are the ROIs of the 27 images collected in steps 1-2 and 1-3, or the ROI of the image after the deblurring of step 3;
Obtaining, in parallel and with the SMD2 sharpness evaluation function as the metric, the 200×200 ROI with the highest score in the coarsely focused image, i.e. the region containing the center of the cross reticle, as described in step 1-3, is carried out as follows:
First create a structure that stores the top-left coordinates, width and height of an ROI, then create a map: the value is the structure just created, and the key is the SMD2 sharpness value of the corresponding ROI. SMD2 is defined as
SMD2 = Σ_x Σ_y |f(x, y) - f(x+1, y)| × |f(x, y) - f(x, y+1)|   (1)
where f(x, y) is the gray value of the image at pixel coordinates (x, y).
Set the width and height of the sub-regions and the search step on the input image. According to these settings, compute in parallel the SMD2 values of the sub-regions of the specified width (200 pixels) and height (200 pixels) at the given step (50 pixels) over the input image, and store them in the map. Sort the map by key in descending order; the sub-region at the head of the map is the region with the highest sharpness score (the ROI), i.e. the region containing the center of the cross reticle.
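For illustration, the following minimal Python sketch computes the SMD2 score of formula (1) and performs the sliding-window ROI search described above (200×200 window, 50-pixel step). The patent computes the window scores in parallel and keeps them in a map sorted by score; this single-threaded version and its function names are assumptions made only for illustration.

```python
import numpy as np

def smd2(roi: np.ndarray) -> float:
    """Gray-variance (SMD2) sharpness score of formula (1)."""
    f = roi.astype(np.float64)
    d_col = np.abs(f[:-1, :-1] - f[:-1, 1:])   # |f(x, y) - f(x+1, y)|
    d_row = np.abs(f[:-1, :-1] - f[1:, :-1])   # |f(x, y) - f(x, y+1)|
    return float(np.sum(d_col * d_row))

def find_sharpest_roi(img: np.ndarray, size: int = 200, step: int = 50):
    """Return ((x, y, w, h), score) of the size x size window with the highest SMD2."""
    h, w = img.shape[:2]
    best_box, best_score = (0, 0, size, size), -1.0
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            score = smd2(img[y:y + size, x:x + size])
            if score > best_score:
                best_score, best_box = score, (x, y, size, size)
    return best_box, best_score
```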
Because the image collected in step 1-2 is only coarsely focused, its sharpness has to be compared with that of neighboring images in order to find the image in which the central region of the cross reticle is sharpest. Thread two of step 1-3 therefore acquires 13 images upward and 13 downward with a step of 10 µm, which, together with the coarsely focused image of step 1-2, gives 27 images in total.
Extracting the center of the cross reticle with an image-processing algorithm, as described in step 2, is carried out as follows:
2-1. Apply adaptive-threshold binarization to the ROI to obtain a binary image, as follows:
Set the local block size ksize of the binarization to 17 and compute the Gaussian weight T(x, y) of each pixel in the block:
T(x, y) = α·exp[-(i(x, y) - (ksize-1)/2)² / (2·θ)²]   (2)
where i(x, y) is the position index of pixel (x, y), (x, y) being pixel coordinates with the center of the block as origin, θ = 0.3·[(ksize-1)·0.5 - 1] + 0.8, and α satisfies Σ T(x, y) = 1;
The binarization rule is given by formula (3): each pixel of the original ROI image src(x, y) is compared with its locally Gaussian-weighted threshold and assigned the foreground or background value accordingly, where dst(x, y) is the target binary image.
2-2. Because the ROI image is only 200×200, apply a morphological erosion with a small 5×5 structuring element to the binary image to remove small interfering connected domains;
2-3. Compute the maximum inscribed circle of every connected domain in the ROI image and find the center of the largest one, as follows:
First find all connected domains in the ROI image and compute the maximum inscribed circle of each, saving the center coordinates and radius of each inscribed circle. Sort the results by radius in descending order; the circle with the largest radius is the inscribed circle of the cross-reticle center, and its center can be taken as the center coordinates of the cross reticle. Experiments show that the Euclidean distance between the inscribed-circle center and the true cross-reticle center is within 4 pixels.
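A minimal OpenCV sketch of steps 2-1 to 2-3 is given below, assuming an 8-bit grayscale ROI with a bright reticle on a darker background. Reading the maximum inscribed circle off the distance transform, and all function and parameter names, are illustrative choices rather than the patent's own implementation.

```python
import cv2
import numpy as np

def extract_reticle_center(roi: np.ndarray):
    """roi: 200x200 8-bit grayscale ROI containing the cross reticle."""
    # 2-1: adaptive Gaussian thresholding with block size 17 (formulas (2)-(3))
    binary = cv2.adaptiveThreshold(roi, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 17, 0)
    # 2-2: 5x5 morphological erosion to remove small connected domains
    binary = cv2.erode(binary, cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))

    # 2-3: for each connected domain, the maximum inscribed circle can be read
    # from the distance transform: its radius is the largest distance to the
    # background, and its center is the pixel where that maximum is attained.
    num, labels = cv2.connectedComponents(binary)
    best_center, best_radius = None, -1.0
    for lab in range(1, num):
        mask = np.uint8(labels == lab)
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        _, radius, _, center = cv2.minMaxLoc(dist)
        if radius > best_radius:
            best_radius, best_center = radius, center
    return best_center, best_radius   # center (x, y) approximates the reticle center
```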
Selecting an image-deblurring algorithm according to actual needs, as described in step 3, is carried out as follows:
3-1. Determine how many times the deblurring algorithm has been entered. If this is the first time, jump directly to step 3-2 or step 3-3 to perform deblurring. If it is more than the first time, the ROI sharpness after the first deblurring pass still did not reach the prior sharpness threshold; for both deblurring algorithms, repeating the operation does not effectively improve sharpness and only wastes time, so the program terminates and outputs the warning "the sample under test exceeds the detection limit of the centering system";
Step 3-2 is the faster option; step 3-3 is relatively more accurate;
3-2. Deblur the whole image with the highest SMD2 value using the trained "IRCNN deep-learning model", as follows:
With a replacement objective of focal length 100 mm, 27 images were acquired for each of 7 kinds of rotationally symmetric aspheric optical elements following steps 1-1 to 1-3, 7 × 27 = 189 images in total. To enlarge the training set, each acquired image was given 5 brightness variations within a limited range and 6 rotation variations within a limited range, so the final training-set size is 7 × 27 × (1 + 5 × 6) = 5859. The deep-learning model, an image-restoration convolutional neural network (IRCNN), was then trained for 160000 iterations and the final model was stored.
The defocus degradation model of the image can be expressed as Y = HX + V, where H is the blur operator, V is additive noise, X is the ideal focused image and Y is the defocused degraded image. From the Bayesian point of view, the estimate X̂ of X can be obtained by solving a maximum a posteriori (MAP) estimation problem:
X̂ = argmin_X (1/2)‖Y - HX‖² + λΦ(X)   (4)
The first part of formula (4) is the fidelity term and the second part is the regularization term. Such problems are generally solved either with optimization methods based on the image degradation model Y = HX + V or with discriminative learning methods (deep learning), the goal of the discriminative learning method being to obtain the prior parameters Θ through iterative training. IRCNN combines the advantages of the two approaches through the half-quadratic splitting (HQS) method, giving the objective function L_μ(X, Z) = (1/2)‖Y - HX‖² + λΦ(Z) + (μ/2)‖Z - X‖². The aim is to train a deblurring network; the model comprises seven convolutional layers with the following structure:
Layer 1: 3×3 dilated convolution with dilation 1 + rectified linear unit (ReLU) activation;
Layers 2-6: 3×3 dilated convolutions with dilations 2, 3, 4, 3 and 2 respectively + batch normalization + ReLU;
Layer 7: 3×3 dilated convolution with dilation 1;
The functional form of ReLU is f(x) = max(0, x). Test results show that the SMD2 sharpness evaluation value of the deblurred image is improved by nearly an order of magnitude, and the processing speed reaches 20 fps on a dual-core four-thread Intel i3-7100 CPU with a base clock of 3.9 GHz, i.e. deblurring a single ROI image takes only 50 ms.
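A minimal PyTorch sketch of the seven-layer dilated-convolution network described above (dilations 1, 2, 3, 4, 3, 2, 1, batch normalization on layers 2-6) is shown below. The channel width, the padding and the residual output are assumptions of this sketch; they are not specified in the text.

```python
import torch
import torch.nn as nn

class IRCNNDeblur(nn.Module):
    """Seven-layer dilated CNN in the spirit of the IRCNN deblurring model."""
    def __init__(self, channels: int = 1, features: int = 64):
        super().__init__()
        dilations = [1, 2, 3, 4, 3, 2, 1]
        layers, in_ch = [], channels
        for i, d in enumerate(dilations):
            out_ch = channels if i == len(dilations) - 1 else features
            layers.append(nn.Conv2d(in_ch, out_ch, kernel_size=3,
                                    padding=d, dilation=d))
            if i < len(dilations) - 1:
                if i > 0:                              # layers 2-6: batch normalization
                    layers.append(nn.BatchNorm2d(out_ch))
                layers.append(nn.ReLU(inplace=True))   # ReLU(x) = max(0, x)
            in_ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # The network predicts the residual and subtracts it from the blurred
        # input, a common choice for IRCNN-style restoration networks.
        return y - self.body(y)
```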
3-3. Deblur the whole image with the highest SMD2 value using the improved "dark-channel-prior dehazing algorithm", as follows:
The original dark-channel dehazing algorithm is built on three-channel RGB images; here it is improved so that it also performs well on grayscale images. The guided-filter algorithm of the original method is replaced with a fast guided filter of lower time complexity. The mathematical expressions of the guided filter are given by formulas (5)-(8):
mean_I = f_mean(I),  mean_p = f_mean(p),  corr_I = f_mean(I.*I),  corr_Ip = f_mean(I.*p)   (5)
I is the guide image, p is the filter input image, and f_mean is a mean filter with window radius r (r = 8 in the program); mean_I and mean_p are the mean-filtered results of I and p, corr_I is the autocorrelation of I, and corr_Ip is the cross-correlation of I and p.
var_I = corr_I - mean_I.*mean_I,  cov_Ip = corr_Ip - mean_I.*mean_p   (6)
var_I is the variance of I and cov_Ip is the covariance of I and p.
a = cov_Ip./(var_I + δ),  b = mean_p - a.*mean_I   (7)
δ is a regularization parameter.
mean_a = f_mean(a),  mean_b = f_mean(b),  q = mean_a.*I + mean_b   (8)
q is the output of the guided filter. In the guided-filter algorithm the time complexity of f_mean is O(N), and f_mean has to be computed several times, so the algorithm is slow. A fast guided filter with time complexity O(N/s²) is therefore used instead, where s is the scaling factor of the image (s = 4 in the program). The mathematical expressions of the fast guided filter are given by formulas (9)-(13):
I' = f_subsample(I, s),  p' = f_subsample(p, s),  r' = r/s   (9)
I' is the downsampled (scaled) guide image, p' is the downsampled filter input image, and r' is the mean-filter window radius equivalent to r after downsampling.
mean_I' = f_mean(I', r'),  mean_p' = f_mean(p', r'),  corr_I' = f_mean(I'.*I', r'),  corr_I'p' = f_mean(I'.*p', r')   (10)
mean_I' and mean_p' are the mean-filtered results of I' and p', corr_I' is the autocorrelation of I', and corr_I'p' is the cross-correlation of I' and p'.
var_I' = corr_I' - mean_I'.*mean_I',  cov_I'p' = corr_I'p' - mean_I'.*mean_p'   (11)
var_I' is the variance of I' and cov_I'p' is the covariance of I' and p'.
a = cov_I'p'./(var_I' + δ),  b = mean_p' - a.*mean_I'   (12)
δ is a regularization parameter.
mean_a = f_upsample(f_mean(a, r'), s),  mean_b = f_upsample(f_mean(b, r'), s),  q' = mean_a.*I + mean_b   (13)
q' is the output of the fast guided filter. The essence of the fast guided filter is to reduce the number of pixels, and hence the time complexity, by downsampling; after mean_a and mean_b have been computed they are upsampled back to the original image size. Guided filtering of a single image (resolution 3296×2472) takes 2.6 s on average, while fast guided filtering takes 1.0 s. In most cases the SMD2 value of the image ROI obtained with the "dark-channel-prior dehazing algorithm" is higher than with the "IRCNN deep-learning model", but the processing time is an order of magnitude longer, so it is generally used only when the defocus blur is severe.
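A compact NumPy/OpenCV sketch of the fast guided filter of formulas (9)-(13), applied to a grayscale image used as its own guide, is given below. The parameter values r = 8 and s = 4 follow the text; using cv2.boxFilter for f_mean and cv2.resize for the sub- and up-sampling is an implementation assumption.

```python
import cv2
import numpy as np

def fast_guided_filter(I: np.ndarray, p: np.ndarray, r: int = 8, s: int = 4,
                       delta: float = 1e-3) -> np.ndarray:
    """Fast guided filter, formulas (9)-(13); I: guide, p: input, float32 in [0, 1]."""
    h, w = I.shape[:2]
    # (9) downsample the guide and the input, scale the window radius
    Is = cv2.resize(I, (w // s, h // s), interpolation=cv2.INTER_NEAREST)
    ps = cv2.resize(p, (w // s, h // s), interpolation=cv2.INTER_NEAREST)
    rs = max(r // s, 1)
    fmean = lambda x: cv2.boxFilter(x, -1, (2 * rs + 1, 2 * rs + 1))
    # (10)-(11) local means, correlations, variance and covariance
    mean_I, mean_p = fmean(Is), fmean(ps)
    var_I = fmean(Is * Is) - mean_I * mean_I
    cov_Ip = fmean(Is * ps) - mean_I * mean_p
    # (12) linear coefficients a and b
    a = cov_Ip / (var_I + delta)
    b = mean_p - a * mean_I
    # (13) average a and b, upsample to full resolution, form the output
    mean_a = cv2.resize(fmean(a), (w, h), interpolation=cv2.INTER_LINEAR)
    mean_b = cv2.resize(fmean(b), (w, h), interpolation=cv2.INTER_LINEAR)
    return mean_a * I + mean_b

# Example: q = fast_guided_filter(img, img) with img a float32 grayscale image in [0, 1]
```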
The beneficial effects of the invention are as follows:
The method proposed by the invention for extracting the image center during defocus-blurred centering of rotationally symmetric aspheric optical elements mainly solves the problem that, during centering, the normal aberration of the aspheric optical element exceeds the depth of field of the centering system, so that the cross-reticle centering image is defocus-blurred and its center is difficult to extract. By using either the "IRCNN deep-learning deblurring model" or the "improved dark-channel-prior dehazing algorithm", the measurable range of the centering system for rotationally symmetric aspheric optical elements is greatly extended. Because the algorithm makes extensive use of multithreading and parallel techniques, makes full use of fragmented time slices, and extracts the ROI in advance so that only the 200×200 ROI has to be processed, the centering efficiency is improved very considerably.
Detailed description of the invention
Fig. 1 is a schematic diagram of the centering system.
Fig. 2A shows the cross-reticle centering image, in the best focus state found by the algorithm, of a rotationally symmetric ellipsoidal quadric optical element (conic coefficient -1) with a vertex radius of curvature of 18.281 mm, using a replacement objective with a focal length of 100 mm.
Fig. 2B shows the cross-reticle centering image, in the best focus state found by the algorithm, of a spherical optical element with a vertex radius of curvature of 38.79 mm, using a replacement objective with a focal length of 100 mm.
Fig. 3 shows the relationship between the vertex radius of curvature of rotationally symmetric ellipsoidal quadric optical elements and the normal aberration, and, for a fixed replacement-objective focal length, the relationship between the depth of field of the centering system and the normal aberration.
Fig. 4 is the flow chart of the algorithm for extracting the image center during defocus-blurred centering of aspheric optical elements.
Fig. 5 shows the SMD2 sharpness evaluation curve of the ROIs of the 27 images collected in steps 1-2 and 1-3 for a rotationally symmetric ellipsoidal quadric optical element with a vertex radius of curvature of 8.8 mm.
Fig. 6A shows a 3×3 dilated convolution kernel with dilation 1.
Fig. 6B shows a 3×3 dilated convolution kernel with dilation 2.
Fig. 7A shows the image, among the 27 images, whose ROI has the highest SMD2 sharpness value.
Fig. 7B shows the result of deblurring the image of Fig. 7A with the IRCNN deep-learning model.
Fig. 7C shows the result of deblurring the image of Fig. 7A with the improved dark-channel dehazing algorithm.
Fig. 8A shows the result of the adaptive-threshold binarization of step 2-1.
Fig. 8B shows the result of extracting the cross-reticle center in step 2-3.
Specific embodiment
The invention is further explained below with reference to the drawings and an embodiment.
Embodiment 1
Fig. 1 is a schematic diagram of the centering system. The Y axis (S7) rests on a marble platform (S8); the replacement objective (S3), the CCD (S1) and the centering instrument (S2) are connected by threads. On initialization, the X axis (S6) and Y axis (S7) are first driven to move the rotationally symmetric aspheric element (S4) directly below the replacement objective (S3). The Z axis (S10) is then adjusted so that the distance d between the replacement objective (S3) and the rotationally symmetric aspheric optical element (S4) equals the focal length f of the replacement objective (S3); at this point an image of the cross reticle reflected by the upper surface of the rotationally symmetric aspheric optical element (S4) is obtained. The Z axis (S10) is then adjusted again according to the vertex radius of curvature r of the element (positive for a convex surface facing up, negative for a concave surface facing up) so that the distance d between the replacement objective (S3) and the element (S4) equals f - r, which yields the cross-reticle image (S9) reflected through the center of curvature (S5) of the vertex sphere.
Unlike spherical optical elements, aspheric optical elements exhibit a normal aberration along the depth-of-field direction. When the normal aberration exceeds the system depth of field, the cross-reticle centering image on the CCD becomes defocus-blurred; because the energy is spread out, the brightness of the cross-reticle image decreases while the background gray level increases, which is extremely unfavorable for extracting the center of the cross-reticle centering image, as shown in Fig. 2A. A spherical element has no normal aberration, so in theory its in-focus image is always sharp, as shown in Fig. 2B.
Fig. 3 shows the relationship between the vertex radius of curvature of rotationally symmetric ellipsoidal quadric optical elements and the normal aberration, and, for a fixed replacement-objective focal length, the relationship between the depth of field of the centering system and the normal aberration. As can be seen from Fig. 3, when replacement objectives with focal lengths of 50 mm and 100 mm are used, the non-defocused imaging region for rotationally symmetric ellipsoidal quadric optical elements is very narrow: the limiting vertex radii of curvature for non-defocused imaging are 1.9996 mm and 5.9986 mm respectively. The degree of defocus becomes more pronounced as the vertex radius of curvature increases, and there is a defocus-blur limit beyond which conventional machine-vision algorithms cannot cope. The focal length of the replacement objective is inversely proportional to the system magnification: the shorter the focal length, the larger the magnification and the more favorable it is for high-precision centering. One object of the invention is to raise this limit so that the measurable range of the centering system for aspheric elements becomes wider.
Fig. 4 is the flow chart of the algorithm for extracting the image center during defocus-blurred centering of rotationally symmetric aspheric optical elements. The aspheric centering unit is first initialized. According to the vertex radius of curvature of the upper surface of the aspheric optical element under test and the focal length of the replacement objective of the centering unit, the X/Y axes and the Z axis are controlled so that the center of curvature of the vertex sphere of the element's upper surface is roughly in focus, and one coarsely focused centering image is acquired. After acquisition, the software automatically opens two threads. One thread takes the acquired image and, using the SMD2 sharpness evaluation function as the criterion, extracts the 200×200 ROI with the highest sharpness, i.e. the region containing the center of the cross reticle, and stores the ROI coordinates; this search runs in parallel in software, which significantly increases its efficiency. Because the acquired image does not necessarily correspond to the best focus state of the system, the second thread drives the Z-axis guide rail upward and downward, acquiring 13 images in each direction with a step of 10 µm. Then, in a parallel environment, the SMD2 sharpness of the same ROI is evaluated in the 26 newly acquired images, and the image with the maximum of the 27 sharpness values (1 coarsely focused image + 26 images acquired afterwards) is regarded as the best-focused image under the given conditions. Fig. 5 shows the SMD2 sharpness evaluation curve of the ROIs of the 27 images collected in steps 1-2 and 1-3 for a rotationally symmetric ellipsoidal quadric optical element with a vertex radius of curvature of 8.8 mm.
The maximum of the sharpness evaluation values is compared with the preset prior sharpness threshold of 0.35. If it is below the threshold, deblurring is performed either with the "IRCNN deep-learning model" or with the improved "dark-channel-prior dehazing algorithm", and the SMD2 value of the deblurred image is recomputed. Figs. 6A and 6B show 3×3 dilated convolution kernels with dilations 1 and 2 respectively. The decimal numbers displayed in Figs. 7A, 7B and 7C are, respectively, the SMD2 value of the ROI of the original image, of the ROI after deblurring with the IRCNN deep-learning model, and of the ROI after deblurring with the improved dark-channel dehazing algorithm; the SMD2 values after deblurring are an order of magnitude higher than that of the original image. The four integers in the displayed brackets are the column of the ROI's top-left corner, the row of the ROI's top-left corner, the ROI width and the ROI height. If the SMD2 sharpness value after deblurring is still below the sharpness threshold, the warning "the sample under test exceeds the detection limit of the centering system" is output. If the maximum of the sharpness evaluation values, or the sharpness value after deblurring, is greater than or equal to the threshold, the ROI region is binarized. Because the background and foreground gray levels of the ROI image are uneven (the closer to the center of the cross-reticle image, the higher the gray level of both foreground and background, and the farther from the center, the lower the gray level), the background gray level near the center may be higher than the foreground gray level at the boundary, so the binarization threshold must be chosen region by region, i.e. adaptive-threshold binarization is used. Because the ROI itself is only 200×200, a morphological erosion with a small 5×5 structuring element is used to remove small connected domains. Finally, the maximum inscribed circles of all connected domains (small connected domains may still exist after erosion) are computed; the inscribed circle with the largest radius is the inscribed circle of the cross-reticle center, and its center is taken as the cross-reticle center. Experiments show that the Euclidean distance between the cross-reticle center extracted by the software and the actual cross-reticle center is no more than 4 pixels. Figs. 8A and 8B show the result of the adaptive-threshold binarization of step 2-1 and the result of extracting the cross-reticle center in step 2-3, respectively.
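The following short Python sketch summarizes the decision flow of Fig. 4. It reuses the smd2, find_sharpest_roi and extract_reticle_center helpers sketched earlier in this description, substitutes a simple unsharp-mask stand-in for the IRCNN and dark-channel deblurring steps, and takes the sharpness threshold as an argument on the same scale as smd2() (the embodiment quotes 0.35 for its own normalization of SMD2). All of this is illustrative only.

```python
import cv2
import numpy as np

def simple_deblur(roi: np.ndarray) -> np.ndarray:
    """Stand-in for the IRCNN / dark-channel deblurring step (unsharp masking)."""
    blurred = cv2.GaussianBlur(roi, (0, 0), sigmaX=3)
    return cv2.addWeighted(roi, 1.5, blurred, -0.5, 0)

def centering_pipeline(images, threshold):
    """images: the 27 acquired 8-bit grayscale frames; returns the center or None."""
    # Step 1: locate the ROI on the coarsely focused frame, then keep the frame
    # whose ROI has the highest SMD2 value.
    (x, y, w, h), _ = find_sharpest_roi(images[0])
    best = max((img[y:y + h, x:x + w] for img in images), key=smd2)

    # Step 3: at most one deblurring pass if sharpness is below the prior threshold.
    if smd2(best) < threshold:
        best = simple_deblur(best)
        if smd2(best) < threshold:
            print("warning: the sample under test exceeds the detection limit "
                  "of the centering system")
            return None

    # Step 2: adaptive binarization, erosion and maximum inscribed circle.
    center, _ = extract_reticle_center(best)
    return (x + center[0], y + center[1])   # back to full-image coordinates
```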
The algorithm makes extensive use of multithreading and parallelism to optimize the management and use of time slices, so it can automatically and accurately extract the center point of a defocus-blurred cross-reticle centering image at high speed.

Claims (8)

  1. A method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element, characterized by comprising the following steps:
    Step 1. Obtain the region of interest (ROI) of the theoretically best-focused image of the centering system and evaluate the gray-variance (SMD2) sharpness value of the ROI. If the SMD2 sharpness value of the ROI of the input image is greater than or equal to the prior sharpness threshold, go to step 2; if it is smaller than the prior sharpness threshold, go to step 3;
    Step 2. Extract the center of the cross reticle with an image-processing algorithm, and terminate;
    Step 3. Select an image-deblurring algorithm as required, then recompute the SMD2 sharpness value of the ROI of the deblurred image; if it is still below the prior sharpness threshold, exit and output a warning message, otherwise return to step 2.
  2. The method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element according to claim 1, characterized in that step 1 is carried out as follows:
    1-1. Adjust the X (S6) and Y (S7) axes to move the rotationally symmetric aspheric optical element (S4) to the initial position, and adjust the Z-axis guide rail (S10) according to the focal length f of the replacement objective (S3) so that the upper surface of the element is roughly in focus; according to the vertex radius of curvature r of the aspheric element, move the Z axis upward (concave surface facing up) or downward (convex surface facing up) by the distance r, so that the light is focused at the center of curvature (S5) of the vertex sphere of the element's upper surface;
    1-2. After the adjustment of step 1-1, the CCD shows an image of the cross reticle, blurred or sharp, reflected by the center of curvature of the vertex sphere of the upper surface of the aspheric optical element; fine-tune the X and Y axes so that this image lies as close to the center of the field of view as possible, then acquire one coarsely focused cross-reticle image;
    1-3. Open two threads: one thread obtains the ROI by finding, with the SMD2 sharpness evaluation function as the metric, the 200×200 ROI with the highest score in the coarsely focused image, i.e. the region containing the center of the cross reticle; the other thread searches for the best focus position of the cross reticle by driving the Z-axis guide rail upward and downward from the coarse focus position, acquiring one image every 10 µm, 13 images in each direction;
    1-4. Compute the SMD2 sharpness value of the ROI of each image and compare them; take the maximum and compare it with the prior sharpness threshold; if it is greater than the prior sharpness threshold go to step 2, otherwise go to step 3.
  3. The method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element according to claim 2, characterized in that the ROIs of the images in step 1-4 are the ROIs of the 27 images collected in steps 1-2 and 1-3, or the ROI of the image after deblurring in step 3.
  4. The method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element according to claim 2 or 3, characterized in that obtaining, in parallel and with the SMD2 sharpness evaluation function as the metric, the 200×200 ROI with the highest score in the coarsely focused image, i.e. the region containing the center of the cross reticle, as described in step 1-3, is carried out as follows:
    First create a structure that stores the top-left coordinates, width and height of an ROI, then create a map whose value is the structure just created and whose key is the SMD2 sharpness value of the corresponding ROI; SMD2 is defined as
    SMD2 = Σ_x Σ_y |f(x, y) - f(x+1, y)| × |f(x, y) - f(x, y+1)|   (1)
    where f(x, y) is the gray value of the image at pixel coordinates (x, y);
    Set the width and height of the sub-regions and the search step on the input image; according to these settings, compute in parallel the SMD2 values of the sub-regions of the specified width and height at the given step over the input image and store them in the map; sort the map by key in descending order; the sub-region at the head of the map is the region with the highest sharpness score, i.e. the ROI containing the center of the cross reticle.
  5. The method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element according to claim 4, characterized in that extracting the center of the cross reticle with an image-processing algorithm, as described in step 2, is carried out as follows:
    2-1. Apply adaptive-threshold binarization to the ROI to obtain a binary image, as follows:
    Set the local block size ksize of the binarization to 17 and compute the Gaussian weight T(x, y) of each pixel in the block:
    T(x, y) = α·exp[-(i(x, y) - (ksize-1)/2)² / (2·θ)²]   (2)
    where i(x, y) is the position index of pixel (x, y), (x, y) being pixel coordinates with the center of the block as origin, θ = 0.3·[(ksize-1)·0.5 - 1] + 0.8, and α satisfies Σ T(x, y) = 1;
    The binarization rule is given by formula (3), in which dst(x, y) is the target binary image and src(x, y) is the original ROI image;
    2-2. Because the ROI image is only 200×200, apply a morphological erosion with a small 5×5 structuring element to the binary image to remove small interfering connected domains;
    2-3. Compute the maximum inscribed circle of every connected domain in the ROI image and find the center of the largest one, as follows:
    First find all connected domains in the ROI image and compute the maximum inscribed circle of each, saving the center coordinates and radius of each inscribed circle; sort the results by radius in descending order; the circle with the largest radius is the inscribed circle of the cross-reticle center, and its center can be taken as the center coordinates of the cross reticle; experiments show that the Euclidean distance between the inscribed-circle center and the true cross-reticle center is within 4 pixels.
  6. The method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element according to claim 5, characterized in that selecting an image-deblurring algorithm according to actual needs, as described in step 3, is carried out as follows:
    3-1. Determine how many times the deblurring algorithm has been entered; if this is the first time, perform deblurring directly; if it is more than the first time, terminate and output the warning "the sample under test exceeds the detection limit of the centering system".
  7. The method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element according to claim 6, characterized in that, for the deblurring described in step 3-1, the whole image with the highest SMD2 value is deblurred with the trained "IRCNN deep-learning model", as follows:
    With a replacement objective of focal length 100 mm, 27 images are acquired for each of 7 kinds of rotationally symmetric aspheric optical elements following steps 1-1 to 1-3, 7 × 27 = 189 images in total; to enlarge the training set, each acquired image is given 5 brightness variations within a limited range and 6 rotation variations within a limited range, so the final training-set size is 7 × 27 × (1 + 5 × 6) = 5859; the deep-learning model, an image-restoration convolutional neural network (IRCNN), is then trained for 160000 iterations and the final model is stored;
    The defocus degradation model of the image can be expressed as Y = HX + V, where H is the blur operator, V is additive noise, X is the ideal focused image and Y is the defocused degraded image; from the Bayesian point of view, the estimate X̂ of X is obtained by solving a maximum a posteriori estimation problem:
    X̂ = argmin_X (1/2)‖Y - HX‖² + λΦ(X)   (4)
    The first part of formula (4) is the fidelity term and the second part is the regularization term; such problems are solved either with optimization methods based on the image degradation model Y = HX + V or with discriminative learning methods, the goal of the discriminative learning method being to obtain the prior parameters Θ through iterative training; IRCNN combines the advantages of the two approaches through the half-quadratic splitting method, giving the objective function L_μ(X, Z) = (1/2)‖Y - HX‖² + λΦ(Z) + (μ/2)‖Z - X‖²; the aim is to train a deblurring network comprising seven convolutional layers with the following structure:
    Layer 1: 3×3 dilated convolution with dilation 1 + rectified linear unit (ReLU) activation;
    Layers 2-6: 3×3 dilated convolutions with dilations 2, 3, 4, 3 and 2 respectively + batch normalization + ReLU;
    Layer 7: 3×3 dilated convolution with dilation 1;
    The functional form of ReLU is f(x) = max(0, x); test results show that the SMD2 sharpness value of the deblurred image is improved by nearly an order of magnitude, and the processing speed reaches 20 fps on a dual-core four-thread Intel i3-7100 CPU with a base clock of 3.9 GHz, i.e. deblurring a single ROI image takes only 50 ms.
  8. The method for automatically extracting the image center during defocus-blurred centering of an aspheric optical element according to claim 6, characterized in that, for the deblurring described in step 3-1, the whole image with the highest SMD2 value is deblurred with the improved "dark-channel-prior dehazing algorithm", as follows:
    The guided-filter algorithm of the dark-channel-prior dehazing algorithm is replaced with a fast guided filter of lower time complexity; the mathematical expressions of the guided filter are given by formulas (5)-(8):
    mean_I = f_mean(I),  mean_p = f_mean(p),  corr_I = f_mean(I.*I),  corr_Ip = f_mean(I.*p)   (5)
    where I is the guide image, p is the filter input image, and f_mean is a mean filter with window radius r; mean_I and mean_p are the mean-filtered results of I and p, corr_I is the autocorrelation of I, and corr_Ip is the cross-correlation of I and p;
    var_I = corr_I - mean_I.*mean_I,  cov_Ip = corr_Ip - mean_I.*mean_p   (6)
    var_I is the variance of I and cov_Ip is the covariance of I and p;
    a = cov_Ip./(var_I + δ),  b = mean_p - a.*mean_I   (7)
    δ is a regularization parameter;
    mean_a = f_mean(a),  mean_b = f_mean(b),  q = mean_a.*I + mean_b   (8)
    q is the output of the guided filter; in the guided-filter algorithm the time complexity of f_mean is O(N) and f_mean has to be computed several times, so the algorithm is slow; a fast guided filter with time complexity O(N/s²) is therefore used, where s is the scaling factor of the image (s = 4 in the program); the mathematical expressions of the fast guided filter are given by formulas (9)-(13):
    I' = f_subsample(I, s),  p' = f_subsample(p, s),  r' = r/s   (9)
    I' is the downsampled (scaled) guide image, p' is the downsampled filter input image, and r' is the mean-filter window radius equivalent to r after downsampling;
    mean_I' = f_mean(I', r'),  mean_p' = f_mean(p', r'),  corr_I' = f_mean(I'.*I', r'),  corr_I'p' = f_mean(I'.*p', r')   (10)
    mean_I' and mean_p' are the mean-filtered results of I' and p', corr_I' is the autocorrelation of I', and corr_I'p' is the cross-correlation of I' and p';
    var_I' = corr_I' - mean_I'.*mean_I',  cov_I'p' = corr_I'p' - mean_I'.*mean_p'   (11)
    var_I' is the variance of I' and cov_I'p' is the covariance of I' and p';
    a = cov_I'p'./(var_I' + δ),  b = mean_p' - a.*mean_I'   (12)
    δ is a regularization parameter;
    mean_a = f_upsample(f_mean(a, r'), s),  mean_b = f_upsample(f_mean(b, r'), s),  q' = mean_a.*I + mean_b   (13)
    q' is the output of the fast guided filter; the essence of the fast guided filter is to reduce the number of pixels, and hence the time complexity, by downsampling; after mean_a and mean_b have been computed, they are upsampled again to restore the image size.
CN201910481302.8A 2019-06-04 2019-06-04 Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element Active CN110428463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910481302.8A CN110428463B (en) 2019-06-04 2019-06-04 Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910481302.8A CN110428463B (en) 2019-06-04 2019-06-04 Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element

Publications (2)

Publication Number Publication Date
CN110428463A true CN110428463A (en) 2019-11-08
CN110428463B CN110428463B (en) 2021-09-14

Family

ID=68408426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910481302.8A Active CN110428463B (en) 2019-06-04 2019-06-04 Method for automatically extracting center of image in out-of-focus fuzzy centering of aspheric optical element

Country Status (1)

Country Link
CN (1) CN110428463B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1468314A2 (en) * 2001-12-18 2004-10-20 University Of Rochester Imaging using a multifocal aspheric lens to obtain extended depth of field
TW200844869A (en) * 2007-05-04 2008-11-16 Univ Nat Central A fabrication method and the structure of prism lens capable of adjusting optical path
CN103776389A (en) * 2014-01-10 2014-05-07 浙江大学 High-precision aspheric combined interference detection device and high-precision aspheric combined interference detection method
US20160212335A1 (en) * 2015-01-21 2016-07-21 Siemens Energy, Inc. Method and apparatus for turbine internal visual inspection with foveated optical head and dual image display
CN107667309A (en) * 2015-06-04 2018-02-06 高通股份有限公司 The method and apparatus that alignment is focused on for thin camera
CN105389820A (en) * 2015-11-18 2016-03-09 成都中昊英孚科技有限公司 Infrared image definition evaluating method based on cepstrum
CN107339955A (en) * 2017-01-07 2017-11-10 深圳市灿锐科技有限公司 A kind of inclined detecting instrument in high-precision lenses center and its measuring method
CN108009999A (en) * 2017-11-30 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN109544462A (en) * 2018-09-28 2019-03-29 北京交通大学 License plate image deblurring method based on adaptively selected fuzzy core
CN109632264A (en) * 2018-12-29 2019-04-16 中国科学院西安光学精密机械研究所 A kind of detection device and method of photographic device environmental test stability
CN109669264A (en) * 2019-01-08 2019-04-23 哈尔滨理工大学 Self-adapting automatic focus method based on shade of gray value

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FANYI WANG: "Complicated intermittent scratches detection research on surface of optical components based on adaptive sector scanning algorithm cascading mean variance threshold algorithm", 《10TH INTERNATIONAL SYMPOSIUM ON PRECISION ENGINEERING MEASUREMENTS AND INSTRUMENTATION (ISPEMI 2018)》 *
KAI ZHANG ET AL.: "Learning Deep CNN Denoiser Prior for Image Restoration", 《ARXIV》 *
LI YAN ET AL.: "Research on the center positioning method of retroreflective markers based on microscopic vision", 《计测技术》 (Metrology & Measurement Technology) *
WANG HAORAN: "Research and implementation of a dark channel prior dehazing algorithm using fast guided filtering", 《数字技术与应用》 (Digital Technology & Application) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429422A (en) * 2020-03-19 2020-07-17 中国工程物理研究院激光聚变研究中心 Laser near-field state analysis method and device based on deep learning
CN112285876A (en) * 2020-11-04 2021-01-29 邱妙娜 Camera automatic focusing method based on image processing and bubble detection
CN114422660A (en) * 2021-12-06 2022-04-29 江苏航天大为科技股份有限公司 Imaging focusing system of designated monitoring area
CN114581857A (en) * 2022-05-06 2022-06-03 武汉兑鑫科技实业有限公司 Intelligent crown block control method based on image analysis
CN114581857B (en) * 2022-05-06 2022-07-05 武汉兑鑫科技实业有限公司 Intelligent crown block control method based on image analysis
CN115393352A (en) * 2022-10-27 2022-11-25 浙江托普云农科技股份有限公司 Crop included angle measuring method based on image recognition and application thereof
CN116071657A (en) * 2023-03-07 2023-05-05 青岛旭华建设集团有限公司 Intelligent early warning system for building construction video monitoring big data
CN117970595A (en) * 2024-03-27 2024-05-03 笑纳科技(苏州)有限公司 Microscope automatic focusing method based on deep learning and image processing
CN117970595B (en) * 2024-03-27 2024-06-18 笑纳科技(苏州)有限公司 Microscope automatic focusing method based on deep learning and image processing

Also Published As

Publication number Publication date
CN110428463B (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN110428463A (en) The method that image automatically extracts center during aspherical optical element defocus blur is fixed
JP7075743B2 (en) Variable focal length lens with multi-level magnified depth of field image processing function
CN107255641B (en) A method of Machine Vision Detection is carried out for self-focusing lens surface defect
US7929044B2 (en) Autofocus searching method
CN104112269B (en) A kind of solar battery laser groove parameter detection method and system based on machine vision
CN111462075B (en) Rapid refocusing method and system for full-slice digital pathological image fuzzy region
CN109635806A (en) Ammeter technique for partitioning based on residual error network
US20210312609A1 (en) Real-time traceability method of width of defect based on divide-and-conquer
DE102018114005A1 (en) Material testing of optical specimens
CN109361849B (en) Automatic focusing method
CN113923358A (en) Online automatic focusing method and system in flying shooting mode
CN107392849A (en) Target identification and localization method based on image subdivision
CN110736747A (en) cell liquid based smear under-mirror positioning method and system
CN114119526A (en) Steel plate surface defect detection and identification system and method based on machine vision
CN107590512A (en) The adaptive approach and system of parameter in a kind of template matches
CN109540917A (en) A kind of multi-angle mode yarn under working external appearance characteristic parameter extraction and analysis method
CN109341524A (en) A kind of optical fiber geometric parameter detection method based on machine vision
CN106340007A (en) Image processing-based automobile body paint film defect detection and identification method
CN111338051B (en) Automatic focusing method and system based on TFT liquid crystal panel
CN113689374A (en) Plant leaf surface roughness determination method and system
CN114764189A (en) Microscope system and method for evaluating image processing results
TWI754764B (en) Generating high resolution images from low resolution images for semiconductor applications
CN105913418B (en) A kind of Pupil Segmentation method based on multi-threshold
CN104966282A (en) Image acquiring method and system for detecting single erythrocyte
Wang et al. A machine vision method for correction of eccentric error based on adaptive enhancement algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant