WO2016041456A1 - Spherical optical element surface defect evaluation system and method - Google Patents

Spherical optical element surface defect evaluation system and method

Info

Publication number
WO2016041456A1
Authority
WO
WIPO (PCT)
Prior art keywords
spherical
image
defect
optical element
subaperture
Prior art date
Application number
PCT/CN2015/089217
Other languages
English (en)
French (fr)
Inventor
杨甬英
刘东
李阳
柴惠婷
曹频
吴凡
Original Assignee
Zhejiang University (浙江大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201410479580.7A external-priority patent/CN104215646B/zh
Priority claimed from CN201510536104.9A external-priority patent/CN105157617B/zh
Priority claimed from CN201510535230.2A external-priority patent/CN105092607B/zh
Application filed by Zhejiang University
Priority to US15/509,159 (now US10444160B2)
Publication of WO2016041456A1

Classifications

    • G PHYSICS; G01 MEASURING, TESTING
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/8806: Specially adapted optical and illumination features
    • G01N 2021/8822: Dark field detection
    • G01N 21/8851: Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887: Based on image processing techniques
    • G01N 21/95: Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N 2021/9511: Optical elements other than lenses, e.g. mirrors
    • G01N 21/958: Inspecting transparent materials or objects, e.g. windscreens
    • G01N 2021/9583: Lenses
    • G01M 11/00: Testing of optical apparatus; testing structures by optical methods not otherwise provided for
    • G01M 11/0242: Testing optical properties by measuring geometrical properties or aberrations
    • G01M 11/0278: Detecting defects of the object to be tested, e.g. scratches or dust

Definitions

  • the invention belongs to the technical field of machine vision detection, and in particular relates to a spherical optical element surface defect evaluation system and a method thereof.
  • Spherical optics are widely used in large-caliber space telescopes, inertial confinement fusion (ICF) systems, high-energy lasers, etc.
  • defect features such as scratches and pits on the component surface not only degrade the imaging quality of the optical system, but also generate unwanted scattering and diffraction in high-energy laser systems, causing energy loss; in high-power laser systems, the concentrated energy may even cause secondary damage. Therefore, before use, the surface defects of a spherical optical element must be detected and the defect information digitally evaluated, providing a reliable numerical basis for the use of spherical optical components.
  • the traditional method for detecting surface defects of spherical optical components is visual inspection: the spherical surface is illuminated with strong light, and the inspector observes the reflected and transmitted light from different directions with the naked eye.
  • visual inspection depends heavily on the inspector's proficiency, is highly subjective, causes eye fatigue during long inspection sessions, and cannot give a quantitative description of defect information. A spherical optical element surface defect evaluation system and method are therefore needed to automate the evaluation of surface defects, replacing manual visual inspection with machine vision and greatly improving detection efficiency and precision.
  • the object of the present invention is to address the deficiencies of the prior art and to provide a spherical optical element surface defect evaluation system and method for the automatic detection of surface defects of spherical optical elements.
  • the invention is based on the principle of microscopic scattering dark field imaging, and performs subaperture image scanning on the surface of the spherical optical element, and then obtains surface defect information by using an image processing method.
  • the invention exploits the fact that surface defects of a spherical element excite scattered light when the ring illumination beam strikes the surface. It scans and images subaperture images covering the full aperture of the measured sphere, and applies global correction of the spherical subaperture images, three-dimensional stitching, two-dimensional projection and digital feature extraction to detect spherical defects. Using defect calibration data, the size and position of each defect can be given quantitatively.
  • the defect imaging subsystem comprises an illumination unit, a microscopic scattering dark field imaging unit, a spatial pose adjustment unit and a spherical centering unit;
  • the illumination unit provides the dark-field illumination required for imaging by the microscopic scattering dark-field imaging unit;
  • the micro-scattered dark field imaging unit is used to collect and image the scattered light on the surface of the component;
  • the spatial pose adjustment unit provides five-dimensional position and attitude adjustment: three-dimensional translation in space plus rotation and swing of the component, allowing clear imaging at different positions on the surface;
  • the spherical centering unit is used to determine the position of the center of curvature of the measured spherical optical element.
  • the illumination unit comprises a spherical light source and a light source rotating bracket; the spherical light source comprises a uniform surface light source and a spherical light source lens group, which contains in sequence a front fixed lens group, a zoom lens group and a rear fixed lens group. The angle between the optical axis of the spherical light source lens group and the optical axis of the microscopic scattering dark-field imaging unit is the incident angle, which lies in the range 25°-45°.
  • the light source rotating bracket comprises a top fixing plate, an inner-ring rotating shaft, a worm wheel, a worm, a servo motor, a motor bracket, a bearing, an outer-ring rotating member and a light source fixing bracket; the spherical light source is fixed on the light source fixing bracket, which in turn is fixed on the outer-ring rotating member;
  • the outer-ring rotating member is movably connected to the inner-ring rotating shaft through the bearing;
  • the worm wheel is mounted on the outer-ring rotating member, meshes with the worm, and is driven by the servo motor;
  • the servo motor (via the motor bracket) and the inner-ring rotating shaft are fixed together on the top fixing plate, which is fixed on the Z-axis guide rail; the light source rotating bracket provides omnidirectional illumination of spherical surface defects.
  • the three spherical light sources are uniformly distributed circumferentially on the outer-ring rotating member at 120° intervals by means of the light source fixing brackets.
  • the light path of the illumination unit is formed as follows: first, the position of the zoom lens group within the spherical light source lens group is calculated from the radius of curvature of the spherical optical element, and the zoom lens group is moved to that position; then the parallel light from the uniform surface light source enters the spherical light source lens group and passes in sequence through the front fixed lens group, the zoom lens group and the rear fixed lens group to form a converging beam with aperture angle ωl.
  • the microscopic scattering dark-field imaging unit realizes dark-field imaging of defects by collecting the scattered light that surface defects excite from the incident light. The principle is as follows: the incident light strikes the surface of the spherical optical element; where the surface is smooth, the law of reflection sends the reflected light away from the imaging unit, so no light enters it; where a surface defect exists, the incident light is scattered, and the scattered light is received by the microscopic scattering dark-field imaging unit to form a dark-field image of the defect.
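On this principle, defects appear as bright features on a near-black background, so a fixed intensity threshold followed by connected-component labelling already separates candidate defects. A pure-Python sketch (the function name and 4-connectivity are assumptions; a real pipeline would add noise filtering and adaptive thresholding):

```python
from collections import deque

def segment_defects(image, threshold):
    """Label bright connected regions in a dark-field image.

    image: 2D list of grayscale values. Defects scatter light, so they
    appear bright against the dark background; returns one pixel list
    per 4-connected region above the threshold.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and not seen[y][x]:
                # Flood-fill one candidate defect region.
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and image[ny][nx] > threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(pixels)
    return regions
```

Each returned region can then be passed on to the feature-extraction and calibration stages described later.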
  • the space posture adjusting unit comprises an X-axis guide rail, a Y-axis guide rail, a Z-axis guide rail, a self-rotating table, an oscillating table and a self-centering clamping mechanism;
  • the oscillating table comprises an inner panel and an outer panel;
  • the self-centering clamping mechanism is fixedly connected to the shaft of the self-rotating table, whose base is fixed on the inner plate of the oscillating table; the inner and outer plates are movably connected so that the inner plate can swing relative to the outer plate; both plates have U-shaped cross sections;
  • the lower bottom surface of the outer plate of the oscillating table is fixed on the work surface of the Y-axis guide rail, and the Y-axis guide rail is fixed on the work surface of the X-axis guide rail;
  • the X-axis guide rail and the Z-axis guide rail are fixed on the same platform.
  • the spherical centering unit comprises a light source, a light source focusing lens group, a reticle, a collimating lens, a beam splitter, an objective lens, a mirror, an imaging lens and a CCD; light from the source is focused by the focusing lens group onto the reticle, which is engraved with a crosshair;
  • the light transmitted through the reticle is collimated by the collimating lens, passes through the beam splitter, is focused by the objective lens onto the spherical optical element, and is reflected at its surface;
  • the image formed by the crosshair of the reticle is the reticle image; the reflected light returns through the objective lens to the beam splitter, where it is reflected, then folded by the mirror and finally focused by the imaging lens onto the CCD, so that the crosshair of the reticle is imaged on the CCD.
  • the control subsystem comprises a centering control module, a lighting control module, a five-dimensional displacement control module and an image acquisition control module;
  • the centering control module comprises a centering image acquisition unit and a four-dimensional movement control unit;
  • the centering image acquisition unit controls the acquisition of the crosshair image by the CCD in the spherical centering unit;
  • the four-dimensional shift control unit is used to control the movement of the X-axis guide, the Y-axis guide, the Z-axis guide and the rotation of the self-rotating table during the centering process;
  • the lighting control module comprises a light source rotation control unit and a light source zoom control unit;
  • the light source rotation control unit controls the rotation of the light source rotation bracket in the illumination unit;
  • the light source zoom control unit drives the zoom lens group to move, changing the aperture angle ωl of the emitted converging beam;
  • the five-dimensional displacement control module is used to control the movement of the X-axis guide rail, the Y-axis guide rail and the Z-axis guide rail, and the rotation of the self-rotating table and the oscillating table.
  • the evaluation method of the spherical optical element surface defect evaluation system includes a spherical automatic centering module, a scan path planning module, an image processing module and a defect calibration module. The spherical automatic centering module completes the automatic centering of the spherical optical element, yielding an accurate measurement of its radius of curvature and bringing the optical axis of the element into coincidence with the self-rotating axis; the scan path planning module plans the optimal spherical scan path; the image processing module performs high-precision detection of spherical surface defects; the defect calibration module establishes the relationship between pixel counts in a subaperture image at any position on the sphere and actual sizes, giving the actual size of each defect. The method comprises the following steps:
  • Step 1: automatically center the spherical surface using the spherical automatic centering module.
  • Step 2: plan the optimal spherical scan path with the scan path planning module to complete the full-aperture scan of the sphere.
  • Step 3: process the subaperture images with the image processing module and the defect calibration module to obtain the spherical defect information.
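The three steps above can be sketched as a small orchestration skeleton. All class and method names here (`auto_center`, `plan`, `process`, `to_actual_size`) are hypothetical stand-ins for the four modules, not the patent's actual software interface:

```python
class SphericalDefectEvaluator:
    """Skeleton of the three-step evaluation flow described above.

    The four module objects are placeholders: in the real system they
    drive hardware (centering, scanning) and run the image pipeline.
    """
    def __init__(self, centering, path_planner, image_processor, calibration):
        self.centering = centering
        self.path_planner = path_planner
        self.image_processor = image_processor
        self.calibration = calibration

    def evaluate(self, element):
        # Step 1: auto-center; yields the radius of curvature and aligns
        # the element's optical axis with the self-rotating axis.
        radius = self.centering.auto_center(element)
        # Step 2: plan the latitude/longitude scan and acquire subapertures.
        path = self.path_planner.plan(radius)
        subapertures = [element.acquire(pose) for pose in path]
        # Step 3: detect defects, then convert pixel counts to real sizes.
        defects = self.image_processor.process(subapertures)
        return [self.calibration.to_actual_size(d) for d in defects]
```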
  • the automatic centering of the spherical surface by the spherical automatic centering module described in step 1 includes the following steps:
  • the position of the self-rotating shaft is measured by the rotation measurement method used in the optical adjustment, as follows:
  • a self-rotating table is mounted under the self-centering clamping mechanism so that the spherical optical element can rotate about its own axis; each time the table rotates by 30°, the CCD collects a crosshair image. As the spin angle changes, the crosshair image occupies a different position in the CCD field of view, and its trajectory is in general a circle whose center is the position of the self-rotating axis;
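The circular trajectory described above can be exploited numerically: a least-squares circle fit over the crosshair positions recorded at each 30° step returns the circle center, i.e. the image of the self-rotating axis. A minimal pure-Python sketch (the function name and the choice of the algebraic Kåsa fit are illustrative assumptions):

```python
def fit_circle_center(points):
    """Least-squares (Kasa) circle fit: returns the center (a, b) of the
    circle best matching the crosshair positions recorded at successive
    rotation steps; that center locates the self-rotating axis."""
    n = len(points)
    # Normal equations for the linearized model x^2 + y^2 = 2ax + 2by + c.
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sxz = sum(x * (x * x + y * y) for x, y in points)
    syz = sum(y * (x * x + y * y) for x, y in points)
    sz = sum(x * x + y * y for x, y in points)
    A = [[2 * sxx, 2 * sxy, sx, sxz],
         [2 * sxy, 2 * syy, sy, syz],
         [2 * sx,  2 * sy,  n,  sz]]
    # Gaussian elimination with partial pivoting on the 3x4 system.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        sol[i] = (A[i][3] - sum(A[i][c] * sol[c] for c in range(i + 1, 3))) / A[i][i]
    return sol[0], sol[1]
```

With exact circle points the fit is exact; with noisy crosshair detections it returns the least-squares center.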
  • step 2, planning the optimal spherical scan path with the scan path planning module to complete the full-aperture scan of the sphere, specifically includes the following steps:
  • the spherical optical element is moved directly below the microscopic scattering dark-field imaging unit by the spatial pose adjustment unit, and a subaperture image is acquired at the spherical vertex. A spherical coordinate system XsYsZs is defined whose origin Os is the center of curvature of the spherical optical element and whose Zs axis passes through the spherical vertex;
  • covering the full aperture requires a combined motion of swinging about the Xs axis and rotating about the Zs axis, following a latitude-longitude trajectory;
  • the spherical optical element first swings by angle θ1 about the Xs axis and a subaperture image is acquired on the meridian; it then rotates by angle φ1 about the Zs axis, acquiring subaperture images along the latitude line, yielding multiple subaperture images on that ring;
  • after the latitude line is covered, the element swings by a further angle θ2 about the Xs axis, a subaperture image is acquired on the meridian, and the next latitude ring is acquired in the same way; this continues until the full aperture is covered.
  • in step 2 the scan path planning module first establishes a spherical subaperture planning model, in which subaperture images A and B are two adjacent subaperture images acquired on meridian C, subaperture image Aa is the adjacent subaperture image acquired on the latitude line D1 containing A, and subaperture image Bb is the adjacent subaperture image acquired on the latitude line containing B;
  • a sufficient condition for no missed coverage is that the arc length between the diagonal subapertures Aa and Bb does not exceed the span of a single subaperture image. Under this constraint, the correspondence between the swing angles θ1, θ2 and the spin angles φ1, φ2 is established and the planned result obtained. The angles are solved as follows: the initial swing angles θ1 and θ2 are given; the spin angles φ1 and φ2 are then calculated from the required overlap of adjacent subaperture images on the latitude line; finally the arc length is calculated and checked against the condition above.
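Under a simplified model of the latitude-longitude planning above (each subaperture treated as covering a fixed great-circle angle, uniform overlap, and no explicit diagonal arc-length check), the swing step and per-ring spin steps can be sketched as follows. The function name, the overlap default, and the small-angle approximations are assumptions, not the patent's exact swing/spin-angle solution:

```python
import math

def plan_scan_angles(half_angle, max_polar, overlap=0.2):
    """Latitude/longitude scan plan on a sphere (simplified model).

    half_angle: angular half-width of one subaperture (rad, great-circle
    angle); max_polar: polar angle of the cap edge (rad); overlap:
    fractional overlap between adjacent subapertures.
    Returns [(polar_angle, n_images, spin_step), ...], one entry per
    latitude ring, starting with a single image at the vertex.
    """
    step = 2 * half_angle * (1 - overlap)   # swing step between rings
    rings = [(0.0, 1, 0.0)]                 # vertex subaperture
    theta = step
    while theta <= max_polar + 1e-12:
        # The great-circle arc of one azimuthal step at this latitude is
        # approximately spin_step * sin(theta); size it for the overlap.
        n = max(1, math.ceil(2 * math.pi * math.sin(theta) / step))
        rings.append((theta, n, 2 * math.pi / n))
        theta += step
    return rings
```

Note how rings nearer the cap edge need more images per revolution, matching the intuition that the latitude circles grow with polar angle.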
  • in step 3 the subaperture images are processed by the image processing module and the defect calibration module to obtain the spherical defect information, specifically as follows:
  • the acquired imaging subaperture image is two-dimensional, and optical imaging compresses information along the imaging optical axis; a spherical three-dimensional reconstruction is therefore performed first to correct this compression of the surface defect information of the spherical optical element;
  • in the spherical three-dimensional reconstruction, the imaging process of the microscopic scattering dark-field imaging unit is simplified to a pinhole imaging model, and the imaging subaperture image is reconstructed into a three-dimensional subaperture image using geometric relationships;
  • after the spherical three-dimensional reconstruction, three-dimensional subaperture images are obtained;
  • the information of the three-dimensional subaperture images is projected onto a two-dimensional plane by full-aperture projection to obtain a full-aperture projection image;
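A minimal sketch of the reconstruction and projection steps, assuming the simplest possible geometry: the sphere is centered at the origin Os, imaging compresses the cap along the Z axis, and the lost Z coordinate is recovered from the sphere equation. The real unit uses a calibrated pinhole camera model; the functions here are illustrative only:

```python
import math

def lift_to_sphere(x, y, radius):
    """Reconstruct the 3D surface point behind an image point (x, y).

    Simplified model: imaging compresses the spherical cap along the
    optical (Z) axis, so the lost coordinate is recovered from the
    sphere equation x^2 + y^2 + z^2 = R^2 (center at the origin).
    """
    r2 = x * x + y * y
    if r2 > radius * radius:
        raise ValueError("point outside the spherical cap")
    return (x, y, math.sqrt(radius * radius - r2))

def project_to_plane(point):
    """Vertical (orthographic) projection of a 3D surface point onto the
    XY plane, as used to build the full-aperture projection image."""
    x, y, _ = point
    return (x, y)
```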
  • the defect evaluation results are outputted in the form of a spherical three-dimensional preview, an electronic report, and a defect location map.
  • a point p on the surface of the spherical optical element, driven by the spatial pose adjustment unit, moves to the point p' according to the optimal spherical scan path planned in step 2;
  • the image coordinate system XcYc is converted into the image coordinate system XiYi to obtain the imaging subaperture image: the Xc and Yc axes form the image coordinate system XcYc, whose origin is the intersection of the optical axis of the microscopic scattering dark-field imaging unit with the imaging subaperture image; the Xi and Yi axes form the image coordinate system XiYi, whose origin Oi is the upper-left corner of the acquired digital image.
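The conversion between the two image coordinate systems can be sketched as follows, assuming square pixels and the usual row-downward raster convention; the real system's axis orientations and scale come from calibration:

```python
def pixel_to_image_coords(col, row, width, height, pixel_size=1.0):
    """Convert a pixel index in the acquired digital image (origin Oi at
    the upper-left corner, rows growing downward) into the centered
    image coordinate system XcYc (origin on the imaging optical axis).

    The Y flip is a common convention; the real system's orientation
    may differ.
    """
    xc = (col - (width - 1) / 2.0) * pixel_size
    yc = ((height - 1) / 2.0 - row) * pixel_size
    return xc, yc
```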
  • the acquisition of the full-aperture projection image described in step 3-2 specifically includes the following steps:
  • the projected subaperture images are obtained by vertically projecting the spherical subaperture images onto the plane; the projected subaperture images are then stitched, the position and size information of the defects is obtained on the plane, and an inverse reconstruction follows, realizing accurate detection of the surface defects of the spherical optical element;
  • the projected subaperture images are stitched directly within each latitude ring, and the latitude rings are then stitched ring by ring along the meridian direction.
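The stitching of projected subapertures onto one full-aperture plane can be illustrated with a toy paste-and-blend routine; it assumes the plane offsets of each tile are already known from the scan geometry, whereas the real process also registers the overlap regions:

```python
def stitch_subapertures(canvas_shape, tiles):
    """Paste projected subaperture tiles onto one full-aperture canvas.

    tiles: list of (offset_row, offset_col, 2D tile). Overlapping pixels
    take the maximum value, a simple blend that preserves bright defect
    pixels from either tile; real stitching also registers the overlap.
    """
    h, w = canvas_shape
    canvas = [[0] * w for _ in range(h)]
    for r0, c0, tile in tiles:
        for r, rowvals in enumerate(tile):
            for c, v in enumerate(rowvals):
                rr, cc = r0 + r, c0 + c
                if 0 <= rr < h and 0 <= cc < w:
                    canvas[rr][cc] = max(canvas[rr][cc], v)
    return canvas
```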
  • step 3-3 performs low-magnification feature extraction on the full-aperture projection image, then uses the spherical defect calibration data from the defect calibration module to detect the actual size of each defect; finally, three-dimensional inverse reconstruction yields the true dimensions of the defect and its position coordinates on the surface of the spherical optic, as follows: the pixel counts of the defect's three-dimensional size and position coordinates are converted into actual sizes and position coordinates.
  • the spherical defect calibration data described in steps 3-3 and 3-4 include defect length calibration data and defect width calibration data; length calibration establishes the relationship between the actual length of a standard line segment at any position on the sphere and its pixel count in the spherical subaperture image. The length calibration data are obtained as follows:
  • a standard line segment dl is placed on the plane object surface and its length measured with a standard measuring instrument; the segment dl is imaged by the microscopic scattering dark-field imaging unit, giving its image dp on the imaging subaperture image;
  • the imaging subaperture image is reconstructed into a three-dimensional subaperture image, on which the spherical image dc of the standard line segment is obtained.
  • the width calibration data is obtained as follows:
  • a standard line segment is placed on a tangent plane through the origin of the three-dimensional coordinate system, and its actual width is measured with a standard measuring instrument; the segment is imaged by the microscopic scattering dark-field imaging unit, giving its image on the imaging subaperture image;
  • the imaging subaperture image is reconstructed into a three-dimensional subaperture image, on which the spherical image of the standard line segment is obtained; the number of arc-length pixels in the width direction is the defect-width pixel count;
  • because the information compression along the imaging optical axis is negligible in the width direction, the actual width of the defect equals the actual width of the standard line segment;
  • the discrete points relating actual defect width to defect-width pixel count are fitted piecewise to obtain the best-fitting curve, the scaling transfer function; this function converts any width pixel count on the sphere into the corresponding actual width.
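The scaling transfer function can be sketched with the simplest piecewise fit, linear interpolation between the calibration samples. The patent fits segmented curves; `make_width_transfer_function` and its end-point extrapolation behavior are illustrative assumptions:

```python
def make_width_transfer_function(samples):
    """Build a width scaling transfer function from calibration samples.

    samples: (pixel_count, actual_width) pairs from the standard-segment
    measurements. Piecewise linear interpolation (with end-point
    extrapolation) stands in for the patent's fitted curve.
    """
    pts = sorted(samples)

    def transfer(pixels):
        if pixels <= pts[0][0]:
            lo, hi = pts[0], pts[1]
        elif pixels >= pts[-1][0]:
            lo, hi = pts[-2], pts[-1]
        else:
            lo = max(p for p in pts if p[0] <= pixels)
            hi = min(p for p in pts if p[0] >= pixels)
            if lo[0] == hi[0]:           # exact calibration sample
                return lo[1]
        slope = (hi[1] - lo[1]) / (hi[0] - lo[0])
        return lo[1] + slope * (pixels - lo[0])

    return transfer
```

Calling the returned function with a measured defect-width pixel count yields the estimated actual width.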
  • the invention realizes automatic quantitative detection of surface defects of spherical optical components; it frees inspectors from heavy visual inspection work, greatly improves detection efficiency and precision, avoids the influence of subjective factors on the result, and provides a reliable numerical basis for the use and processing of spherical optics.
  • Fig. 1 is a block diagram showing the composition of a spherical optical element surface defect evaluation system and a method corresponding to the first embodiment and the second embodiment.
  • FIG. 2 is a schematic view showing portions of a spherical optical element surface defect evaluation system and a method thereof, corresponding to FIG. 1, in detail.
  • Fig. 3 is a structural view of a lighting unit corresponding to Fig. 1.
  • Fig. 4 is a view showing the illumination light path corresponding to Embodiment 1.
  • Figure 5 shows the relationship between the radius of curvature of the convex spherical optical element and the aperture angle of the spherical light source when the incident angle is 40°, corresponding to Fig. 4.
  • Figure 6 is a schematic diagram of microscopic scattering dark field imaging.
  • Fig. 7 is a view showing the configuration of the spherical centering unit corresponding to Embodiment 1.
  • Fig. 8A is a light path diagram showing a Z-direction deviation between the reticle image corresponding to Fig. 7 and the position of the center of curvature of the convex spherical optical element.
  • Fig. 8B is a schematic view showing a cross-hair image in the CCD field of view when the reticle image corresponding to Fig. 7 and the position of the center of curvature of the convex spherical optical element have a Z-direction deviation.
  • Fig. 9A is a view showing the optical path when the position of the center of curvature of the reticle image and the convex spherical optical element corresponding to Fig. 7 is shifted in the X and Y directions.
  • Fig. 9B is a schematic view showing the crosshair image in the CCD field of view when the reticle image corresponding to Fig. 7 and the position of the center of curvature of the convex spherical optical element have X- and Y-direction deviations.
  • Figure 10 is a block diagram showing the composition of the control subsystem corresponding to Figure 1.
  • Fig. 11A is a schematic view showing the correlation control in the spherical centering state corresponding to Fig. 10.
  • Fig. 11B is a schematic view showing the correlation control in the spherical defect detecting state corresponding to Fig. 10.
  • Figure 12 is a flow chart showing the spherical automatic centering module corresponding to Figure 1.
  • Fig. 13A is a graph showing an image entropy sharpness evaluation function corresponding to Fig. 12.
  • Fig. 13B is a schematic view showing the center of the circle of the trajectory of the crossed crosshair corresponding to Fig. 12.
  • Fig. 14 is a schematic view showing a subaperture image scanning process corresponding to Fig. 1.
  • FIG. 15 is a schematic diagram of a subaperture image planning model corresponding to FIG. 14.
  • Figure 16 is a flow chart showing the scan path planning module corresponding to Figure 14.
  • Figure 17 is a flow chart of the image processing module corresponding to Figure 1.
  • FIG. 18 is a schematic diagram showing a three-dimensional subaperture image imaging process corresponding to FIG.
  • FIG. 19 is a schematic diagram showing three-dimensional subaperture image reconstruction, spherical subaperture image splicing and full aperture projection corresponding to FIG. 17.
  • FIG. 20 is a schematic diagram showing the inverse reconstruction of the projected subaperture image corresponding to FIG. 17.
  • Fig. 21 is a flow chart showing the full aperture projection corresponding to Fig. 17.
  • FIG. 22 is a flow chart showing the full-caliber projection splicing process corresponding to FIG. 21.
  • Figure 23 is a schematic illustration of the low-magnification calibration process for spherical defect length.
  • Figure 24 is a schematic illustration of the high-magnification calibration process for spherical defect width.
  • Figure 25 is a width scaling transfer function curve corresponding to Figure 24.
  • Fig. 26 is a view showing the illumination light path corresponding to Embodiment 2.
  • Figure 27 is a graph showing the relationship between the radius of curvature of the concave spherical optical element and the aperture angle of the spherical light source when the incident angle is 40° corresponding to Figure 26.
  • Fig. 28 is a view showing the configuration of a spherical centering unit corresponding to Embodiment 2.
  • Fig. 29A is a view showing the optical path when the reticle image corresponding to Fig. 28 and the position of the center of curvature of the concave spherical optical element are shifted in the Z direction.
  • Fig. 29B is a schematic view showing a cross-hair image in the CCD field of view when the reticle image corresponding to Fig. 28 has a Z-direction deviation from the position of the center of curvature of the concave spherical optical element.
  • Fig. 30A is a view showing the optical path when the position of the reticle image corresponding to Fig. 28 and the position of the center of curvature of the concave spherical optical element are shifted in the X and Y directions.
  • Fig. 30B is a schematic view showing a cross-hair image in the CCD field of view when the reticle image corresponding to Fig. 28 and the position of the center of curvature of the concave spherical optical element are shifted in the X and Y directions.
  • Fig. 31 is a block diagram showing the composition of a spherical optical element surface defect evaluation system and its method corresponding to Embodiment 3.
  • Fig. 32 is a schematic view showing the respective portions of the spherical optical element surface defect evaluation system and the method thereof corresponding to Fig. 31 in detail.
  • Figure 33 is a flow chart showing the image processing module corresponding to Figure 31.
  • Embodiment 1 is applicable to the case of evaluating a surface defect of a convex spherical optical element by using a spherical optical element surface defect evaluation system and a method thereof
  • Embodiment 2 is applicable to the case of evaluating surface defects of a concave spherical optical element by using the spherical optical element surface defect evaluation system and its method.
  • Embodiment 3 is applicable to the case of evaluating a surface defect of a small-diameter spherical optical element by using a spherical optical element surface defect evaluation system and a method thereof.
  • the small-caliber spherical optics only need a single subaperture image to obtain full-caliber dark-field image information, which simplifies the evaluation method.
  • In Embodiment 1, a spherical optical element surface defect evaluation system and its method for evaluating a convex spherical optical element are described.
  • Fig. 1 is a block diagram showing the composition of a spherical optical element surface defect evaluation system and a method corresponding to the first embodiment and the second embodiment.
  • defect assessment system 100 includes defect imaging subsystem 200 and control subsystem 700.
  • the defect imaging subsystem 200 is used to acquire a microscopically scattered dark field image suitable for digital image processing.
  • the control subsystem 700 is used to control the motion of the illumination unit 300, the micro-scattering dark field imaging unit 400, the spatial pose adjustment unit 500, and the spherical centering unit 600 to achieve acquisition of a surface image of the convex spherical optical element.
  • the defect imaging subsystem 200 includes a lighting unit 300, a micro-scattering dark field imaging unit 400, a spatial pose adjustment unit 500, and a spherical centering unit 600.
  • the illumination unit 300 is used to provide the desired dark field illumination light for imaging the micro-scattered dark field imaging unit 400.
  • the micro-scattering dark field imaging unit 400 is for collecting scattered light from the surface of the element and imaging.
  • the spatial pose adjustment unit 500 is used to realize five-dimensional spatial position and posture adjustment, which can not only realize spatial three-dimensional translation, but also realize rotation and swing of components, and facilitate clear imaging at different positions on the surface.
  • the spherical centering unit 600 is used to determine the position of the center of curvature of the convex spherical optical element.
  • the movement and adjustment of the illumination unit 300, the micro-scattering dark field imaging unit 400, the spatial pose adjustment unit 500, and the spherical centering unit 600 are all performed under the drive control of the control subsystem 700.
  • the illumination unit 300 is used to provide dark field illumination to the micro-scattering dark field imaging unit 400. If the spherical optical element were illuminated by a common parallel light source, incident light not passing through the spherical center of curvature would be reflected by the spherical surface into the microscopic scattering dark field imaging unit 400, forming a bright-field reflection spot that destroys the dark field illumination. Therefore, the system adopts an illumination unit 300 suited to surface defect detection of spherical optical elements: it produces illumination light with different aperture angles for convex spherical optical elements of different curvature radii, thereby providing dark field illumination in each case.
  • FIG. 3 is a structural view of the illumination unit 300 corresponding to FIG. 1.
  • the spherical light source of the illumination unit 300 includes a uniform surface light source 320 and a spherical light source mirror group 330.
  • the front light fixed mirror group 331, the zoom lens group 332, and the rear fixed mirror group 333 are sequentially mounted in the spherical light source lens group 330.
  • the angle between the optical axis of the spherical light source mirror group 330 and the optical axis 405 of the microscopic scattering dark field imaging unit is the incident angle α, which ranges from 25° to 45°;
  • the light source rotating bracket 310 shown in FIG. 3 includes a top fixing plate 311, an inner ring rotating shaft 312, a worm wheel 313, a worm 314, a servo motor 315, a motor support 316, a bearing 317, an outer ring rotating member 318, and a light source fixing bracket 319.
  • the spherical light source is fixed on the light source fixing bracket 319, the light source fixing bracket 319 is fixed on the outer ring rotating member 318; the outer ring rotating member 318 is movably connected to the inner ring rotating shaft 312 through the bearing 317; the outer ring rotating member 318 is mounted with the worm wheel 313;
  • the worm wheel 313 is movably coupled to the worm 314 and is driven in circumferential rotation by the servo motor 315; the servo motor 315 is fixed, together with the inner ring rotating shaft 312, to the top fixing plate 311 by the motor support 316, and the top fixing plate 311 is fixed on the Z-axis guide 530.
  • the light source rotating bracket 310 completes omnidirectional illumination of the spherical surface defect.
  • the three spherical light sources 301a, 301b, and 301c are evenly distributed on the outer ring rotating member 318 at 120° intervals by the light source fixing brackets 319, and the light source rotation control unit 721 drives the servo motor 315 to realize ring illumination by the light sources.
  • Fig. 4 is a view showing an illumination light path corresponding to the first embodiment of the first embodiment.
  • the uniform surface light source 320 emits parallel light, and the parallel light passes through the spherical light source mirror group 330 to form a converging beam of aperture angle ω l .
  • the specific process is as follows: first, the position of the zoom lens group 332 within the spherical light source lens group 330 is calculated according to the radius of curvature of the convex spherical optical element 201, and the zoom lens group 332 is moved to the calculated position; secondly, the parallel light emitted by the uniform surface light source 320 enters the spherical light source lens group 330 and passes sequentially through the front fixed mirror group 331, the zoom lens group 332, and the rear fixed mirror group 333, after which a converging beam of aperture angle ω l is formed.
  • Fig. 5 is a view showing the relationship between the radius of curvature of the convex spherical optical element and the aperture angle ω l of the spherical light source when the incident angle α is 40°, corresponding to Fig. 4. It can be seen that as the radius of curvature increases, the aperture angle ω l of the spherical light source decreases and the illuminated range on the surface shrinks correspondingly; the aperture angle ω l is less than or equal to 15°.
  • the micro-scattering dark-field imaging unit 400 realizes microscopic dark field imaging of a defect by using the scattered light excited when a smooth-surface defect modulates the incident light, and obtains a dark-field image of the defect.
  • the micro-scattering dark field imaging unit 400 is a machine vision module of the defect evaluation system 100.
  • Figure 6 is a schematic diagram of microscopic scattering dark field imaging.
  • the incident ray 210 is incident on the surface of the convex spherical optical element 201.
  • the incident ray 210 is reflected on the surface according to the geometric optical reflection law to form the reflected ray 212.
  • the reflected ray 212 does not enter the micro-scattering dark field imaging unit 400.
  • when a surface defect 203 is present on the spherical surface, the incident ray 210 is scattered; the scattered ray 211 is received by the micro-scattering dark field imaging unit 400 to form a defect dark-field image.
  • the spatial pose adjustment unit 500 performs any spatial pose adjustment of the convex spherical optical element 201.
  • the spatial pose adjustment unit 500 includes an X-axis guide rail 510, a Y-axis guide rail 520, a Z-axis guide rail 530, a spin-rotation stage 540, an oscillating table 550, and a self-centering clamping mechanism 560.
  • the oscillating table 550 includes an inner panel and an outer panel.
  • the self-centering clamping mechanism 560 is fixedly coupled to the rotating shaft of the self-rotating table 540, and the base of the self-rotating table 540 is fixed to the inner plate of the oscillating table 550; the inner plate and the outer plate are movably connected, so that the inner plate can swing relative to the outer plate; the inner and outer plates are U-shaped; the lower surface of the outer plate of the oscillating table 550 is fixed on the working surface of the Y-axis guide 520, and the Y-axis guide 520 is fixed on the working surface of the X-axis guide 510;
  • the X-axis guide 510 and the Z-axis guide 530 are fixed on the same platform; the illumination unit 300, the micro-scattering dark field imaging unit 400, and the spherical centering unit 600 are all fixed on the Z-axis guide 530.
  • the spherical centering unit 600 provides a hardware basis for completing the centering of the convex spherical optical element 201.
  • Fig. 7 is a view showing the configuration of a spherical centering unit 600 corresponding to the first embodiment of the first embodiment.
  • the light emitted by the light source 601 in the spherical centering unit 600 is irradiated onto the reticle 603 via the light source focusing mirror group 602, and the reticle 603 is engraved with a crosshair. Then, the light is transmitted through the collimator lens 604 and then enters the beam splitter 605.
  • after being transmitted through the beam splitter 605, the light is irradiated onto the convex spherical optical element 201 through the objective lens 606 and is reflected at its surface. At this time, the image formed by the crosshair on the reticle 603 is the reticle image 610.
  • the reflected light passes through the objective lens 606, enters the beam splitter 605, and is reflected there; it is then redirected by the mirror 607 and finally focused by the imaging mirror 608 onto the CCD 609, so that the crosshair on the reticle 603 is imaged on the CCD 609.
  • the reflected light is symmetrical with the incident light about the optical axis 615 of the spherical centering unit, so that the reflected light again becomes parallel light when passing through the objective lens 606, and finally a clear cross-hair image is formed on the CCD 609.
  • the clear crosshairs are called surface images. The position of the surface image in the field of view of the CCD 609 does not change with the slight movement of the convex spherical optical element 201 in the X and Y directions.
  • the reticle image 610 is located at the center of curvature of the convex spherical optical element.
  • the reflected light coincides with the incident light, so that a clear cross-hair image can also be obtained on the CCD 609, and the clear cross-hair image is called a spherical image.
  • the CCD 609 can acquire two clear cross-hair images, which are a surface image and a spherical image, respectively.
  • a crosshair image can be obtained on the CCD 609 in either case, so the position of the center of curvature of the convex spherical optical element can be judged from the position and sharpness of the crosshair in the image.
  • the judgment process is as follows:
  • Fig. 8A is a light path diagram showing a Z-direction deviation between the reticle image 610a corresponding to Fig. 7 and the position of the center of curvature 202 of the convex spherical optical element.
  • the incident light forming the spherical image no longer coincides with the reflected light, so that a blurred crosshair image is obtained on the CCD 609, as shown in FIG. 8B.
  • FIG. 9A is a light path diagram when the reticle image 610b corresponding to FIG. 7 is shifted in the X and Y directions from the position of the center of curvature 202 of the convex spherical optical element.
  • the optical axis 205 of the convex spherical optical element does not coincide with the optical axis 615 of the spherical centering unit, and the reflected light is focused on the CCD 609 through the imaging mirror to form a cross-hair image with clear focus but not in the center of the field of view, as shown in FIG. 9B.
  • the relative position of the center of curvature 202 of the convex spherical optical element in the three-dimensional space can be determined by the above analysis using the different states of the cross-hair image on the CCD 609.
  • the control subsystem 700 is used to complete the automatic control of each unit in the defect imaging subsystem 200, and realize automatic detection of surface defects of the spherical optical element.
  • FIG. 10 is a block diagram showing the composition of the control subsystem 700 corresponding to FIG. 1.
  • the control subsystem 700 includes a centering control module 710, a lighting control module 720, a five-dimensional shifting control module 730, and an image acquisition control module 740.
  • the centering control module 710 includes a centering image acquisition unit 711 and a four-dimensional displacement control unit 712.
  • the centering image acquisition unit 711 is used to control the CCD 609 of the spherical centering unit 600 to complete the acquisition of the crosshair image;
  • the four-dimensional displacement control unit 712 controls the movement of the X-axis guide 510, the Y-axis guide 520, the Z-axis guide 530, and the rotation of the self-rotating table 540 during the centering process.
  • the illumination control module 720 includes a light source rotation control unit 721 and a light source zoom control unit 722.
  • the light source rotation control unit 721 controls the rotation of the light source rotation bracket 310 in the illumination unit 300;
  • the light source zoom control unit 722 drives the zoom lens group 332 to move, changing the aperture angle ω l of the emitted converging beam.
  • the five-dimensional shift control module 730 is used to control the movement of the X-axis guide 510, the Y-axis guide 520, the Z-axis guide 530, the rotation of the spin table 540, and the swing of the swing table 550 during the defect detection process.
  • the image acquisition control module 740 includes a subaperture image acquisition unit 741 and a microscope magnification control unit 742.
  • the subaperture image acquisition unit 741 is for controlling the acquisition of the subaperture image by the microscopic scattering dark field imaging unit 400;
  • the microscope magnification control unit 742 is for changing the imaging magnification of the microscopic scattering dark field imaging unit 400.
  • the operational state of the defect evaluation system 100 includes a spherical centering state and a spherical defect detecting state.
  • Fig. 11A is a schematic view showing the correlation control in the spherical centering state corresponding to Fig. 10.
  • the convex spherical optical element 201 is moved directly below the spherical centering unit 600 by the spatial pose adjustment unit 500 to enter a spherical centering state.
  • the control subsystem 700 performs the spherical automatic centering by the centering image acquisition unit 711 and the four-dimensional displacement control unit 712.
  • the four-dimensional displacement control unit 712 controls the movement of the Z-axis guide 530 to drive the spherical centering unit 600 in the Z direction for automatic precise focusing, controls the movement of the X-axis and Y-axis guide rails to translate the convex spherical optical element 201, and controls the rotation of the self-rotating table 540.
  • Fig. 11B is a schematic view showing the correlation control in the spherical defect detecting state corresponding to Fig. 10.
  • the control subsystem 700 performs full aperture defect detection of the convex spherical optical element 201 through the illumination control module 720, the five-dimensional shift control module 730, and the image acquisition control module 740.
  • the illumination control module 720 includes a light source rotation control unit 721 and a light source zoom control unit 722, wherein the light source rotation control unit 721 realizes omnidirectional illumination of surface defects of the convex spherical optical element 201, and the light source zoom control unit 722 achieves dark field illumination of those surface defects.
  • the five-dimensional displacement control module 730 drives the convex spherical optical element 201 to perform precise positioning of the spatial pose, and performs full-diameter scanning on the surface defects of the convex spherical optical element 201.
  • the image acquisition control module 740 includes a subaperture image acquisition unit 741 and a microscope magnification control unit 742, wherein the subaperture image acquisition unit 741 performs acquisition of the subaperture image for the subsequent image processing module 1100, and the microscope magnification control unit 742 automatically controls the imaging magnification of the microscopic scattering dark field imaging unit 400.
  • the control subsystem 700 is the hub connecting the defect imaging subsystem 200 and the defect evaluation method 800 within the defect evaluation system 100.
  • the control subsystem 700 achieves precise control of the defect imaging subsystem 200.
  • the image data, position, and status information obtained by the defect imaging subsystem 200 also need to be passed by the control subsystem 700 to the defect evaluation method 800 for processing.
  • the control subsystem 700 enables rapid transfer of information between the defect imaging subsystem 200 and the defect evaluation method 800 and efficient co-processing, completes automated scanning of the convex spherical optical element 201 and improves detection efficiency of the overall system.
  • the implementation of the defect evaluation method 800 includes a spherical automatic centering module 900, a scan path planning module 1000, an image processing module 1100, and a defect calibration module 1400.
  • the spherical automatic centering module 900 is used to complete the automatic centering of the convex spherical optical element 201, achieving accurate measurement of its radius of curvature and consistency adjustment of the self-rotating shaft 565 and the optical axis 205 of the convex spherical optical element.
  • the scan path planning module 1000 is used to plan an optimal spherical scan path, so as to cover the entire surface of the element with as few subaperture images as possible and to guarantee that no area is missed while reducing component motion.
  • the image processing module 1100 is used to achieve high precision spherical surface defect detection.
  • the defect calibration module 1400 is configured to establish a relationship between the number of pixels of the subaperture image at an arbitrary position on the spherical surface and the actual size, and obtain actual size information of the defect.
  • Step 1 Automatically centering the spherical surface by the spherical automatic centering module 900;
  • Step 2 The optimal spherical scan path is planned by the scan path planning module 1000 to complete the spherical full aperture scan;
  • Step 3 The subaperture image is processed by the image processing module 1100 and the defect calibration module 1400 to obtain spherical defect information.
  • the spherical surface is automatically centered by the spherical automatic centering module 900 as described in step 1, including accurate measurement of the radius of curvature of the convex spherical optical element 201 and adjustment of the shaft-system consistency, where the consistency adjustment makes the optical axis 205 of the convex spherical optical element coincide with the self-rotating axis 565, providing a reference position for planning the optimal spherical scan path in step 2. Referring to FIG. 12, the method specifically includes the following steps:
  • the convex spherical optical element 201 is moved to an initial position, which is a position where the optical axis 205 of the convex spherical optical element and the optical axis 615 of the spherical centering unit are largely coincident.
  • Controlling the Z-axis guide 530 to scan in the Z direction, the clearest crosshair image is found during the scanning process using the image entropy sharpness evaluation function; FIG. 13A shows an image entropy sharpness evaluation function curve.
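As an illustration, the image-entropy sharpness metric can be sketched as follows. This is a minimal sketch, not the patented implementation: the 256-bin gray-level histogram and the `best_focus` helper are assumptions.

```python
import numpy as np

def image_entropy(image):
    """Shannon entropy of the gray-level histogram; a well-focused
    crosshair image has a richer intensity distribution and therefore
    a higher entropy than a defocused (blurred) one."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins (log(0) undefined)
    return float(-np.sum(p * np.log2(p)))

def best_focus(z_positions, frames):
    """Return the Z position whose frame maximizes the entropy metric."""
    scores = [image_entropy(f) for f in frames]
    return z_positions[int(np.argmax(scores))]
```

Scanning the Z axis while evaluating `image_entropy` on each frame traces out a curve like that of FIG. 13A, whose maximum marks the clearest crosshair image.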
  • the moving X-axis guide 510 and the Y-axis guide 520 image the crosshairs to the center of the field of view such that the optical axis 205 of the convex spherical optical element coincides with the optical axis 615 of the spherical centering unit.
  • the position of the self-rotating shaft 565 is measured by the rotation measurement method used in the optical adjustment, as follows:
  • a self-rotating stage 540 is mounted under the self-centering clamping mechanism 560 so that the convex spherical optical element 201 can rotate about its own axis. Each time the self-rotating table 540 is rotated by 30°, the CCD 609 collects a crosshair image. The position of the crosshair image in the field of view of the CCD 609 differs with the rotation angle, and the overall trajectory is a circle, as shown in FIG. 13B; the center 910 of the circle is the position of the self-rotating shaft 565.
  • the motion trajectory is obtained by fitting the centers of the crosshair images with the least-squares best-fit circle method, which yields the center of the trajectory. The distance from each crosshair image center to this circle center is then calculated to determine the maximum deviation of the crosshair image.
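The least-squares best-fit circle step can be sketched with the classical Kåsa linearization (an illustrative choice of fitting method; the function names are hypothetical):

```python
import numpy as np

def fit_circle(xs, ys):
    """Kåsa least-squares circle fit: solve the linear model
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F), then recover the
    center (cx, cy) and radius r of the trajectory circle."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    b = -(xs**2 + ys**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

def max_center_deviation(xs, ys):
    """Maximum distance from the crosshair-image centers to the fitted
    circle center, i.e. the decentering to be corrected."""
    cx, cy, _ = fit_circle(xs, ys)
    return float(np.max(np.hypot(np.asarray(xs) - cx, np.asarray(ys) - cy)))
```

Feeding in the twelve crosshair centers collected at 30° steps gives the trajectory center 910 (the self-rotating shaft position) and the maximum deviation in one pass.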
  • Step 1-9 Move the Z-axis guide 530 to the position of the theoretical center of curvature obtained by initialization, then control the Z-axis guide 530 to scan in the Z direction and find the clearest crosshair image during the scanning process, then jump to Step 1-5; the distance moved by the Z axis from the surface image to the spherical image is recorded at the same time, which gives the radius of curvature of the convex spherical optical element 201.
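The radius measurement thus reduces to finding the Z separation of the two sharpness peaks, one at the surface image and one at the spherical image. A minimal sketch, assuming the scan yields a sampled sharpness curve (the peak-picking strategy is an assumption):

```python
import numpy as np

def radius_of_curvature(z_positions, sharpness):
    """Estimate the radius of curvature as the Z travel between the two
    focus peaks of the centering scan (surface image and spherical
    image).  Sketch: takes the two strongest interior local maxima."""
    z = np.asarray(z_positions, float)
    s = np.asarray(sharpness, float)
    # Interior local maxima of the sharpness curve.
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] > s[i - 1] and s[i] >= s[i + 1]]
    if len(peaks) < 2:
        raise ValueError("need two focus peaks (surface and spherical image)")
    # Keep the two strongest peaks and return their Z separation.
    peaks = sorted(peaks, key=lambda i: s[i], reverse=True)[:2]
    return float(abs(z[peaks[0]] - z[peaks[1]]))
```

In practice the sharpness values would come from the image-entropy evaluation of the crosshair frames collected during the Z scan.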
  • the self-centering clamping mechanism 560 is adjusted to move the center of the crosshair image to the center of the track circle, at which point the optical axis 205 of the convex spherical optical element is adjusted to coincide with the spin axis 565.
  • Moving the X-axis rail 510 and the Y-axis rail 520 moves the crosshair image to the center of the field of view of the CCD 609, at which point the optical axis 205 of the convex spherical optical element is adjusted to coincide with the optical axis 615 of the spherical centering unit.
  • the optical axis 205 of the convex spherical optical element and the self-rotating axis 565 coincide with the optical axis 615 of the spherical centering unit.
  • the position of the convex spherical optical element 201 is the reference position of the scanning path planning.
  • the optimal spherical scan path is planned by the scan path planning module 1000 in step 2 to complete the spherical full aperture scan.
  • the method includes the following steps:
  • a spherical coordinate system X s Y s Z s is defined, wherein the spherical coordinate system origin O s point 1004s is the curvature center position of the convex spherical optical element 201, and the Z s axis 1003s passes the spherical vertex position 1009.
  • a combined two-dimensional motion of swinging around the X s axis 1001s and rotating around the Z s axis 1003s is required, and the scan follows latitude and longitude trajectories.
  • the convex spherical optical element 201 is swung by the θ 1 angle 1007a around the X s axis 1001s, and the subaperture image 1020 is acquired on the warp 1005, as shown in FIG. 14B; it is then rotated by the φ 1 angle 1008a around the Z s axis 1003s, and the subaperture image 1020a is acquired on the weft 1006a, as shown in FIG. 14C.
  • the subaperture image is acquired on the weft 1006a, thereby obtaining a plurality of subaperture images, as shown in FIG. 14D.
  • the convex spherical optical element 201 is again swung by the θ 2 angle 1007b around the X s axis 1001s, and the subaperture image 1030 is acquired on the warp 1005.
  • the subaperture image is acquired on the weft 1006b, as shown in FIG. 14F; the spherical optical element is again swung around the X s axis by the θ 2 angle, the multiple subaperture images on the next weft layer are acquired, and so on until the sampling process covering the entire surface is completed.
  • a spherical subaperture planning model is first established, as shown in FIG. 15.
  • the subaperture image 1020 and the subaperture image 1030 are two adjacent subaperture images acquired on the warp 1005; the subaperture image 1020a is the adjacent subaperture image acquired on the weft 1006a where the subaperture image 1020 is located, and the subaperture image 1030a is the adjacent subaperture image acquired on the weft 1006b where the subaperture image 1030 is located.
  • the intersection of the subaperture images 1020 and 1020a at the bottom is P cd 1040a, and the intersection of the subaperture images 1030 and 1030a at the top is P cu 1040b.
  • the sufficient condition for subaperture coverage without missed areas is that the arc length 1045b is less than or equal to the arc length 1045a; under this constraint, a correspondence between the swing angles θ 1 1007a, θ 2 1007b and the spin angles φ 1 1008a, φ 2 1008b is established and solved to obtain the planning result, as shown in FIG. 16.
  • the solution of the swing angles θ 1 1007a, θ 2 1007b and the spin angles φ 1 1008a, φ 2 1008b is as follows:
  • the initial swing angles θ 1 1007a and θ 2 1007b are given according to the above three parameters; then the spin angles φ 1 1008a and φ 2 1008b are calculated in accordance with the overlapping range of adjacent subaperture images on the weft line, and the arc lengths 1045b and 1045a are then calculated.
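The latitude–longitude planning idea can be sketched as below. This is not the patented solver: the subaperture angular half-width `half_angle_deg` and a fixed `overlap` fraction are assumed parameters, and the spin step on each weft ring is derived from the requirement that adjacent subapertures overlap.

```python
import math

def plan_scan(half_angle_deg, overlap=0.2, max_swing_deg=90.0):
    """Latitude-longitude subaperture planning sketch: step the swing
    angle (about X_s) so adjacent weft rings overlap, then on each ring
    choose a spin step (about Z_s) so adjacent subapertures on the weft
    overlap.  Returns a list of (swing_deg, spin_deg) poses."""
    step = 2 * half_angle_deg * (1 - overlap)   # swing increment, warp dir.
    poses = [(0.0, 0.0)]                        # subaperture at the vertex
    theta = step
    while theta <= max_swing_deg:
        # The weft ring radius scales with sin(theta), so the angular
        # extent of a subaperture along the ring grows near the vertex.
        ring_radius = math.sin(math.radians(theta))
        extent = 2 * math.degrees(math.asin(min(
            1.0, math.sin(math.radians(half_angle_deg)) / max(ring_radius, 1e-9))))
        dphi = extent * (1 - overlap)           # spin increment on this ring
        n = max(1, math.ceil(360.0 / dphi))     # subapertures on this ring
        poses += [(theta, k * 360.0 / n) for k in range(n)]
        theta += step
    return poses
```

Rounding `360.0 / dphi` up with `ceil` keeps every pair of ring neighbors at least as overlapped as requested, which plays the role of the arc-length constraint (1045b ≤ 1045a) above.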
  • the sub-aperture image is processed by the image processing module 1100 and the defect calibration module 1400 to obtain spherical defect information.
  • the method includes the following steps:
  • the obtained imaging sub-aperture image is a two-dimensional image. Since information compression along the imaging optical axis direction occurs during optical imaging, spherical three-dimensional reconstruction is first performed to correct information compression in the imaging optical axis direction generated when the surface defects of the convex spherical optical element 201 are optically imaged.
  • Three-dimensional sub-aperture image is obtained after three-dimensional reconstruction of the spherical surface.
  • the information of the three-dimensional sub-aperture image is projected onto the two-dimensional plane by full-caliber projection to obtain a full-caliber projection image.
  • the defect calibration module 1400 performs low-magnification feature extraction on the obtained full-caliber projection image, then uses three-dimensional inverse reconstruction to obtain the three-dimensional size of the defect; finally, the spherical defect calibration data obtained by the defect calibration module 1400 is used to determine the actual size of the defect and the position coordinates of the defect on the surface of the convex spherical optical element 201.
  • the imaging magnification of the micro-scattering dark field imaging unit 400 is adjusted to a high magnification; then, according to the position coordinates obtained in step 3-3, the surface defect is moved to the center of the high-magnification field to perform high-power image acquisition;
  • the spherical defect calibration data obtained by the defect calibration module 1400 obtains a defect evaluation result of the order of micrometers.
  • the defect evaluation results are outputted in the form of a spherical three-dimensional preview, an electronic report, and a defect location map.
  • the process, described in step 3-1, by which the microscopic scattering dark field imaging unit 400 images the surface of the convex spherical optical element 201 onto the image plane to obtain the imaging subaperture image is as follows, referring specifically to FIG. 18:
  • a point p 1201 on the surface of the convex spherical optical element 201 is moved, driven by the spatial pose adjustment unit 500, according to the scan path planned by the scan path planning module 1000 described in step 2.
  • the image plane coordinate system X c Y c is converted into the image coordinate system X i Y i to obtain the imaging subaperture image 1210, as shown in process 1263 in FIG. 18.
  • the X c axis 1001c and the Y c axis 1002c constitute an image plane coordinate system X c Y c whose coordinate origin is the intersection of the optical axis 415 of the microscopic scattering dark field imaging unit 400 and the imaging subaperture image 1210;
  • the X i axis 1001i and the Y i axis 1002i constitute the image coordinate system X i Y i , and its coordinate origin O i point 1004i is the upper left corner of the acquired digital image.
  • the spherical three-dimensional reconstruction described in step 3-1 refers to simplifying the imaging process of the micro-scattering dark-field imaging unit 400 into a pinhole imaging model, and then using the geometric relationship to reconstruct the imaging subaperture image 1210 into a three-dimensional subaperture image 1220.
  • the process of acquiring the full-caliber projection image described in step 3-2 specifically includes the following steps:
  • the spherical subaperture image 1230 is vertically projected onto a plane, as shown in process 1266 of FIG. 19, to obtain a projected subaperture image 1240, thereby reducing the amount of data needed to characterize a subaperture image and greatly simplifying the computation required for subsequent feature extraction;
  • projected subaperture images are stitched by direct stitching within each latitude layer and annular stitching between longitude layers. As shown in Figure 22, the projection subaperture image stitching process is as follows:
  • in step 3-3, low-magnification feature extraction is performed on the obtained full-aperture projection image; the spherical defect calibration data obtained by the defect calibration module 1400 are then used to detect the actual size of the defect; finally, the true size of the defect and its position coordinates on the surface of the convex spherical optical element 201 are obtained through three-dimensional inverse reconstruction, as follows:
  • the pixel counts of the three-dimensional size and position coordinates of defects on the surface of the convex spherical optical element 201 are obtained through three-dimensional back-projection of the defects, the back-projection process being shown in process 1267 of FIG. 20;
  • the pixel counts of the three-dimensional size and position coordinates of the defects are then converted into actual sizes and position coordinates.
  • the spherical defect calibration data described in steps 3-3 and 3-4 includes defect length calibration data and defect width calibration data.
  • the size and position coordinates of defects obtained through the image processing module 1100 are all in units of pixels; the defect calibration module 1400 therefore establishes the relationship between the pixel count of the subaperture image at any position on the spherical surface and the actual size, from which the actual length, width, and position coordinates of a defect can be obtained.
  • the length calibration process is to obtain the relationship between the actual length of the standard line segment at any position on the spherical surface and the number of pixels of the spherical subaperture image. As shown in Figure 23, the length calibration data is obtained as follows:
  • first, a standard line segment d l 1420 is taken on the planar object surface; the length of d l is measured with a standard measuring instrument.
  • the standard line segment d l 1420 is imaged by the micro-scattering dark field imaging unit 400, and its image d p 1410 is obtained on the imaging sub-aperture image 1210.
  • the position coordinates of each pixel of the defect are first obtained by feature extraction; according to these pixel coordinates, the continuous defect is discretized into n line segments, and the line-segment equations are obtained.
  • an inverse-projection restoration is performed for each line segment to obtain the arc C i corresponding to the line segment l i on the sphere of radius R pixel; the defect pixel length is then obtained from the spherical integral formula by summing the arc lengths ∫Ci ds over all segments (where ds is the curve element), and substituting the calibration coefficient k gives the actual defect length.
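The length computation can be sketched numerically as follows: each 2D segment is lifted back onto the sphere of radius R_pixel (inverting the vertical projection), its arc length is approximated by summed chord lengths, and the total is scaled by the calibration coefficient k = R/R_pixel. The function name and sampling scheme are illustrative, not from the source:

```python
import numpy as np

def defect_length(segments, R_pixel, R_actual, n_samples=200):
    """Estimate the actual length of a defect discretised into 2D segments.

    segments : list of ((x0, y0), (x1, y1)) endpoints in projected-image pixels
    R_pixel  : radius of curvature of the reconstructed sphere, in pixels
    R_actual : true radius of curvature R measured by spherical centering
    The calibration coefficient is k = R_actual / R_pixel.
    """
    k = R_actual / R_pixel
    total_px = 0.0
    for (x0, y0), (x1, y1) in segments:
        xs = np.linspace(x0, x1, n_samples)
        ys = np.linspace(y0, y1, n_samples)
        # inverse vertical projection: lift each sample onto the sphere
        zs = np.sqrt(np.maximum(R_pixel ** 2 - xs ** 2 - ys ** 2, 0.0))
        pts = np.column_stack([xs, ys, zs])
        # arc length approximated by summed chord lengths
        total_px += np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    return k * total_px
```

Near the apex the lifted arc is nearly flat, so a 100-pixel segment with k = 0.01 yields a length close to 1 unit, as expected.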
  • the width calibration process is to obtain the relationship between the actual width of the standard line segment at any position on the spherical surface and the number of pixels of the three-dimensional subaperture image.
  • the width calibration results at low magnification are for reference only and cannot be used as evaluation results, so the width should be calibrated and evaluated at high magnification.
  • the width calibration process can take a similar calibration method as the length calibration process.
  • the width calibration data is obtained as follows:
  • a standard line segment is taken through the tangent plane 1250 of the origin in the three-dimensional coordinate system, and its actual width 1420w is measured by a standard measuring instrument.
  • the standard line segment is imaged by the micro-scattering dark field imaging unit 400, and its image is obtained on the imaging sub-aperture image 1210, and the image plane width pixel number is 1410w.
  • the image forming sub-aperture image 1210 is reconstructed into a three-dimensional sub-aperture image 1220, and a spherical image of a standard line segment is obtained on the three-dimensional sub-aperture image 1220, and the number of arc length pixels 1430w in the width direction is the number of defective width pixels.
  • since the feature is located at the center of the field of view when the high-magnification image is acquired, the information compression along the imaging optical axis is negligible, so the actual width of the defect is equal to the actual width 1420w of the standard line segment.
  • the discrete points 1450 of the correspondence between the actual defect width and the defect-width pixel count are piecewise-fitted to obtain the optimal fitting curve, that is, the scaling transfer function 1460.
  • by using the scaling transfer function 1460, the actual width corresponding to any width pixel count on the sphere can be calculated, completing the width scaling.
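A sketch of how such a scaling transfer function can be obtained is given below, using hypothetical calibration points; a single low-order polynomial stands in for the patent's piecewise (segmented) fit:

```python
import numpy as np

def fit_width_transfer(pixel_counts, actual_widths, degree=2):
    """Fit a transfer function mapping width pixel count to actual width
    from measured calibration points; returns a callable polynomial."""
    coeffs = np.polyfit(pixel_counts, actual_widths, degree)
    return np.poly1d(coeffs)

# hypothetical calibration points: (pixel count, measured width in micrometres)
px = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
um = np.array([2.6, 5.1, 10.3, 20.4, 41.0])
transfer = fit_width_transfer(px, um)
```

Any measured width pixel count can then be passed through `transfer` to obtain an estimated actual width in the same units as the calibration measurements.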
  • Embodiment 2 of the present invention will be described in detail in conjunction with FIGS. 26-30.
  • a spherical optical element surface defect evaluation system and a method thereof for evaluating a concave spherical optical element will be described in Embodiment 2.
  • the spherical optical element surface defect evaluation system for evaluating a concave spherical optical element in Embodiment 2 of the present invention and the method thereof are similar to the spherical optical element surface defect evaluation system and method for evaluating the convex spherical optical element in Embodiment 1 of the present invention.
  • the parts in Figures 26-30 that are related to Figures 1-25 will be given the same reference numerals.
  • the focus discussed in Embodiment 2 will also lie in a different portion from Embodiment 1.
  • Fig. 26 is a view showing an illumination light path corresponding to the second embodiment.
  • the uniform surface light source 320 emits parallel light, which passes through the spherical light source lens group 330 to form a converging beam of aperture angle θ l .
  • the specific process is as follows: first, the position of the zoom lens group 332 in the spherical light source lens group 330 is calculated according to the radius of curvature of the concave spherical optical element 1501, and the zoom lens group 332 is moved to the calculated position; secondly, the parallel light emitted by the uniform surface light source 320 enters the spherical light source lens group 330 and, after passing in turn through the front fixed lens group 331, the zoom lens group 332, and the rear fixed lens group 333, forms a converging beam of aperture angle θ l .
  • Fig. 27 shows, corresponding to Fig. 26, the relationship between the radius of curvature of the concave spherical optical element and the aperture angle θ l of the spherical light source when the incident angle γ is 40°. It can be seen that as the radius of curvature increases, the aperture angle θ l of the spherical light source decreases and the illuminated range on the element surface shrinks correspondingly; the aperture angle θ l is at most 12°. Comparing Fig. 27 with Fig. 5, the aperture angle formed when the spherical light source illuminates a concave spherical optical element is smaller than that formed when illuminating a convex spherical optical element of the same radius of curvature; the aperture angle drops more sharply as the radius of curvature increases, and the corresponding critical radius of curvature at which the aperture angle reaches 0° is smaller.
  • Fig. 28 is a view showing the configuration of a spherical centering unit 600 corresponding to the second embodiment.
  • the centering optical path of the concave spherical optical element 1501 is similar to that of the convex spherical optical element 201 in Embodiment 1.
  • the relative position of the center of curvature 1502 of the concave spherical optical element and the reticle image 1710 is judged from the position and sharpness of the crosshair image in the picture; the judgment process is as follows:
  • Fig. 29A shows the optical path when there is a Z-direction deviation between the reticle image 1710a corresponding to Fig. 28 and the center of curvature 1502 of the concave spherical optical element. In this case, the incident light passing through the center-of-curvature image does not coincide with the reflected light, so a blurred crosshair image is obtained on the CCD 609, as shown in Fig. 29B.
  • FIG. 30A is the optical-path diagram when there are X- and Y-direction deviations between the reticle image 1710b corresponding to FIG. 28 and the center of curvature 1502 of the concave spherical optical element.
  • the optical axis 1505 of the concave spherical optical element does not coincide with the optical axis 615 of the spherical centering unit, and the reflected light is focused on the CCD 609 through the imaging mirror to form a cross-hair image with clear focus but not in the center of the field of view, as shown in FIG. 30B.
  • the relative position of the center of curvature 1502 of the concave spherical optical element in the three-dimensional space can be determined by the above analysis using the different states of the cross-hair image on the CCD 609.
  • Embodiment 2 has described the spherical optical element surface defect evaluation system and method when evaluating a concave spherical optical element.
  • the defect evaluation method is the same as in Embodiment 1, but since the surface shape differs from that of a convex spherical optical element, the illumination unit 300 and the spherical centering unit 600 also differ.
  • Embodiment 3 of the present invention will be described in detail in conjunction with Figs. 31-33.
  • a spherical optical element surface defect evaluation system and a method thereof for evaluating a small-diameter spherical optical element will be described.
  • the parts in Figures 31-33 that are related to Figures 1-25 will be given the same reference numerals.
  • the focus discussed in Embodiment 3 will also lie in a different portion from Embodiment 1.
  • the small-diameter spherical optical element 1801 discussed in this embodiment is characterized in that its aperture is smaller than the illumination aperture of the illumination unit 300 and the object-side field of view of the micro-scattering dark-field imaging unit 400; therefore, the micro-scattering dark-field imaging unit 400 needs to image only a single subaperture at the spherical apex 1009 (shown in Figure 15) to obtain a full-aperture image of the entire small-diameter spherical surface.
  • as shown in FIGS. 31-33, the small-diameter spherical optical element surface defect evaluation system and method of Embodiment 3 do not require a scan path planning module, and the image processing module 2000 only needs to process the single subaperture. Accordingly, the defect evaluation method 1900 is also simpler than in Embodiments 1 and 2; the specific flow is as follows.
  • the defect evaluation method 1900 includes a spherical automatic centering module 900, an image processing module 2000, and a defect calibration module 1400.
  • Step 1 Automatically centering the spherical surface by the spherical automatic centering module 900;
  • Step 2 The subaperture image is processed by the image processing module 2000 and the defect calibration module 1400 to obtain spherical defect information.
  • the processing of the subaperture image by the image processing module 2000 and the defect calibration module 1400 in step 2 to obtain spherical defect information specifically includes the following steps:
  • the obtained imaging subaperture image is a two-dimensional image, so spherical three-dimensional reconstruction is first performed to correct the information compression along the imaging optical axis that optical imaging introduces into the surface defects of the small-diameter spherical optical element 1801; a three-dimensional subaperture image is obtained after the spherical three-dimensional reconstruction.
  • the information of the three-dimensional subaperture image is projected onto the two-dimensional plane by single subaperture projection to obtain a single subaperture projection image.
  • the imaging magnification of the micro-scattering dark field imaging unit 400 is adjusted to a high magnification; then, according to the position coordinates obtained in step 2-3, the surface defect is moved to the center of the high power field, and high-power image acquisition is performed;
  • the spherical defect calibration data obtained by the defect calibration module 1400 obtains a defect evaluation result of the order of micrometers.
  • the defect evaluation results are outputted in the form of a spherical three-dimensional preview, an electronic report, and a defect position map.


Abstract

A spherical optical element surface defect evaluation system and method. The evaluation system (100) comprises a defect imaging subsystem (200) and a control subsystem (700). The defect imaging subsystem (200) is used to acquire microscopic scattering dark-field images suitable for digital image processing; the control subsystem (700) is used to control the motion of the components within the defect imaging subsystem so as to acquire images of the spherical optical element surface. The defect imaging subsystem (200) comprises an illumination unit (300), a microscopic scattering dark-field imaging unit (400), a spatial pose adjustment unit (500), and a spherical centering unit (600); the motion and adjustment of the illumination unit (300), the microscopic scattering dark-field imaging unit (400), the spatial pose adjustment unit (500), and the spherical centering unit (600) are all driven and controlled by the control subsystem (700). Automated quantitative detection of surface defects on spherical optical elements is thereby achieved, improving detection efficiency and accuracy.

Description

Spherical optical element surface defect evaluation system and method — Technical Field
The present invention belongs to the technical field of machine-vision inspection, and specifically relates to a spherical optical element surface defect evaluation system and method.
Background Art
Spherical optical elements are widely used in large-aperture space telescopes, inertial confinement fusion (ICF) systems, high-energy laser systems, and the like. Surface defect features such as scratches and digs not only degrade the imaging quality of an optical system; in high-energy laser systems they also produce unwanted scattering and diffraction that cause energy loss, and in high-power laser systems this energy loss can even cause secondary damage because the local energy is too high. It is therefore necessary to inspect the surface defects of a spherical optical element before use and to evaluate the defect information digitally, so as to provide a reliable numerical basis for the use of spherical optical elements.
The traditional method for inspecting surface defects of spherical optical elements is visual inspection: the spherical surface is illuminated with strong light, and the human eye observes the reflected and transmitted light from different directions. Visual inspection is strongly affected by the inspector's skill, is highly subjective, causes eye fatigue over long sessions, and cannot provide a quantitative description of the defect information. A spherical optical element surface defect evaluation system and method are therefore needed that can evaluate surface defects automatically, replacing manual visual inspection with machine vision and greatly improving detection efficiency and accuracy.
Summary of the Invention
The object of the present invention is to address the deficiencies of the prior art and, in order to solve the automated detection of surface defects on spherical optical elements, to provide a spherical optical element surface defect evaluation system and method.
Based on the principle of microscopic scattering dark-field imaging, the present invention scans subaperture images over the surface of a spherical optical element and then obtains surface defect information by image processing. The invention fully exploits the fact that, when an annular illumination beam strikes the surface of a spherical element, surface defects excite scattered light; subaperture images covering the full aperture of the tested sphere are scanned and imaged, and spherical defects are detected using global correction of spherical subaperture images, three-dimensional stitching, two-dimensional projection, and digital feature extraction. Using defect calibration data, the size and position of a defect can be given quantitatively.
The spherical optical element surface defect evaluation system comprises a defect imaging subsystem and a control subsystem. The defect imaging subsystem acquires microscopic scattering dark-field images suitable for digital image processing; the control subsystem controls the motion of the components within the defect imaging subsystem so as to acquire images of the spherical optical element surface. The system is characterized in that the defect imaging subsystem comprises an illumination unit, a microscopic scattering dark-field imaging unit, a spatial pose adjustment unit, and a spherical centering unit. The illumination unit provides the dark-field illumination required for imaging by the microscopic scattering dark-field imaging unit; the microscopic scattering dark-field imaging unit collects the light scattered from the element surface and forms an image; the spatial pose adjustment unit provides five-dimensional position and attitude adjustment, enabling not only three-dimensional translation but also rotation and swing of the element, facilitating sharp imaging of different positions on the surface; the spherical centering unit determines the position of the center of curvature of a convex spherical optical element. The motion and adjustment of the illumination unit, the microscopic scattering dark-field imaging unit, the spatial pose adjustment unit, and the spherical centering unit are all driven and controlled by the control subsystem.
The illumination unit comprises spherical light sources and a light-source rotating bracket. Each spherical light source comprises a uniform surface light source and a spherical light source lens group; the spherical light source lens group contains, in order, a front fixed lens group, a zoom lens group, and a rear fixed lens group. The angle between the optical axis of the spherical light source lens group and the optical axis of the microscopic scattering dark-field imaging unit is the incident angle γ, which ranges from 25° to 45°.
The light-source rotating bracket comprises a top fixing plate, an inner-ring shaft, a worm gear, a worm, a servo motor, a motor mount, a bearing, an outer-ring rotating member, and light-source fixing brackets. The spherical light sources are fixed on the light-source fixing brackets, which are fixed on the outer-ring rotating member; the outer-ring rotating member is movably connected to the inner-ring shaft through the bearing; the worm gear is mounted on the outer-ring rotating member, is movably engaged with the worm, and rotates circumferentially under the drive of the servo motor; the servo motor is fixed, together with the inner-ring shaft, to the top fixing plate via the motor mount, and the top fixing plate is fixed on the Z-axis guide rail. The light-source rotating bracket provides omnidirectional illumination of spherical surface defects.
Three spherical light sources are evenly distributed circumferentially at 120° intervals on the outer-ring rotating member via the light-source fixing brackets.
The optical path of the illumination unit is formed as follows: first, the position of the zoom lens group within the spherical light source lens group is calculated from the radius of curvature of the spherical optical element, and the zoom lens group is moved to the calculated position; then, the parallel light emitted by the uniform surface light source enters the spherical light source lens group and passes in turn through the front fixed lens group, the zoom lens group, and the rear fixed lens group to form a converging beam of aperture angle θl.
The microscopic scattering dark-field imaging unit uses the scattered light excited when defects on a smooth surface modulate the incident light to achieve microscopic dark-field imaging of defects, yielding a dark-field image of the defects. The principle is as follows: incident light strikes the surface of the spherical optical element; where the spherical surface is smooth, the incident light is reflected at the surface according to the law of reflection of geometric optics, and the resulting reflected light does not enter the microscopic scattering dark-field imaging unit; where a surface defect exists, the incident light is scattered, and the scattered light is received by the microscopic scattering dark-field imaging unit, forming a dark-field image of the defect.
The spatial pose adjustment unit comprises an X-axis guide rail, a Y-axis guide rail, a Z-axis guide rail, a rotary stage, a swing stage, and a self-centering clamping mechanism. The swing stage comprises an inner plate and an outer plate; the self-centering clamping mechanism is fixed to the rotation shaft of the rotary stage, and the base of the rotary stage is fixed on the inner plate of the swing stage; the inner and outer plates are movably connected so that the inner plate can swing relative to the outer plate; both plates have U-shaped cross-sections; the lower surface of the outer plate of the swing stage is fixed on the worktable of the Y-axis guide rail, the Y-axis guide rail is fixed on the worktable of the X-axis guide rail, and the X-axis and Z-axis guide rails are fixed on the same platform.
The spherical centering unit comprises a light source, a light-source focusing lens group, a reticle, a collimating lens, a beam splitter, an objective lens, a reflecting mirror, an imaging lens, and a CCD. Light from the light source passes through the light-source focusing lens group and illuminates the reticle, which is engraved with a crosshair; the light is then transmitted through the collimating lens into the beam splitter, passes through the beam splitter and the objective lens onto the spherical optical element, and is reflected at its surface, the image of the reticle crosshair at this point being the reticle image; the reflected light passes back through the objective lens into the beam splitter, where it is reflected, and is then reflected by the reflecting mirror and finally focused by the imaging lens onto the CCD, imaging the reticle crosshair on the CCD.
The control subsystem comprises a centering control module, an illumination control module, a five-dimensional guide control module, and an image acquisition control module. The centering control module comprises a centering image acquisition unit and a four-dimensional guide control unit: the centering image acquisition unit controls the CCD in the spherical centering unit to acquire crosshair images; the four-dimensional guide control unit controls the motion of the X-, Y-, and Z-axis guide rails and the rotation of the rotary stage during centering. The illumination control module comprises a light-source rotation control unit and a light-source zoom control unit: the light-source rotation control unit controls the rotation of the light-source rotating bracket in the illumination unit; the light-source zoom control unit drives the zoom lens group to change the aperture angle θl of the emitted converging beam. The five-dimensional guide control module controls the motion of the X-, Y-, and Z-axis guide rails, the rotation of the rotary stage, and the swing of the swing stage during defect detection. The image acquisition control module comprises a subaperture image acquisition unit and a microscope magnification control unit: the subaperture image acquisition unit controls the microscopic scattering dark-field imaging unit to acquire subaperture images; the microscope magnification control unit changes the imaging magnification of the microscopic scattering dark-field imaging unit.
The evaluation method of the spherical optical element surface defect evaluation system comprises a spherical automatic centering module, a scan path planning module, an image processing module, and a defect calibration module. The spherical automatic centering module performs automatic centering of the spherical optical element, achieving precise measurement of its radius of curvature and alignment of the rotation axis with the optical axis of the element; the scan path planning module plans the optimal spherical scan path; the image processing module performs high-precision detection of spherical surface defects; the defect calibration module establishes the relationship between the pixel count of a subaperture image at any position on the sphere and the actual size, yielding the actual size of defects. The method specifically comprises the following steps:
Step 1: automatically center the sphere using the spherical automatic centering module;
Step 2: plan the optimal spherical scan path using the scan path planning module and complete the full-aperture scan of the sphere;
Step 3: process the subaperture images using the image processing module and the defect calibration module to obtain spherical defect information.
The automatic centering of the sphere by the spherical automatic centering module in step 1 specifically comprises the following steps:
1-1. Initialize the spherical centering unit;
1-2. Move the spherical optical element to the initial position, i.e., the position at which the optical axis of the spherical optical element roughly coincides with the optical axis of the spherical centering unit;
1-3. Drive the Z-axis guide rail to scan along the Z direction and, during the scan, use an image-entropy sharpness evaluation function to find the sharpest crosshair image;
1-4. Judge whether the crosshair image is a surface image or a center-of-curvature image, as follows:
Finely move the X- and Y-axis guide rails and observe whether the crosshair in the field of view follows the guide rails. If it follows, the crosshair image coincides with the center of curvature of the spherical optical element and is called the center-of-curvature image of the element; jump to step 1-5. Otherwise, the crosshair image coincides with the element surface and is called the surface image of the element; jump to step 1-9;
1-5. Move the X- and Y-axis guide rails to bring the crosshair image to the center of the field of view, so that the optical axis of the spherical optical element coincides with the optical axis of the spherical centering unit;
1-6. Measure the position of the rotation axis using the rotation measurement method employed in optical alignment, as follows:
The rotary stage mounted below the self-centering clamping mechanism enables the convex spherical optical element to rotate about its own axis. Each time the rotary stage rotates 30°, the CCD acquires a crosshair image; as the rotation angle varies, the position of the crosshair image in the CCD field of view varies, tracing approximately a circle whose center is the position of the rotation axis;
1-7. Fit the trajectory of the crosshair-image centers by least-squares best-circle fitting to obtain the center of the trajectory circle; compute the distance from the center of each crosshair image to the circle center, completing the computation of the maximum deviation of the crosshair images;
1-8. Judge the maximum deviation: if it is within the tolerance, the axis alignment is complete; if it exceeds the maximum permissible error, the optical axis of the spherical optical element does not coincide with the rotation axis; in that case first adjust the self-centering clamping mechanism to move the center of the crosshair image to the center of the trajectory circle, then jump to step 1-5;
1-9. Move the Z-axis guide rail to the theoretical center-of-curvature position obtained at initialization, then drive the Z-axis guide rail to scan along the Z direction and find the sharpest crosshair image during the scan, then jump to step 1-5; meanwhile, record the distance the Z axis moves from the surface image to the center-of-curvature image, which gives the radius of curvature of the convex spherical optical element, i.e., the distance moved along the Z axis.
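The least-squares best-circle fitting of step 1-7 and the deviation check of step 1-8 can be sketched as follows, using the algebraic (Kåsa) circle fit; the function names are illustrative, not from the source:

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 = 2a*x + 2b*y + c
    linearly; returns centre (a, b) and radius sqrt(c + a^2 + b^2)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)

def max_deviation(xs, ys):
    """Maximum distance of the crosshair-image centres from the fitted
    trajectory-circle centre (the quantity checked in step 1-8)."""
    (a, b), _ = fit_circle(xs, ys)
    return float(np.max(np.hypot(np.asarray(xs) - a, np.asarray(ys) - b)))
```

If the returned deviation exceeds the tolerance, the self-centering clamping mechanism is adjusted and the fit repeated, as described in step 1-8.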
The planning of the optimal spherical scan path and completion of the full-aperture scan by the scan path planning module in step 2 specifically comprises the following steps:
2-1. Starting from the reference position obtained by the axis alignment of step 1, use the spatial pose adjustment unit to move the spherical optical element directly below the microscopic scattering dark-field imaging unit, and acquire a subaperture image at the spherical apex with the imaging unit. Define here the spherical coordinate system XsYsZs, whose origin Os is the center of curvature of the spherical optical element and whose Zs axis passes through the spherical apex. To sample the entire tested surface, a combined two-dimensional motion of swinging about the Xs axis and rotating about the Zs axis is required, following a latitude-longitude scanning trajectory;
2-2. Swing the spherical optical element about the Xs axis by angle β1 and acquire a subaperture image on the meridian; then rotate it about the Zs axis by angle α1 and acquire a subaperture image on the latitude line;
2-3. After each rotation about the Zs axis by the same angle α1, acquire a subaperture image on the latitude line, obtaining multiple subaperture images;
2-4. After subaperture acquisition on that latitude line is complete, swing the spherical optical element about the Xs axis by a further angle β2 and acquire a subaperture image on the meridian;
2-5. After each rotation about the Zs axis by the same angle α2, acquire a subaperture image on the latitude line, obtaining multiple subaperture images; the spherical optical element then swings about the Xs axis by β2 again to acquire the subaperture images on the next latitude layer, and so on until sampling covering the full surface is complete.
The planning of the optimal spherical scan path in step 2 is characterized in that a spherical subaperture planning model is first established. In this model, subaperture image A and subaperture image B are two adjacent subaperture images acquired on meridian C; subaperture image Aa is the adjacent subaperture image acquired on the latitude line D1 on which A lies, and subaperture image Bb is the adjacent subaperture image acquired on the latitude line D2 on which B lies. Let Pcd denote the intersection of Aa and A at the bottom, and Pcu the intersection of B and Bb at the top. A sufficient condition for gap-free subaperture coverage is that the arc length through Pcu is less than or equal to the arc length through Pcd. Under this constraint, the correspondence between the swing angles β1, β2 and the rotation angles α1, α2 is established and solved to obtain the planning result. The swing angles β1, β2 and rotation angles α1, α2 are solved as follows:
① Confirm the relevant parameters of the spherical optical element: its radius of curvature, its aperture, and the object-side field of view of the microscopic scattering dark-field imaging unit;
② From these three parameters, set the initial swing angles β1 and β2, then compute the rotation angles α1 and α2 from the requirement that adjacent subaperture images on a latitude line overlap consistently; then compute the arc lengths through Pcu and Pcd;
③ Compare the arc length through Pcu with that through Pcd to decide whether the value of β2 is suitable: if the Pcu arc is longer, reduce β2 by 5% and jump to step ②; if the Pcu arc is less than or equal to the Pcd arc, the subaperture planning covering the full surface is complete.
The processing of subaperture images by the image processing module and the defect calibration module in step 3 to obtain spherical defect information specifically comprises the following steps:
3-1. When the spherical optical element is imaged onto the image plane by the microscopic scattering dark-field imaging unit, the resulting imaging subaperture image is two-dimensional. Because information is compressed along the imaging optical axis during optical imaging, spherical three-dimensional reconstruction is performed first, to correct the compression along the imaging optical axis that optical imaging introduces into the surface defects of the spherical optical element. Spherical three-dimensional reconstruction means simplifying the imaging process of the microscopic scattering dark-field imaging unit into a pinhole imaging model and then using geometric relationships to reconstruct the imaging subaperture image into a three-dimensional subaperture image;
3-2. After spherical three-dimensional reconstruction, a three-dimensional subaperture image is obtained; to facilitate feature extraction, the information of the three-dimensional subaperture image is projected onto a two-dimensional plane by full-aperture projection, yielding a full-aperture projection image;
3-3. Perform low-magnification feature extraction on the full-aperture projection image, then obtain the three-dimensional size of the defect by three-dimensional inverse reconstruction; finally, use the spherical defect calibration data obtained by the defect calibration module to determine the actual size of the defect and its position coordinates on the surface of the spherical optical element;
3-4. Perform high-magnification inspection of the defect to ensure micrometer-level accuracy: first set the imaging magnification of the microscopic scattering dark-field imaging unit to high magnification; then, according to the position coordinates obtained in step 3-3, move the surface defect to the center of the high-magnification field and acquire a high-magnification image; then perform high-magnification feature extraction and use the spherical defect calibration data obtained by the defect calibration module to obtain a micrometer-level defect evaluation result;
3-5. Output the defect evaluation results in the form of a spherical three-dimensional preview, an electronic report, and a defect location map.
The process in step 3-1 of imaging the spherical optical element onto the image plane with the microscopic scattering dark-field imaging unit to obtain the imaging subaperture image is as follows:
3-1-1. A point p on the surface of the spherical optical element, driven by the spatial pose adjustment unit, moves to point p′ along the optimal spherical scan path planned by the scan path planning module of step 2;
3-1-2. The subaperture image is acquired with the microscopic dark-field scattering imaging unit at low magnification; p′ is imaged by the microscopic scattering dark-field imaging unit to the image point p″ on the imaging subaperture;
3-1-3. Through the digital image acquisition process, the image-plane coordinate system XcYc is converted into the image coordinate system XiYi, yielding the imaging subaperture image. The Xc and Yc axes form the image-plane coordinate system XcYc, whose origin is the intersection of the optical axis of the microscopic scattering dark-field imaging unit with the imaging subaperture image; the Xi and Yi axes form the image coordinate system XiYi, whose origin Oi is the upper-left corner point of the acquired digital image.
The acquisition of the full-aperture projection image in step 3-2 specifically comprises the following steps:
3-2-1. Apply the spherical-subaperture global coordinate transform to the reconstructed three-dimensional subaperture image, converting it into a spherical subaperture image;
3-2-2. Project the spherical subaperture image vertically onto a plane to obtain a projected subaperture image;
3-2-3. After obtaining projected subaperture images by vertically projecting the spherical subaperture images onto a plane, stitch the projected subaperture images; once the position and size information of a defect has been obtained on the plane, inverse reconstruction is applied to it, achieving accurate detection of surface defects on the spherical optical element. Projected-subaperture stitching uses direct stitching within each latitude layer and annular stitching between longitude layers, as follows:
① Denoise the projected subaperture images to remove the influence of background noise on projection-stitching accuracy;
② Register the features in the overlapping regions of adjacent denoised projected subaperture images on the same latitude layer;
③ Stitch the registered adjacent subaperture images on the same latitude layer to obtain a latitude-layer annulus image;
④ Extract from the latitude-layer annulus image the smallest annulus containing all overlap regions;
⑤ Extract the registration points of the smallest annulus, find the best matching position, and complete the projected-subaperture stitching process.
The low-magnification feature extraction on the full-aperture projection image in step 3-3, followed by use of the spherical defect calibration data obtained by the defect calibration module to determine the actual defect size, and finally obtaining the true size of the defect by three-dimensional inverse reconstruction and its position coordinates on the surface of the spherical optical element, proceeds as follows:
3-3-1. On the two-dimensional full-aperture image obtained by stitching the projected subaperture images, perform defect feature extraction to obtain the size and position information of defects;
3-3-2. Obtain the pixel counts of the three-dimensional size and position coordinates of the surface defects through three-dimensional inverse projection of the defects;
3-3-3. Using the spherical defect calibration data obtained by the defect calibration module, convert the pixel counts of the three-dimensional size and position coordinates of the defects into actual sizes and position coordinates.
The spherical defect calibration data in steps 3-3 and 3-4 comprise defect length calibration data and defect width calibration data. The length calibration process obtains the relationship between the actual length of a standard line segment at any position on the sphere and the pixel count of the spherical subaperture image. The length calibration data are obtained as follows:
First, a standard line segment dl is taken on a planar object surface, the length of dl being measured with a standard measuring instrument; dl is imaged by the microscopic scattering dark-field imaging unit, and its image dp is obtained on the imaging subaperture image.
Then this imaging subaperture image is reconstructed into a three-dimensional subaperture image, on which the spherical image dc of the standard line segment is obtained; dc is in units of pixels, and from dc the corresponding arc angle dθ is obtained. Since the radius of curvature R of the spherical optical element can be measured precisely by the spherical centering process, the actual size corresponding to dc is d = R·dθ. By finding the correspondence between dc and d, the relationship between the pixel count of the three-dimensional subaperture image and the actual size is calibrated as k = d/dc; substituting d = R·dθ gives k = R·dθ/dc, and since dc = Rpixel·dθ, where Rpixel is the radius of curvature of the reconstructed three-dimensional spherical image (the pixel radius of curvature), the calibration coefficient is k = R/Rpixel. When extracting the length of a surface defect on the same spherical optical element, the position coordinates of each pixel of the defect are first obtained by feature extraction; according to these pixel coordinates, the continuous defect is discretized into n line segments with segment equations li: yi = ki·xi + bi, where i = 1, 2, 3, ..., n. An inverse-projection restoration is performed for each segment to obtain the arc Ci corresponding to li on the sphere of radius Rpixel, and the defect pixel length is obtained from the spherical integral formula:
lpixel = Σi=1..n ∫Ci ds
where ds is the curve element; substituting the calibration coefficient k then gives the actual defect length l = k·lpixel.
The width calibration data are obtained as follows:
First, a standard line segment is taken on the tangent plane through the origin in the three-dimensional coordinate system, its actual width being measured with a standard measuring instrument; the standard line segment is imaged by the microscopic scattering dark-field imaging unit, and its image is obtained on the imaging subaperture image.
Then this imaging subaperture image is reconstructed into a three-dimensional subaperture image, on which the spherical image of the standard line segment is obtained; its arc-length pixel count along the width direction is the defect-width pixel count. Since the feature lies at the center of the field of view when the high-magnification image is acquired, the information compression along the imaging optical axis is negligible, so the actual width of the defect equals the actual width of the standard line segment.
The discrete points of the correspondence between actual defect width and defect-width pixel count are piecewise-fitted to obtain the optimal fitting curve, i.e., the calibration transfer function; the calibration transfer function is used to compute the actual width corresponding to any width pixel count on the sphere.
The present invention achieves automated quantitative detection of surface defects on spherical optical elements; it not only frees the inspector from laborious visual inspection but also greatly improves detection efficiency and accuracy, avoids the influence of individual subjective factors on the results, and ultimately provides a reliable numerical basis for the use and fabrication of spherical optical elements.
Brief Description of the Drawings
FIG. 1 is a block diagram of the spherical optical element surface defect evaluation system and method corresponding to Embodiments 1 and 2.
FIG. 2 is a schematic diagram, corresponding to FIG. 1, showing the parts of the spherical optical element surface defect evaluation system and method in detail.
FIG. 3 is a structural diagram of the illumination unit corresponding to FIG. 1.
FIG. 4 is the illumination optical-path diagram corresponding to Embodiment 1.
FIG. 5 is the curve, corresponding to FIG. 4, of the relationship between the radius of curvature of a convex spherical optical element and the aperture angle of the spherical light source when the incident angle is 40°.
FIG. 6 is a schematic diagram of the principle of microscopic scattering dark-field imaging.
FIG. 7 is a structural diagram of the spherical centering unit corresponding to Embodiment 1.
FIG. 8A is the optical-path diagram, corresponding to FIG. 7, when there is a Z-direction deviation between the reticle image and the center of curvature of the convex spherical optical element.
FIG. 8B is a schematic diagram of the crosshair image in the CCD field of view under the Z-direction deviation of FIG. 8A.
FIG. 9A is the optical-path diagram, corresponding to FIG. 7, when there are X- and Y-direction deviations between the reticle image and the center of curvature of the convex spherical optical element.
FIG. 9B is a schematic diagram of the crosshair image in the CCD field of view under the X- and Y-direction deviations of FIG. 9A.
FIG. 10 is a block diagram of the control subsystem corresponding to FIG. 1.
FIG. 11A is a schematic diagram of the relevant control in the spherical centering state, corresponding to FIG. 10.
FIG. 11B is a schematic diagram of the relevant control in the spherical defect detection state, corresponding to FIG. 10.
FIG. 12 is a flowchart of the spherical automatic centering module corresponding to FIG. 1.
FIG. 13A is the curve of the image-entropy sharpness evaluation function corresponding to FIG. 12.
FIG. 13B is a schematic diagram, corresponding to FIG. 12, of fitting the center of the crosshair trajectory circle.
FIG. 14 is a schematic diagram of the subaperture image scanning process corresponding to FIG. 1.
FIG. 15 is a schematic diagram of the subaperture image planning model corresponding to FIG. 14.
FIG. 16 is a flowchart of the scan path planning module corresponding to FIG. 14.
FIG. 17 is a flowchart of the image processing module corresponding to FIG. 1.
FIG. 18 is a schematic diagram of the three-dimensional subaperture imaging process corresponding to FIG. 17.
FIG. 19 is a schematic diagram, corresponding to FIG. 17, of three-dimensional subaperture image reconstruction, spherical subaperture image stitching, and full-aperture projection.
FIG. 20 is a schematic diagram of the inverse reconstruction of projected subaperture images corresponding to FIG. 17.
FIG. 21 is a flowchart of the full-aperture projection corresponding to FIG. 17.
FIG. 22 is a flowchart of the full-aperture projection stitching process corresponding to FIG. 21.
FIG. 23 is a schematic diagram of the low-magnification calibration process for spherical defect length.
FIG. 24 is a schematic diagram of the high-magnification calibration process for spherical defect width.
FIG. 25 is the width-calibration transfer function curve corresponding to FIG. 24.
FIG. 26 is the illumination optical-path diagram corresponding to Embodiment 2.
FIG. 27 is the curve, corresponding to FIG. 26, of the relationship between the radius of curvature of a concave spherical optical element and the aperture angle of the spherical light source when the incident angle is 40°.
FIG. 28 is a structural diagram of the spherical centering unit corresponding to Embodiment 2.
FIG. 29A is the optical-path diagram, corresponding to FIG. 28, when there is a Z-direction deviation between the reticle image and the center of curvature of the concave spherical optical element.
FIG. 29B is a schematic diagram of the crosshair image in the CCD field of view under the Z-direction deviation of FIG. 29A.
FIG. 30A is the optical-path diagram, corresponding to FIG. 28, when there are X- and Y-direction deviations between the reticle image and the center of curvature of the concave spherical optical element.
FIG. 30B is a schematic diagram of the crosshair image in the CCD field of view under the X- and Y-direction deviations of FIG. 30A.
FIG. 31 is a block diagram of the spherical optical element surface defect evaluation system and method corresponding to Embodiment 3.
FIG. 32 is a schematic diagram, corresponding to FIG. 31, showing the parts of the spherical optical element surface defect evaluation system and method in detail.
FIG. 33 is a flowchart of the image processing module corresponding to FIG. 31.
Detailed Description of the Embodiments
The present invention is further described below with reference to the drawings and embodiments.
The present invention can evaluate surface defects of convex and concave spherical optical elements. Embodiment 1 applies the spherical optical element surface defect evaluation system and method to the evaluation of surface defects of a convex spherical optical element; Embodiment 2 applies them to a concave spherical optical element; Embodiment 3 applies them to a small-aperture spherical optical element. A small-aperture spherical optical element needs only a single subaperture image to obtain the full-aperture dark-field image information, which makes the evaluation method simpler.
The embodiments of the invention are described in detail below using the reference numerals in the figures above; in all figures describing the embodiments, the same component is in principle denoted by the same symbol.
Embodiment 1
Embodiment 1 of the present invention will now be described in detail with reference to FIGS. 1-25.
In Embodiment 1, the spherical optical element surface defect evaluation system and method are described for the case of evaluating a convex spherical optical element.
FIG. 1 is a block diagram of the spherical optical element surface defect evaluation system and method corresponding to Embodiments 1 and 2. As shown in FIG. 1, the defect evaluation system 100 comprises a defect imaging subsystem 200 and a control subsystem 700. The defect imaging subsystem 200 acquires microscopic scattering dark-field images suitable for digital image processing. The control subsystem 700 controls the motion of the illumination unit 300, the microscopic scattering dark-field imaging unit 400, the spatial pose adjustment unit 500, and the spherical centering unit 600 to acquire images of the convex spherical optical element surface.
As shown in FIG. 1, the defect imaging subsystem 200 comprises the illumination unit 300, the microscopic scattering dark-field imaging unit 400, the spatial pose adjustment unit 500, and the spherical centering unit 600. The illumination unit 300 provides the dark-field illumination required for imaging by the microscopic scattering dark-field imaging unit 400. The microscopic scattering dark-field imaging unit 400 collects light scattered from the element surface and forms an image. The spatial pose adjustment unit 500 provides five-dimensional position and attitude adjustment, enabling not only three-dimensional translation but also rotation and swing of the element, facilitating sharp imaging of different surface positions. The spherical centering unit 600 determines the position of the center of curvature of the convex spherical optical element. The motion and adjustment of the illumination unit 300, the microscopic scattering dark-field imaging unit 400, the spatial pose adjustment unit 500, and the spherical centering unit 600 are all driven and controlled by the control subsystem 700.
The illumination unit 300 provides dark-field illumination for the microscopic scattering dark-field imaging unit 400. If an ordinary collimated source were used to illuminate a spherical optical element, then depending on the radius of curvature, incident light not passing through the center of curvature could, after reflection at the sphere, form a bright-field reflection spot through the microscopic scattering dark-field imaging unit 400, destroying the dark-field illumination. The present system therefore develops an illumination unit 300 suited to surface-defect detection of spherical optical elements, producing illumination with different aperture angles for convex spherical optical elements of different radii of curvature and providing them with dark-field illumination.
FIG. 3 is a structural diagram of the illumination unit 300 corresponding to FIG. 1. The illumination unit 300 comprises spherical light sources and a light-source rotating bracket 310; each spherical light source comprises a uniform surface light source 320 and a spherical light source lens group 330, which contains, in order, a front fixed lens group 331, a zoom lens group 332, and a rear fixed lens group 333. The angle between the optical axis of the spherical light source lens group 330 and the optical axis 405 of the microscopic scattering dark-field imaging unit is the incident angle γ, which ranges from 25° to 45°.
The light-source rotating bracket 310 shown in FIG. 3 comprises a top fixing plate 311, an inner-ring shaft 312, a worm gear 313, a worm 314, a servo motor 315, a motor mount 316, a bearing 317, an outer-ring rotating member 318, and light-source fixing brackets 319. The spherical light sources are fixed on the light-source fixing brackets 319, which are fixed on the outer-ring rotating member 318; the outer-ring rotating member 318 is movably connected to the inner-ring shaft 312 through the bearing 317; the worm gear 313 is mounted on the outer-ring rotating member 318, engages the worm 314, and rotates circumferentially under the drive of the servo motor 315; the servo motor 315 is fixed, together with the inner-ring shaft 312, to the top fixing plate 311 via the motor mount 316, and the top fixing plate 311 is fixed on the Z-axis guide rail 530.
The light-source rotating bracket 310 provides omnidirectional illumination of spherical surface defects. Three spherical light sources 301a, 301b, 301c are evenly distributed circumferentially at 120° intervals on the outer-ring rotating member 318 via the light-source fixing brackets 319, and annular illumination is achieved by driving the servo motor 315 through the light-source rotation control unit 721.
FIG. 4 is the illumination optical-path diagram corresponding to Embodiment 1. The uniform surface light source 320 emits parallel light, which passes through the spherical light source lens group 330 to form a converging beam of aperture angle θl. The specific process is as follows: first, the position of the zoom lens group 332 in the spherical light source lens group 330 is calculated from the radius of curvature of the convex spherical optical element 201, and the zoom lens group 332 is moved to that position; then the parallel light from the uniform surface light source 320 enters the spherical light source lens group 330 and passes in turn through the front fixed lens group 331, the zoom lens group 332, and the rear fixed lens group 333 to form a converging beam of aperture angle θl.
FIG. 5 shows, corresponding to FIG. 4, the relationship between the radius of curvature of a convex spherical optical element and the aperture angle θl of the spherical light source when the incident angle γ is 40°. As the radius of curvature increases, the aperture angle θl decreases and the illuminated range on the surface shrinks correspondingly; the aperture angle θl is at most 15°.
The microscopic scattering dark-field imaging unit 400 uses the scattered light excited when defects on a smooth surface modulate the incident light to achieve microscopic dark-field imaging of defects, yielding dark-field images of the defects. The microscopic scattering dark-field imaging unit 400 is the machine-vision module of the defect evaluation system 100.
FIG. 6 is the schematic diagram of the microscopic scattering dark-field imaging principle. An incident ray 210 strikes the surface of the convex spherical optical element 201. Where the spherical surface is smooth, the incident ray 210 is reflected at the surface according to the law of reflection of geometric optics, forming a reflected ray 212, which does not enter the microscopic scattering dark-field imaging unit 400. Where a surface defect 203 exists, the incident ray 210 is scattered, producing scattered rays 211 that are received by the microscopic scattering dark-field imaging unit 400, forming a dark-field image of the defect.
The spatial pose adjustment unit 500 performs arbitrary spatial pose adjustment of the convex spherical optical element 201. As shown in FIG. 2, the spatial pose adjustment unit 500 comprises an X-axis guide rail 510, a Y-axis guide rail 520, a Z-axis guide rail 530, a rotary stage 540, a swing stage 550, and a self-centering clamping mechanism 560. The swing stage 550 comprises an inner plate and an outer plate. The self-centering clamping mechanism 560 is fixed to the rotation shaft of the rotary stage 540, whose base is fixed on the inner plate of the swing stage 550; the inner and outer plates are movably connected so that the inner plate can swing relative to the outer plate, and both have U-shaped cross-sections. The lower surface of the outer plate of the swing stage 550 is fixed on the worktable of the Y-axis guide rail 520; the Y-axis guide rail 520 is fixed on the worktable of the X-axis guide rail 510; the X-axis guide rail 510 and the Z-axis guide rail 530 are fixed on the same platform; the illumination unit 300, the microscopic scattering dark-field imaging unit 400, and the spherical centering unit 600 are all fixed on the Z-axis guide rail 530.
The spherical centering unit 600 provides the hardware basis for centering the convex spherical optical element 201. FIG. 7 is the structural diagram of the spherical centering unit 600 corresponding to Embodiment 1. Light from the light source 601 in the spherical centering unit 600 passes through the light-source focusing lens group 602 onto the reticle 603, which is engraved with a crosshair. The light then passes through the collimating lens 604 into the beam splitter 605, through the beam splitter 605 and the objective lens 606 onto the convex spherical optical element 201, and is reflected at its surface; the image of the reticle crosshair at this point is the reticle image 610. The reflected light passes back through the objective lens 606 into the beam splitter 605, where it is reflected, then is reflected by the mirror 607 and finally focused by the imaging lens 608 onto the CCD 609, imaging the crosshair of the reticle 603 on the CCD 609.
As shown in FIG. 7, when the incident light passing through the objective lens 606 is focused on the surface of the convex spherical optical element 201, the reflected light is symmetric to the incident light about the optical axis 615 of the spherical centering unit, so the reflected light becomes parallel again after passing back through the objective lens 606 and finally forms a sharp crosshair image on the CCD 609; this sharp crosshair image is called the surface image. The position of the surface image in the CCD 609 field of view does not change with small movements of the convex spherical optical element 201 in the X and Y directions. As the spherical centering unit 600 moves down with the Z-axis guide rail 530, when the incident light through the objective lens 606 is focused at the center of curvature 202 of the convex spherical optical element, the reticle image 610 lies at the center of curvature 202 and the reflected light coincides with the incident light, so a sharp crosshair image is again obtained on the CCD 609; this sharp crosshair image is called the center-of-curvature image. Thus, for convex spherical optical elements 201 of different radii of curvature, the CCD 609 captures two sharp crosshair images as the Z-axis guide rail 530 moves: the surface image and the center-of-curvature image. Since crosshair pictures are obtained on the CCD 609, the position of the center of curvature 202 of the convex spherical optical element can be judged from the position and sharpness of the crosshair in the picture; the judgment process is as follows:
FIG. 8A shows the optical path when there is a Z-direction deviation between the reticle image 610a (corresponding to FIG. 7) and the center of curvature 202 of the convex spherical optical element. In this case the incident light passing through the center-of-curvature image does not coincide with the reflected light, so a blurred crosshair image is obtained on the CCD 609, as shown in FIG. 8B. FIG. 9A shows the optical path when there are X- and Y-direction deviations between the reticle image 610b (corresponding to FIG. 7) and the center of curvature 202. In this case the optical axis 205 of the convex spherical optical element does not coincide with the optical axis 615 of the spherical centering unit, and the reflected light focused by the imaging lens on the CCD 609 forms a crosshair image that is in sharp focus but not at the center of the field of view, as shown in FIG. 9B. Thus, from the different states of the crosshair image on the CCD 609, the relative position of the center of curvature 202 of the convex spherical optical element in three-dimensional space can be determined by the above analysis.
The control subsystem 700 performs automated control of each unit in the defect imaging subsystem 200, achieving automated detection of surface defects on spherical optical elements.
FIG. 10 is a block diagram of the control subsystem 700 corresponding to FIG. 1. The control subsystem 700 comprises a centering control module 710, an illumination control module 720, a five-dimensional guide control module 730, and an image acquisition control module 740.
As shown in FIG. 10, the centering control module 710 comprises a centering image acquisition unit 711 and a four-dimensional guide control unit 712. The centering image acquisition unit 711 controls the CCD 609 in the spherical centering unit 600 to acquire crosshair images; the four-dimensional guide control unit 712 controls the motion of the X-axis guide rail 510, the Y-axis guide rail 520, and the Z-axis guide rail 530 and the rotation of the rotary stage 540 during centering.
As shown in FIG. 10, the illumination control module 720 comprises a light-source rotation control unit 721 and a light-source zoom control unit 722. The light-source rotation control unit 721 controls the rotation of the light-source rotating bracket 310 in the illumination unit 300; the light-source zoom control unit 722 drives the zoom lens group 332 to change the aperture angle θl of the emitted converging beam.
As shown in FIG. 10, the five-dimensional guide control module 730 controls the motion of the X-axis guide rail 510, the Y-axis guide rail 520, and the Z-axis guide rail 530, the rotation of the rotary stage 540, and the swing of the swing stage 550 during defect detection.
As shown in FIG. 10, the image acquisition control module 740 comprises a subaperture image acquisition unit 741 and a microscope magnification control unit 742. The subaperture image acquisition unit 741 controls the microscopic scattering dark-field imaging unit 400 to acquire subaperture images; the microscope magnification control unit 742 changes the imaging magnification of the microscopic scattering dark-field imaging unit 400.
The working states of the defect evaluation system 100 include a spherical centering state and a spherical defect detection state. FIG. 11A is a schematic diagram of the relevant control in the spherical centering state, corresponding to FIG. 10. The convex spherical optical element 201 is moved directly below the spherical centering unit 600 by the spatial pose adjustment unit 500, entering the spherical centering state. In this state, the control subsystem 700 performs automatic spherical centering through the centering image acquisition unit 711 and the four-dimensional guide control unit 712: the four-dimensional guide control unit 712 drives the Z-axis guide rail 530 to move the spherical centering unit 600 along Z for automated precise focusing, drives the X- and Y-axis guide rails to translate the convex spherical optical element 201, and controls the rotation of the rotary stage 540.
FIG. 11B is a schematic diagram of the relevant control in the spherical defect detection state, corresponding to FIG. 10. When the convex spherical optical element 201 is moved directly below the microscopic scattering dark-field imaging unit 400 by the spatial pose adjustment unit 500, the system enters the spherical defect detection state. In this state, the control subsystem 700 completes full-aperture defect detection of the convex spherical optical element 201 through the illumination control module 720, the five-dimensional guide control module 730, and the image acquisition control module 740. The illumination control module 720 comprises the light-source rotation control unit 721, which provides omnidirectional illumination of surface defects on the convex spherical optical element 201, and the light-source zoom control unit 722, which provides dark-field illumination of those defects. The five-dimensional guide control module 730 drives the convex spherical optical element 201 to precisely positioned spatial poses for full-aperture scanning of its surface defects. The image acquisition control module 740 comprises the subaperture image acquisition unit 741, which acquires the subaperture images used by the subsequent image processing module 1100, and the microscope magnification control unit 742, which automatically controls the imaging magnification of the microscopic scattering dark-field imaging unit 400.
The control subsystem 700 is the hub in the defect evaluation system 100 connecting the defect imaging subsystem 200 and the defect evaluation method 800. On one hand, it precisely controls the defect imaging subsystem 200; on the other hand, the image data, position, and state information obtained by the defect imaging subsystem 200 are passed by the control subsystem 700 to the defect evaluation method 800 for processing. The control subsystem 700 enables fast information transfer and efficient coordinated processing between the defect imaging subsystem 200 and the defect evaluation method 800, completing the automated scanning of the convex spherical optical element 201 and improving the detection efficiency of the whole system.
The implementation of the defect evaluation method 800 comprises a spherical automatic centering module 900, a scan path planning module 1000, an image processing module 1100, and a defect calibration module 1400.
The spherical automatic centering module 900 performs automatic centering of the convex spherical optical element 201, achieving precise measurement of its radius of curvature and alignment of the rotation axis 565 with the optical axis 205 of the element. The scan path planning module 1000 plans the optimal spherical scan path so that the full surface of the element is covered with as few subaperture images as possible, ensuring gap-free detection while minimizing element motion. The image processing module 1100 performs high-precision detection of spherical surface defects. The defect calibration module 1400 establishes the relationship between the pixel count of a subaperture image at any position on the sphere and the actual size, yielding the actual size of defects.
The method specifically comprises the following steps:
Step 1: automatically center the sphere using the spherical automatic centering module 900;
Step 2: plan the optimal spherical scan path using the scan path planning module 1000 and complete the full-aperture scan;
Step 3: process the subaperture images using the image processing module 1100 and the defect calibration module 1400 to obtain spherical defect information.
The automatic centering of the sphere by the spherical automatic centering module 900 in step 1 comprises precise measurement of the radius of curvature of the convex spherical optical element 201 and axis alignment, where axis alignment means adjusting the optical axis 205 of the element to coincide with the rotation axis 565, providing the planning reference position for the optimal spherical scan path of step 2. Referring to FIG. 12, the steps are as follows:
1-1. Initialize the spherical centering unit 600.
1-2. Move the convex spherical optical element 201 to the initial position, i.e., the position at which its optical axis 205 roughly coincides with the optical axis 615 of the spherical centering unit.
1-3. Drive the Z-axis guide rail 530 to scan along the Z direction and, during the scan, use the image-entropy sharpness evaluation function to find the sharpest crosshair image; the curve of the image-entropy sharpness evaluation function is shown in FIG. 13A.
1-4. Judge whether the crosshair image is a surface image or a center-of-curvature image, as follows:
Finely move the X-axis guide rail 510 and Y-axis guide rail 520 and observe whether the crosshair in the field of view follows the guide rails; if it follows, the center-of-curvature image of the convex spherical optical element 201 has been obtained, so jump to step 1-5; otherwise the surface image of the convex spherical optical element 201 has been obtained, so jump to step 1-9.
1-5. Move the X-axis guide rail 510 and Y-axis guide rail 520 to bring the crosshair image to the center of the field of view, so that the optical axis 205 of the convex spherical optical element coincides with the optical axis 615 of the spherical centering unit.
1-6. Measure the position of the rotation axis 565 using the rotation measurement method employed in optical alignment, as follows:
The rotary stage 540 mounted below the self-centering clamping mechanism 560 enables the convex spherical optical element 201 to rotate about its own axis. Each time the rotary stage 540 rotates 30°, the CCD 609 acquires a crosshair image; as the rotation angle varies, the position of the crosshair image in the CCD 609 field of view varies, tracing approximately a circle, as shown in FIG. 13B, whose center 910 is the position of the rotation axis 565.
1-7. Fit the trajectory of the crosshair-image centers by least-squares best-circle fitting to obtain the center of the trajectory circle; compute the distance from the center of each crosshair image to the circle center, completing the computation of the maximum deviation of the crosshair images.
1-8. Judge the maximum deviation: if it is within tolerance, the axis alignment is complete; if it exceeds the maximum permissible error, the optical axis 205 of the convex spherical optical element does not coincide with the rotation axis 565; in that case first adjust the self-centering clamping mechanism 560 to move the center of the crosshair image to the center of the trajectory circle, then jump to step 1-5.
1-9. Move the Z-axis guide rail 530 to the theoretical center-of-curvature position obtained at initialization, then drive it to scan along the Z direction and find the sharpest crosshair image during the scan, then jump to step 1-5; meanwhile, record the distance the Z axis moves from the surface image to the center-of-curvature image, which gives the radius of curvature of the convex spherical optical element 201 (i.e., the distance moved along the Z axis).
During spherical centering, adjusting the self-centering clamping mechanism 560 to move the center of the crosshair image to the center of the trajectory circle brings the optical axis 205 of the convex spherical optical element into coincidence with the rotation axis 565. Moving the X-axis guide rail 510 and Y-axis guide rail 520 to bring the crosshair image to the center of the CCD 609 field of view brings the optical axis 205 into coincidence with the optical axis 615 of the spherical centering unit. After these adjustments, the optical axis 205 of the convex spherical optical element, the rotation axis 565, and the optical axis 615 of the spherical centering unit coincide, and the position of the convex spherical optical element 201 at this point is the reference position for scan path planning.
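The image-entropy sharpness evaluation function used in step 1-3 is not given explicitly in the source; a common formulation, sketched here as an assumption, is the Shannon entropy of the grey-level histogram, with the sharpest frame taken at the extremum of the entropy curve over the Z scan (the maximum is chosen below, which is usual for entropy-based focus measures, though the appropriate extremum depends on scene content):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of an 8-bit image;
    an assumed formulation of the patent's sharpness evaluation function."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def sharpest_index(frames):
    """Index of the frame with maximal entropy among the Z-scan frames."""
    return int(np.argmax([image_entropy(f) for f in frames]))
```

During the Z scan, the frame returned by `sharpest_index` would correspond to either the surface image or the center-of-curvature image, which step 1-4 then distinguishes.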
The planning of the optimal spherical scan path and the full-aperture scan by the scan path planning module 1000 in step 2, referring to FIGS. 14A-14F, specifically comprises the following steps:
2-1. Starting from the reference position obtained by the axis alignment of step 1, use the spatial pose adjustment unit 500 to move the convex spherical optical element 201 directly below the microscopic scattering dark-field imaging unit 400, and acquire a subaperture image 1010 at the spherical apex 1009 with the imaging unit 400, as shown in FIG. 14A. Define here the spherical coordinate system XsYsZs, whose origin Os 1004s is the center of curvature of the convex spherical optical element 201 and whose Zs axis 1003s passes through the spherical apex 1009. To sample the entire tested surface, a combined two-dimensional motion of swinging about the Xs axis 1001s and rotating about the Zs axis 1003s is required, following a latitude-longitude scanning trajectory.
2-2. The convex spherical optical element 201 swings about the Xs axis 1001s by angle β1 1007a, and a subaperture image 1020 is acquired on the meridian 1005, as shown in FIG. 14B; it then rotates about the Zs axis 1003s by angle α1 1008a, and a subaperture image 1020a is acquired on the latitude line 1006a, as shown in FIG. 14C.
2-3. After each rotation about the Zs axis 1003s by the same angle α1 1008a, a subaperture image is acquired on the latitude line 1006a, yielding multiple subaperture images, as shown in FIG. 14D.
2-4. After subaperture acquisition on latitude line 1006a is complete, the convex spherical optical element 201 swings about the Xs axis 1001s by a further angle β2 1007b, and a subaperture image 1030 is acquired on the meridian 1005.
2-5. After each rotation about the Zs axis 1003s by the same angle α2 1008b, a subaperture image is acquired on latitude line 1006b, as shown in FIG. 14F; the element then swings about the Xs axis by β2 again to acquire the subaperture images on the next latitude layer, and so on until sampling covering the full surface is complete.
为了实现以尽量少的子孔径数目覆盖元件全表面,保证在减少元件动作的基础上,实现无漏检测,首先建立球面子孔径规划模型,如图15所示。在该模型中,子孔径图像1020和子孔径图像1030分别为在经线1005上采集到的两幅相邻子孔径图像,子孔径图像1020a为子孔径图像1020所在的纬线1006a上采集到的相邻子孔径图像,而子孔径图像1030a为子孔径图像1030所在的纬线1006b上采集到的相邻子孔径图像。同时,记子孔径图像1020与1020a在底部的交点(远离球面顶点1009的交点)为Pcd 1040a,子孔径图像1030与1030a在顶部的交点(靠近球面顶点1009的交点)为Pcu 1040b。则实现子孔径无漏检测的充分条件为圆弧长
Figure PCTCN2015089217-appb-000011
1045b小于等于
Figure PCTCN2015089217-appb-000012
1045a,在此约束条件下,建立摆动角度β1 1007a、β2 1007b和自旋角度α1 1008a、α2 1008b之间的对应关系,并求解得到规划结果,如图16所示,所述的摆动角度β1 1007a、β2 1007b和自旋角度α11008a、α2 1008b大小的求解如下:
① Confirm the relevant parameters of the convex spherical optical element 201: its radius of curvature, its aperture, and the object-space field of view of the microscopic scattering dark-field imaging unit 400.
② From these three parameters, set the initial swing angles β1 1007a and β2 1007b, then compute the spin angles α1 1008a and α2 1008b from the requirement that adjacent subaperture images on the same latitude line overlap by a consistent amount. Then compute the arc lengths 1045b and 1045a.
③ Compare arc 1045b with arc 1045a to decide whether the value of β2 1007b is suitable: if arc 1045b exceeds arc 1045a, reduce β2 1007b by 5% and return to step ②; if arc 1045b is less than or equal to arc 1045a, the subaperture planning covering the full surface is complete.
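The iterative search of steps ① through ③ can be sketched as follows. The intersection geometry of adjacent circular subapertures on a unit sphere is an assumption for illustration (the patent defines the arcs through its FIG. 15 model, which may differ), but the 5% shrink loop mirrors step ③, and the names `cap_intersections` and `plan_beta2` are hypothetical.

```python
import math

def cap_intersections(beta, alpha, phi):
    """Polar angles (measured from the vertex) of the two points where adjacent
    circular subapertures of angular radius phi, centered at polar angle beta
    and separated in azimuth by alpha, intersect on a unit sphere."""
    # Solve cos(phi) = cos(t)cos(beta) + sin(t)sin(beta)cos(alpha/2) for t.
    a = math.cos(beta)
    b = math.sin(beta) * math.cos(alpha / 2.0)
    r = math.hypot(a, b)
    base = math.atan2(b, a)
    d = math.acos(max(-1.0, min(1.0, math.cos(phi) / r)))
    return base - d, base + d  # (upper point, like Pcu; lower point, like Pcd)

def plan_beta2(beta1, phi, overlap=0.8, max_iter=200):
    """Shrink beta2 by 5% per iteration until the upper intersection Pcu of the
    second ring no longer lies below the lower intersection Pcd of the first
    ring. `overlap` < 1 leaves margin between neighbors on a latitude line."""
    def alpha_of(beta):
        return overlap * 2.0 * math.asin(min(1.0, math.sin(phi) / math.sin(beta)))
    _, p_cd = cap_intersections(beta1, alpha_of(beta1), phi)
    beta2 = beta1 + 1.5 * phi  # hypothetical initial guess for the next ring
    for _ in range(max_iter):
        p_cu, _ = cap_intersections(beta2, alpha_of(beta2), phi)
        if p_cu <= p_cd:
            return beta2
        beta2 *= 0.95  # reduce beta2 by 5%, as in step 3
    raise RuntimeError("no feasible beta2 found")
```

With the angles in radians, the loop terminates as soon as the gap-free condition of the planning model holds.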
The processing of the subaperture images by the image processing module 1100 and the defect calibration module 1400 in step 3, yielding the spherical defect information, comprises the following steps (see FIG. 17):
3-1. When the convex spherical optical element 201 is imaged onto the image plane by the microscopic scattering dark-field imaging unit 400, the resulting subaperture image is two-dimensional. Because optical imaging compresses information along the imaging optical axis, a spherical three-dimensional reconstruction must first be performed to correct the compression, along the imaging optical axis, that the surface defects of the convex spherical optical element 201 undergo during optical imaging.
3-2. The spherical three-dimensional reconstruction yields a three-dimensional subaperture image. To facilitate feature extraction, the information of the three-dimensional subaperture image is projected onto a two-dimensional plane by full-aperture projection, producing a full-aperture projection image.
3-3. Low-magnification feature extraction is performed on the full-aperture projection image, and the three-dimensional size of each defect is obtained by inverse three-dimensional reconstruction. Finally, the spherical defect calibration data produced by the defect calibration module 1400 are used to measure the actual defect size and to obtain the position coordinates of the defect on the surface of the convex spherical optical element 201.
3-4. High-magnification inspection of the defects guarantees micrometer-level accuracy. The imaging magnification of the microscopic scattering dark-field imaging unit 400 is first switched to high magnification; then, using the position coordinates from step 3-3, each surface defect is moved to the center of the high-magnification field of view and a high-magnification image is acquired; high-magnification feature extraction follows, and the spherical defect calibration data from the defect calibration module 1400 yield micrometer-level defect evaluation results.
3-5. The defect evaluation results are output as a three-dimensional spherical preview, an electronic report, and a defect position map.
The process of step 3-1 by which the convex spherical optical element 201 is imaged onto the image plane by the microscopic scattering dark-field imaging unit 400 to obtain the imaged subaperture image is as follows (see FIG. 18):
3-1-1. A point p 1201 on the surface of the convex spherical optical element 201, driven by the spatial pose adjustment unit 500 along the optimal spherical scan path planned by the scan path planning module 1000 of step 2, moves to the point p′ 1202, as shown by process 1261 in FIG. 18;
3-1-2. With the microscopic scattering dark-field imaging unit 400 at low magnification, a subaperture image is acquired; as shown by process 1262 in FIG. 18, p′ 1202 is imaged by the microscopic scattering dark-field imaging unit 400 to the image point p″ 1211 on the imaged subaperture 1210;
3-1-3. Through digital image acquisition, the image-plane coordinate system XcYc is converted to the image coordinate system XiYi, yielding the imaged subaperture image 1210, as shown by process 1263 in FIG. 18. As shown in FIG. 18, the Xc axis 1001c and Yc axis 1002c form the image-plane coordinate system XcYc, whose origin is the intersection of the optical axis 415 of the microscopic scattering dark-field imaging unit 400 with the imaged subaperture image 1210; the Xi axis 1001i and Yi axis 1002i form the image coordinate system XiYi, whose origin Oi 1004i is the top-left corner of the acquired digital image.
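The coordinate conversion of step 3-1-3 amounts to moving the origin from the optical-axis intersection to the top-left pixel. A minimal sketch, assuming square pixels and a y-axis flip (both are assumptions for illustration; the patent does not give the formula):

```python
def plane_to_image(xc, yc, width, height, pixel_pitch=1.0):
    """Convert centered image-plane coordinates XcYc (origin on the optical
    axis, y up) to image coordinates XiYi (origin at the top-left pixel,
    y down), assuming square pixels of size `pixel_pitch`."""
    xi = width / 2.0 + xc / pixel_pitch
    yi = height / 2.0 - yc / pixel_pitch
    return xi, yi
```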
As shown by process 1264 in FIG. 19, the spherical three-dimensional reconstruction of step 3-1 simplifies the imaging process of the microscopic scattering dark-field imaging unit 400 to a pinhole imaging model, then uses geometric relations to reconstruct the imaged subaperture image 1210 into the three-dimensional subaperture image 1220.
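A pinhole-model reconstruction of this kind can be sketched as a ray-sphere intersection: each pixel defines a ray through the pinhole, and the ray is intersected with the spherical surface. The coordinate conventions (pinhole at the origin, sphere centered on the optical axis) and the function name are illustrative assumptions.

```python
import math

def backproject_to_sphere(x_img, y_img, focal, sphere_center_z, radius):
    """Pinhole-model back-projection: cast the ray through pixel (x, y) on the
    image plane at distance `focal` along +z and intersect it with a sphere of
    the given radius centered on the optical axis at z = sphere_center_z."""
    # Unit ray direction from the pinhole at the origin through the pixel.
    d = (x_img, y_img, focal)
    n = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    d = (d[0] / n, d[1] / n, d[2] / n)
    # Solve |t*d - C|^2 = R^2 with C = (0, 0, sphere_center_z).
    b = -2.0 * d[2] * sphere_center_z
    c = sphere_center_z ** 2 - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0  # nearer intersection (front surface)
    return (t * d[0], t * d[1], t * d[2])
```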
As shown in FIGS. 19 and 21, the acquisition of the full-aperture projection image described in step 3-2 comprises the following steps:
3-2-1. The reconstructed three-dimensional subaperture image 1220 undergoes a spherical-subaperture global coordinate transform, as shown by process 1265 in FIG. 19, converting it into the spherical subaperture image 1230;
3-2-2. As shown by process 1266 in FIG. 19, the spherical subaperture image 1230 is projected vertically onto a plane to obtain the projected subaperture image 1240, which reduces the amount of data needed to represent a subaperture image and greatly simplifies the subsequent feature-extraction computation;
3-2-3. Detecting surface defects of the convex spherical optical element 201 that span several subaperture images likewise requires accurate stitching of the subaperture images before the position and size of such defects can be extracted. Extracting this information in three-dimensional space is difficult, so the spherical subaperture images 1230 are projected vertically onto a plane to obtain the projected subaperture images 1240, which are stitched; once the position and size of the defect have been obtained in the plane, an inverse reconstruction achieves accurate detection of the surface defects of the convex spherical optical element 201.
Projected subaperture stitching uses direct stitching within each latitude layer and annular stitching between meridian layers. As shown in FIG. 22, the stitching proceeds as follows:
① Denoise the projected subaperture images to remove the effect of background noise on stitching accuracy.
② Register features in the overlap regions of adjacent denoised projected subaperture images within the same latitude layer.
③ Stitch the registered adjacent subaperture images within the same latitude layer to obtain a latitude-layer annulus image.
④ Extract, from the latitude-layer annulus image, the smallest annulus containing all overlap regions.
⑤ Extract the registration points of the smallest annulus and find the best match position, completing the projected subaperture stitching.
As FIG. 19 shows, after the spherical-subaperture global coordinate transform, different spherical subaperture images 1230 deform by different amounts under vertical projection and compress surface defects by different amounts, so the subsequent low-magnification feature extraction also requires an inverse three-dimensional reconstruction to remove the compressive distortion introduced by the vertical projection of the spherical subaperture images 1230.
The low-magnification feature extraction on the full-aperture projection image described in step 3-3, followed by use of the spherical defect calibration data from the defect calibration module 1400 to measure the actual defect size, and finally the inverse three-dimensional reconstruction yielding the true defect size and the position coordinates of the defect on the surface of the convex spherical optical element 201, proceed as follows:
3-3-1. Feature extraction is performed on the stitched two-dimensional full-aperture image to obtain defect size and position information;
3-3-2. A three-dimensional inverse projection of each defect yields the pixel counts of the defect's three-dimensional size and position coordinates on the surface of the convex spherical optical element 201; the inverse projection is shown as process 1267 in FIG. 20;
3-3-3. The spherical defect calibration data from the defect calibration module 1400 convert the pixel counts of the defect's three-dimensional size and position coordinates into actual size and position coordinates.
The spherical defect calibration data referred to in steps 3-3 and 3-4 comprise defect-length calibration data and defect-width calibration data. The defect sizes and position coordinates produced by the image processing module 1100 are all in pixels, so the relation between pixel counts and actual size, established by the defect calibration module 1400 for a subaperture image at any position on the sphere, yields the actual length, width, and position coordinates of each defect.
Length calibration establishes the relation between the actual length of a standard line segment at any position on the sphere and its pixel count in the spherical subaperture image. As shown in FIG. 23, the length calibration data are obtained as follows:
First, a standard line segment dl 1420 is taken on the planar object surface 1250, its length measured with a standard measuring instrument. The segment dl 1420 is imaged by the microscopic scattering dark-field imaging unit 400, giving its image dp 1410 on the imaged subaperture image 1210.
This imaged subaperture image 1210 is then reconstructed into the three-dimensional subaperture image 1220, on which the spherical image dc 1430 of the standard segment is obtained; dc is in pixels, and the corresponding arc angle dθ 1440 is derived from dc 1430. Since the radius of curvature R of the convex spherical optical element 201 is measured precisely during spherical centering, the actual size corresponding to dc 1430 is d = Rdθ. The correspondence between dc and d calibrates the relation between pixel counts in the three-dimensional subaperture image 1220 and actual size, i.e. k = d/dc; substituting d = Rdθ gives k = Rdθ/dc, and since dc = Rpixel·dθ, where Rpixel is the radius of curvature of the reconstructed three-dimensional spherical image in pixels (the pixel radius of curvature for short), we obtain k = R/Rpixel. The calibration coefficient k therefore varies with the radius of curvature R, and calibration must be repeated whenever R changes.
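The relation k = R/Rpixel translates directly into code; a minimal sketch with hypothetical function names:

```python
def calibration_coefficient(radius_mm, radius_pixels):
    """k = R / R_pixel: millimeters of arc per pixel of arc on the
    reconstructed three-dimensional spherical image."""
    return radius_mm / radius_pixels

def pixels_to_mm(arc_pixels, radius_mm, radius_pixels):
    """Actual arc length d = k * dc, following d = R*dtheta and
    dc = R_pixel*dtheta."""
    return calibration_coefficient(radius_mm, radius_pixels) * arc_pixels
```

For example, with R = 50 mm and a pixel radius of curvature of 1000 px, an arc of 200 px corresponds to 10 mm on the element.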
When extracting the length of a surface defect on a given convex spherical optical element 201, the position coordinates of each defect pixel are first obtained by feature extraction. Based on these pixel coordinates, the continuous defect is discretized into n line segments with equations li: yi = kixi + bi, where i = 1, 2, 3...n. Each segment is inverse-projected to obtain the arc Ci corresponding to li on the sphere of radius Rpixel, and the spherical integral formula gives the defect pixel length
L_pixel = Σ_{i=1}^{n} ∫_{Ci} ds,
where ds is the curve element. Substituting the calibration coefficient k then gives the actual defect length
L = k·Σ_{i=1}^{n} ∫_{Ci} ds.
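The length computation can be sketched by approximating each arc Ci with great-circle segments between consecutive defect pixels on the reconstructed sphere and scaling the summed pixel arc length by k. The polyline approximation and the function name are assumptions, standing in for the patent's segment-by-segment inverse projection and spherical integral.

```python
import numpy as np

def defect_length_mm(pixel_points, radius_mm, radius_pixels):
    """Approximate the actual defect length: treat consecutive defect pixels as
    a polyline on the reconstructed sphere of radius R_pixel, sum the
    great-circle arc lengths of its segments, and scale by k = R / R_pixel."""
    pts = np.asarray(pixel_points, dtype=float)  # (n, 3) points with |p| ~ R_pixel
    u = pts / np.linalg.norm(pts, axis=1, keepdims=True)
    # Great-circle arc between consecutive points: R_pixel * angle between them.
    cosang = np.clip(np.sum(u[:-1] * u[1:], axis=1), -1.0, 1.0)
    arc_pixels = radius_pixels * np.arccos(cosang)
    k = radius_mm / radius_pixels
    return k * arc_pixels.sum()
```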
Width calibration establishes the relation between the actual width of a standard line segment at any position on the sphere and its pixel count in the three-dimensional subaperture image. When the microscopic scattering dark-field imaging unit 400 operates at low magnification, its resolution is too low to calibrate micrometer-scale width values accurately; low-magnification width calibration results are for reference only and cannot serve as evaluation results, so width should be calibrated and evaluated at high magnification. At low magnification, width calibration can follow a method similar to length calibration. At high magnification, since defect widths are on the micrometer scale, the defect lies at the center of the field of view. As shown in FIG. 24, the width calibration data are obtained as follows:
First, a standard line segment is taken on the tangent plane 1250 passing through the origin of the three-dimensional coordinate system; its actual width 1420w is measured with a standard measuring instrument. The segment is imaged by the microscopic scattering dark-field imaging unit 400, and its image on the imaged subaperture image 1210 has a width of 1410w pixels.
Then this imaged subaperture image 1210 is reconstructed into the three-dimensional subaperture image 1220, on which the spherical image of the standard segment is obtained; its arc-length pixel count 1430w along the width direction is the defect-width pixel count.
Because the feature lies at the center of the field of view when the high-magnification image is acquired, the information compression along the imaging optical axis is negligible, so the actual defect width equals the actual width 1420w of the standard segment.
As shown in FIG. 25, the discrete points 1450 relating actual width to width pixel count are fitted piecewise to obtain the best-fit curve, which is the calibration transfer function 1460. With the transfer function 1460, the actual width corresponding to any width pixel count on the sphere can be computed, completing the width calibration.
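The piecewise fit of the width calibration points can be sketched as a piecewise-linear interpolant over the measured (pixel count, actual width) pairs. The patent does not specify the fitting basis, so linear pieces and the class name `TransferFunction` are assumptions.

```python
import bisect

class TransferFunction:
    """Piecewise-linear stand-in for the width calibration transfer function:
    interpolate the actual width from a width pixel count between measured
    calibration points."""

    def __init__(self, pixel_counts, widths):
        pairs = sorted(zip(pixel_counts, widths))
        self.xs = [p for p, _ in pairs]
        self.ys = [w for _, w in pairs]

    def __call__(self, pixels):
        # Locate the bracketing segment; clamp to the end segments so that
        # out-of-range queries extrapolate linearly.
        i = bisect.bisect_right(self.xs, pixels)
        i = min(max(i, 1), len(self.xs) - 1)
        x0, x1 = self.xs[i - 1], self.xs[i]
        y0, y1 = self.ys[i - 1], self.ys[i]
        return y0 + (y1 - y0) * (pixels - x0) / (x1 - x0)
```

Calibration would populate the point list from the standard-segment measurements; evaluation then maps each measured width pixel count straight to micrometers.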
Embodiment 2
Embodiment 2 of the present invention will now be described in detail with reference to FIGS. 26-30.
Embodiment 2 describes the spherical-optical-element surface defect evaluation system and method when evaluating concave spherical optical elements.
The system and method of Embodiment 2 for evaluating concave spherical optical elements are similar to those of Embodiment 1 for evaluating convex spherical optical elements. To avoid confusion and repetition, parts of FIGS. 26-30 that correspond to FIGS. 1-25 use the same reference numbers, and the discussion of Embodiment 2 focuses on its differences from Embodiment 1.
FIG. 26 shows the illumination light path for Embodiment 2. The uniform surface light source 320 emits parallel light, which the spherical-source lens group 330 converts into a converging beam of aperture angle θl, as follows: first, the position of the zoom lens group 332 within the spherical-source lens group 330 is computed from the radius of curvature of the concave spherical optical element 1501, and the zoom lens group 332 is moved to that position; then the parallel light from the uniform surface light source 320 enters the spherical-source lens group 330 and passes in turn through the front fixed lens group 331, the zoom lens group 332, and the rear fixed lens group 333, forming the converging beam of aperture angle θl.
FIG. 27, corresponding to FIG. 26, shows the relation between the radius of curvature of the concave spherical optical element and the aperture angle θl of the spherical light source at an incidence angle γ of 40°. As the radius of curvature increases, the aperture angle θl decreases and the illuminated area of the surface shrinks accordingly; θl is at most 12°. Comparing FIG. 27 with FIG. 5 shows that, for the same radius of curvature, the aperture angle formed when the spherical light source illuminates a concave element is smaller than for a convex element, falls more steeply as the radius of curvature increases, and reaches 0° at a smaller critical radius of curvature.
FIG. 28 shows the structure of the spherical centering unit 600 for Embodiment 2. The centering light path for the concave spherical optical element 1501 is similar to that of the convex spherical optical element 201 in Embodiment 1. The relative positions of the center of curvature 1502 of the concave element and the reticle image 1710 are judged from the position of the crosshairs in the image, as follows:
FIG. 29A, corresponding to FIG. 28, shows the light path when the reticle image 1710a is offset in the Z direction from the center of curvature 1502 of the concave spherical optical element. The incident light passing through the center-of-sphere image then does not coincide with the reflected light, so a blurred crosshair image appears on the CCD 609, as shown in FIG. 29B. FIG. 30A, also corresponding to FIG. 28, shows the light path when the reticle image 1710b is offset in the X and Y directions from the center of curvature 1502. The optical axis 1505 of the concave spherical optical element then does not coincide with the optical axis 615 of the spherical centering unit, and the reflected light, focused by the imaging lens onto the CCD 609, forms a crosshair image that is in focus but not at the center of the field of view, as shown in FIG. 30B. The different states of the crosshair image on the CCD 609 therefore determine the relative position of the center of curvature 1502 of the concave spherical optical element in three-dimensional space. Embodiment 2 thus describes the surface defect evaluation system and method for concave spherical optical elements: the defect evaluation method is the same as in Embodiment 1, but because the surface shape differs from that of a convex spherical optical element, the illumination unit 300 and the spherical centering unit 600 also differ.
Embodiment 3
Embodiment 3 of the present invention will now be described in detail with reference to FIGS. 31-33. Embodiment 3 describes the spherical-optical-element surface defect evaluation system and method when evaluating small-aperture spherical optical elements. As before, to avoid confusion and repetition, parts of FIGS. 31-33 that correspond to FIGS. 1-25 use the same reference numbers, and the discussion focuses on the differences from Embodiment 1.
The small-aperture spherical optical element 1801 discussed in this embodiment has an aperture smaller than both the illumination aperture of the illumination unit 300 and the object-space field of view of the microscopic scattering dark-field imaging unit 400, so imaging a single subaperture at the spherical vertex 1009 (see FIG. 15) captures the full aperture of the element surface. As shown in FIGS. 31-33, the system and method of Embodiment 3 need no scan path planning module, and the image processing module 2000 processes only a single subaperture. The defect evaluation method 1900 is accordingly simpler than in Embodiments 1 and 2, and proceeds as follows.
The defect evaluation method 1900 shown in FIGS. 31 and 32 is implemented with the spherical automatic centering module 900, the image processing module 2000, and the defect calibration module 1400.
It comprises the following steps:
Step 1: automatically center the sphere with the spherical automatic centering module 900;
Step 2: process the subaperture image with the image processing module 2000 and the defect calibration module 1400 to obtain the spherical defect information.
The processing of the subaperture image by the image processing module 2000 and the defect calibration module 1400 in step 2, yielding the spherical defect information, comprises the following steps (see FIG. 33):
2-1. When the small-aperture spherical optical element 1801 is imaged onto the image plane by the microscopic scattering dark-field imaging unit 400, the resulting subaperture image is two-dimensional, so a spherical three-dimensional reconstruction must first be performed to correct the compression, along the imaging optical axis, that the surface defects of the small-aperture spherical optical element 1801 undergo during optical imaging.
2-2. The spherical three-dimensional reconstruction yields a three-dimensional subaperture image. To facilitate feature extraction, the information of the three-dimensional subaperture image is projected onto a two-dimensional plane by single-subaperture projection, producing a single-subaperture projection image.
2-3. Low-magnification feature extraction is performed on the single-subaperture projection image, and the three-dimensional size of each defect is obtained by inverse three-dimensional reconstruction. Finally, the spherical defect calibration data produced by the defect calibration module 1400 are used to measure the actual defect size and to obtain the position coordinates of the defect on the surface of the small-aperture spherical optical element 1801.
2-4. High-magnification inspection of the defects guarantees micrometer-level accuracy. The imaging magnification of the microscopic scattering dark-field imaging unit 400 is first switched to high magnification; then, using the position coordinates from step 2-3, each surface defect is moved to the center of the high-magnification field of view and a high-magnification image is acquired; high-magnification feature extraction follows, and the spherical defect calibration data from the defect calibration module 1400 yield micrometer-level defect evaluation results.
2-5. The defect evaluation results are output as a three-dimensional spherical preview, an electronic report, and a defect position map.

Claims (20)

  1. A surface defect evaluation system for spherical optical elements, comprising a defect imaging subsystem and a control subsystem, the defect imaging subsystem acquiring microscopic scattering dark-field images suitable for digital image processing, and the control subsystem controlling the motion of the components of the defect imaging subsystem to acquire images of the spherical optical element surface; characterized in that the defect imaging subsystem comprises an illumination unit, a microscopic scattering dark-field imaging unit, a spatial pose adjustment unit, and a spherical centering unit; the illumination unit provides the dark-field illumination required for imaging by the microscopic scattering dark-field imaging unit; the microscopic scattering dark-field imaging unit collects and images the light scattered from the element surface; the spatial pose adjustment unit performs five-dimensional adjustment of spatial position and attitude, providing not only three-dimensional translation but also rotation and swinging of the element, so that different positions on the surface can be imaged sharply; the spherical centering unit determines the position of the center of curvature of the spherical optical element; and the motions and adjustments of the illumination unit, the microscopic scattering dark-field imaging unit, the spatial pose adjustment unit, and the spherical centering unit are all driven and controlled by the control subsystem.
  2. The surface defect evaluation system for spherical optical elements of claim 1, characterized in that the illumination unit comprises a spherical light source and a light-source rotating bracket, the spherical light source comprising a uniform surface light source and a spherical-source lens group in which a front fixed lens group, a zoom lens group, and a rear fixed lens group are mounted in sequence, the angle between the optical axis of the spherical-source lens group and the optical axis of the microscopic scattering dark-field imaging unit being the incidence angle γ, which ranges from 25° to 45°.
  3. The surface defect evaluation system for spherical optical elements of claim 2, characterized in that the light-source rotating bracket comprises a top fixing plate, an inner-ring shaft, a worm gear, a worm, a servo motor, a motor mount, a bearing, an outer-ring rotating member, and a light-source fixing bracket; the spherical light source is fixed on the light-source fixing bracket, which is fixed on the outer-ring rotating member; the outer-ring rotating member is movably connected to the inner-ring shaft through the bearing; the worm gear is mounted on the outer-ring rotating member, meshes with the worm, and rotates circularly under the drive of the servo motor; the servo motor, via the motor mount, is fixed together with the inner-ring shaft on the top fixing plate, which is fixed on the Z-axis guide rail; the light-source rotating bracket provides omnidirectional illumination of spherical surface defects.
  4. The surface defect evaluation system for spherical optical elements of claim 3, characterized in that three spherical light sources are distributed evenly around the circumference of the outer-ring rotating member at 120° intervals via light-source fixing brackets.
  5. The surface defect evaluation system for spherical optical elements of claim 3 or 4, characterized in that the light path of the illumination unit is formed as follows: first the position of the zoom lens group within the spherical-source lens group is computed from the radius of curvature of the spherical optical element and the zoom lens group is moved to that position; then the parallel light emitted by the uniform surface light source enters the spherical-source lens group and passes in turn through the front fixed lens group, the zoom lens group, and the rear fixed lens group, forming a converging beam of aperture angle θl.
  6. The surface defect evaluation system for spherical optical elements of claim 1, characterized in that the microscopic scattering dark-field imaging unit performs microscopic dark-field imaging of defects using the scattered light excited when defects on a smooth surface modulate the incident light, giving a dark-field image of the defects; the principle is as follows: light is incident on the spherical optical element surface; when the surface is smooth, the light is reflected according to the law of reflection of geometrical optics and the reflected rays do not enter the microscopic scattering dark-field imaging unit; when a surface defect is present, the incident light is scattered and the scattered rays are received by the microscopic scattering dark-field imaging unit, forming a dark-field image of the defect.
  7. The surface defect evaluation system for spherical optical elements of claim 1, characterized in that the spatial pose adjustment unit comprises an X-axis guide rail, a Y-axis guide rail, a Z-axis guide rail, a self-rotating stage, a swing stage, and a self-centering clamping mechanism; the swing stage comprises an inner plate and an outer plate; the self-centering clamping mechanism is fixed to the rotating shaft of the self-rotating stage, whose base is fixed on the inner plate of the swing stage; the inner and outer plates are movably connected so that the inner plate can swing relative to the outer plate; both plates have U-shaped cross-sections; the lower face of the outer plate of the swing stage is fixed on the worktable of the Y-axis guide rail, the Y-axis guide rail is fixed on the worktable of the X-axis guide rail, and the X-axis and Z-axis guide rails are fixed on the same platform.
  8. The surface defect evaluation system for spherical optical elements of claim 1, characterized in that the spherical centering unit comprises a light source, a source focusing lens group, a reticle, a collimating lens, a beam splitter, an objective, a mirror, an imaging lens, and a CCD; light from the source passes through the source focusing lens group and illuminates the reticle, on which crosshairs are engraved; the light then passes through the collimating lens into the beam splitter, is transmitted by the beam splitter, passes through the objective onto the spherical optical element, and is reflected from its surface, the image of the reticle crosshairs at this point being the reticle image; the reflected light passes back through the objective into the beam splitter, where it is reflected; it is then reflected by the mirror and finally focused by the imaging lens onto the CCD, imaging the reticle crosshairs on the CCD.
  9. The surface defect evaluation system for spherical optical elements of claim 1, characterized in that the control subsystem comprises a centering control module, an illumination control module, a five-dimensional motion control module, and an image acquisition control module; the centering control module comprises a centering image acquisition unit and a four-dimensional motion control unit; the centering image acquisition unit controls the CCD of the spherical centering unit to acquire crosshair images; the four-dimensional motion control unit controls the motion of the X-, Y-, and Z-axis guide rails and the rotation of the self-rotating stage during centering; the illumination control module comprises a light-source rotation control unit, which controls rotation of the light-source rotating bracket in the illumination unit, and a light-source zoom control unit, which drives the zoom lens group to change the aperture angle θl of the outgoing converging beam; the five-dimensional motion control module controls the motion of the X-, Y-, and Z-axis guide rails, the rotation of the self-rotating stage, and the swinging of the swing stage during defect inspection; the image acquisition control module comprises a subaperture image acquisition unit, which controls the microscopic scattering dark-field imaging unit to acquire subaperture images, and a microscope magnification control unit, which changes the imaging magnification of the microscopic scattering dark-field imaging unit.
  10. An evaluation method for the surface defect evaluation system for spherical optical elements of claim 1, characterized in that the method involves a spherical automatic centering module, a scan path planning module, an image processing module, and a defect calibration module; the spherical automatic centering module automatically centers the spherical optical element, precisely measuring its radius of curvature and aligning the self-rotation axis with the optical axis of the spherical optical element; the scan path planning module plans the optimal spherical scan path; the image processing module performs high-accuracy detection of spherical surface defects; the defect calibration module establishes the relation between pixel counts and actual size for subaperture images at any position on the sphere, yielding the actual size of defects; the method comprises the following steps:
    Step 1: automatically center the sphere with the spherical automatic centering module;
    Step 2: plan the optimal spherical scan path with the scan path planning module and complete the full-aperture scan of the sphere;
    Step 3: process the subaperture images with the image processing module and the defect calibration module to obtain the spherical defect information.
  11. The evaluation method of claim 10 for the surface defect evaluation system for spherical optical elements, characterized in that the automatic centering of the sphere by the spherical automatic centering module in step 1 comprises the following steps:
    1-1. Initialize the spherical centering unit;
    1-2. Move the spherical optical element to the initial position, i.e. the position where the optical axis of the spherical optical element roughly coincides with the optical axis of the spherical centering unit;
    1-3. Drive the Z-axis guide rail to scan along the Z direction and, during the scan, find the sharpest crosshair image using an image-entropy sharpness evaluation function;
    1-4. Determine whether the crosshair image is a surface image or a center-of-sphere image, as follows:
    finely adjust the X- and Y-axis guide rails and observe whether the crosshairs in the field of view move with the rails; if they do, the crosshair image coincides with the center of the spherical optical element, is called the center-of-sphere image of the spherical optical element, and the method jumps to step 1-5; otherwise the crosshair image coincides with the surface of the spherical optical element, is called the surface image, and the method jumps to step 1-9;
    1-5. Move the X- and Y-axis guide rails to bring the crosshair image to the center of the field of view, so that the optical axis of the spherical optical element coincides with the optical axis of the spherical centering unit;
    1-6. Measure the position of the self-rotation axis by the rotation measurement method used in optical alignment, as follows:
    a self-rotating stage mounted below the self-centering clamping mechanism allows the spherical optical element to self-rotate; after each 30° rotation of the stage, the CCD acquires a crosshair image; as the spin angle varies, the position of the crosshair image on the CCD field of view also varies, tracing roughly a circle whose center is the position of the self-rotation axis;
    1-7. Fit the trajectory of the crosshair-image centers by least-squares best-circle fitting to obtain the center of the trajectory; compute the distance from each crosshair-image center to that circle center, completing the computation of the maximum deviation of the crosshair images;
    1-8. Judge the maximum deviation: if it is within the permitted error, the axis-consistency adjustment is complete; if it exceeds the maximum permitted error, the optical axis of the spherical optical element does not coincide with the self-rotation axis, in which case the self-centering clamping mechanism is first adjusted to move the crosshair-image center to the trajectory-circle center, and the method then jumps to step 1-5;
    1-9. Move the Z-axis guide rail to the theoretical center-of-curvature position obtained during initialization, then drive it to scan along the Z direction, find the sharpest crosshair image during the scan, and jump to step 1-5; at the same time record the distance the Z axis travels from the surface image to the center-of-sphere image, which is the radius of curvature of the spherical optical element, i.e. the distance moved by the Z axis.
  12. The evaluation method of claim 10 for the surface defect evaluation system for spherical optical elements, characterized in that the planning of the optimal spherical scan path by the scan path planning module in step 2, completing the full-aperture scan of the sphere, comprises the following steps:
    2-1. Starting from the reference position obtained by the axis-consistency adjustment of step 1, the spatial pose adjustment unit moves the spherical optical element to directly below the microscopic scattering dark-field imaging unit, which acquires a subaperture image at the spherical vertex; a spherical coordinate system XsYsZs is defined whose origin Os is the center of curvature of the spherical optical element and whose Zs axis passes through the spherical vertex; to sample the entire surface under test, a combined two-dimensional motion of swinging about the Xs axis and rotating about the Zs axis is required, following a latitude-longitude scan trajectory;
    2-2. The spherical optical element swings about the Xs axis by angle β1 and a subaperture image is acquired on the meridian; it then spins about the Zs axis by angle α1 and a subaperture image is acquired on the latitude line;
    2-3. After each spin about the Zs axis by the same angle α1, a subaperture image is acquired on the latitude line, yielding multiple subaperture images;
    2-4. Once image acquisition on the latitude line is complete, the spherical optical element again swings about the Xs axis by angle β2 and a subaperture image is acquired on the meridian;
    2-5. After each spin about the Zs axis by the same angle α2, a subaperture image is acquired on the latitude line, yielding multiple subaperture images; the spherical optical element again swings about the Xs axis by β2 to acquire the multiple subaperture images on the next latitude layer, and so on until the sampling covering the full surface is complete.
  13. The evaluation method of claim 10 for the surface defect evaluation system for spherical optical elements, wherein the planning of the optimal spherical scan path by the scan path planning module in step 2, completing the full-aperture scan of the sphere, is characterized in that a spherical subaperture planning model is first established, in which subaperture images A and B are two adjacent subaperture images acquired on meridian C, subaperture image Aa is the image adjacent to A on its latitude line D1, and subaperture image Bb is the image adjacent to B on its latitude line D2; the intersection of subaperture images Aa and A at the bottom is denoted Pcd, and the intersection of subaperture images B and Bb at the top is denoted Pcu; a sufficient condition for gap-free subaperture inspection is that the arc through Pcu be no longer than the arc through Pcd; under this constraint, the correspondence between the swing angles β1, β2 and the spin angles α1, α2 is established and solved to obtain the planning result, the swing angles β1, β2 and spin angles α1, α2 being solved as follows:
    ① confirm the relevant parameters of the spherical optical element: its radius of curvature, its aperture, and the object-space field of view of the microscopic scattering dark-field imaging unit;
    ② from these three parameters, set the initial swing angles β1 and β2, then compute the spin angles α1 and α2 from the requirement that adjacent subaperture images on the same latitude line overlap by a consistent amount; then compute the two arc lengths;
    ③ compare the arc through Pcu with the arc through Pcd to decide whether the value of β2 is suitable: if the former exceeds the latter, reduce β2 by 5% and return to step ②; if the former is less than or equal to the latter, the subaperture planning covering the full surface is complete.
  14. The evaluation method of claim 10 for the surface defect evaluation system for spherical optical elements, characterized in that the processing of the subaperture images by the image processing module and the defect calibration module in step 3, yielding the spherical defect information, comprises the following steps:
    3-1. When the spherical optical element is imaged onto the image plane by the microscopic scattering dark-field imaging unit, the resulting subaperture image is two-dimensional; because optical imaging compresses information along the imaging optical axis, a spherical three-dimensional reconstruction is first performed to correct the compression, along the imaging optical axis, that the surface defects of the spherical optical element undergo during optical imaging; the spherical three-dimensional reconstruction simplifies the imaging process of the microscopic scattering dark-field imaging unit to a pinhole imaging model and then uses geometric relations to reconstruct the imaged subaperture image into a three-dimensional subaperture image;
    3-2. The spherical three-dimensional reconstruction yields a three-dimensional subaperture image; to facilitate feature extraction, the information of the three-dimensional subaperture image is projected onto a two-dimensional plane by full-aperture projection, producing a full-aperture projection image;
    3-3. Low-magnification feature extraction is performed on the full-aperture projection image, and the three-dimensional size of each defect is obtained by inverse three-dimensional reconstruction; finally, the spherical defect calibration data produced by the defect calibration module are used to measure the actual defect size and to obtain the position coordinates of the defect on the spherical optical element surface;
    3-4. High-magnification inspection of the defects guarantees micrometer-level accuracy: the imaging magnification of the microscopic scattering dark-field imaging unit is first switched to high magnification; then, using the position coordinates from step 3-3, each surface defect is moved to the center of the high-magnification field of view and a high-magnification image is acquired; high-magnification feature extraction follows, and the spherical defect calibration data from the defect calibration module yield micrometer-level defect evaluation results;
    3-5. The defect evaluation results are output as a three-dimensional spherical preview, an electronic report, and a defect position map.
  15. The evaluation method of claim 14 for the surface defect evaluation system for spherical optical elements, characterized in that the process of step 3-1 by which the spherical optical element is imaged onto the image plane by the microscopic scattering dark-field imaging unit to obtain the imaged subaperture image is as follows:
    3-1-1. A point p on the surface of the spherical optical element, driven by the spatial pose adjustment unit along the optimal spherical scan path planned by the scan path planning module of step 2, moves to the point p′;
    3-1-2. A subaperture image is acquired with the microscopic scattering dark-field imaging unit at low magnification; p′ is imaged by the microscopic scattering dark-field imaging unit to the image point p″ on the imaged subaperture;
    3-1-3. Through digital image acquisition, the image-plane coordinate system XcYc is converted to the image coordinate system XiYi, yielding the imaged subaperture image; the Xc and Yc axes form the image-plane coordinate system XcYc, whose origin is the intersection of the optical axis of the microscopic scattering dark-field imaging unit with the imaged subaperture image; the Xi and Yi axes form the image coordinate system XiYi, whose origin Oi is the top-left corner of the acquired digital image.
  16. The evaluation method of claim 14 for the surface defect evaluation system for spherical optical elements, characterized in that the acquisition of the full-aperture projection image described in step 3-2 comprises the following steps:
    3-2-1. The reconstructed three-dimensional subaperture image undergoes a spherical-subaperture global coordinate transform, converting it into a spherical subaperture image;
    3-2-2. The spherical subaperture image is projected vertically onto a plane to obtain a projected subaperture image;
    3-2-3. The projected subaperture images obtained by vertical projection of the spherical subaperture images onto the plane are stitched; once the position and size of the defects have been obtained in the plane, an inverse reconstruction is applied, achieving accurate detection of the surface defects of the spherical optical element.
  17. The evaluation method of claim 16 for the surface defect evaluation system for spherical optical elements, characterized in that projected subaperture stitching uses direct stitching within each latitude layer and annular stitching between meridian layers, and proceeds as follows:
    ① denoise the projected subaperture images to remove the effect of background noise on stitching accuracy;
    ② register features in the overlap regions of adjacent denoised projected subaperture images within the same latitude layer;
    ③ stitch the registered adjacent subaperture images within the same latitude layer to obtain a latitude-layer annulus image;
    ④ extract, from the latitude-layer annulus image, the smallest annulus containing all overlap regions;
    ⑤ extract the registration points of the smallest annulus and find the best match position, completing the projected subaperture stitching.
  18. The evaluation method of claim 14 for the surface defect evaluation system for spherical optical elements, characterized in that the low-magnification feature extraction on the full-aperture projection image described in step 3-3, the use of the spherical defect calibration data from the defect calibration module to measure the actual defect size, and finally the inverse three-dimensional reconstruction yielding the true defect size and the position coordinates of the defect on the spherical optical element surface, proceed as follows:
    3-3-1. Feature extraction is performed on the stitched two-dimensional full-aperture image to obtain defect size and position information;
    3-3-2. A three-dimensional inverse projection of each defect yields the pixel counts of the defect's three-dimensional size and position coordinates on the spherical optical element surface;
    3-3-3. The spherical defect calibration data from the defect calibration module convert the pixel counts of the defect's three-dimensional size and position coordinates into actual size and position coordinates.
  19. The evaluation method of claim 14 for the surface defect evaluation system for spherical optical elements, characterized in that the spherical defect calibration data of steps 3-3 and 3-4 comprise defect-length calibration data and defect-width calibration data; length calibration establishes the relation between the actual length of a standard line segment at any position on the sphere and its pixel count in the spherical subaperture image, the length calibration data being obtained as follows:
    first, a standard line segment dl is taken on the planar object surface, its length measured with a standard measuring instrument; dl is imaged by the microscopic scattering dark-field imaging unit, giving its image dp on the imaged subaperture image;
    then this imaged subaperture image is reconstructed into a three-dimensional subaperture image, on which the spherical image dc of the standard segment, in pixels, is obtained, along with the corresponding arc angle dθ derived from dc; since the radius of curvature R of the spherical optical element is measured precisely during spherical centering, the actual size corresponding to dc is d = Rdθ; the correspondence between dc and d calibrates the relation between pixel counts of the three-dimensional subaperture image and actual size, i.e. k = d/dc; substituting d = Rdθ gives k = Rdθ/dc, and since dc = Rpixel·dθ, where Rpixel is the radius of curvature of the reconstructed three-dimensional spherical image (the pixel radius of curvature for short), the calibration coefficient is k = R/Rpixel; when extracting the length of a surface defect on a given spherical optical element, the position coordinates of each defect pixel are first obtained by feature extraction; based on these pixel coordinates, the continuous defect is discretized into n line segments with equations li: yi = kixi + bi, where i = 1, 2, 3...n; each segment is inverse-projected to obtain the arc Ci corresponding to li on the sphere of radius Rpixel, and the spherical integral formula gives the defect pixel length
    L_pixel = Σ_{i=1}^{n} ∫_{Ci} ds,
    where ds is the curve element; substituting the calibration coefficient k then gives the actual defect length
    L = k·Σ_{i=1}^{n} ∫_{Ci} ds.
  20. The evaluation method of claim 19 for the surface defect evaluation system for spherical optical elements, characterized in that the width calibration data are obtained as follows:
    first, a standard line segment is taken on the tangent plane passing through the origin of the three-dimensional coordinate system, its actual width measured with a standard measuring instrument; the segment is imaged by the microscopic scattering dark-field imaging unit, giving its image on the imaged subaperture image;
    then this imaged subaperture image is reconstructed into a three-dimensional subaperture image, on which the spherical image of the standard segment is obtained, its arc-length pixel count along the width direction being the defect-width pixel count; because the feature lies at the center of the field of view when the high-magnification image is acquired, the information compression along the imaging optical axis is negligible, so the actual defect width equals the actual width of the standard segment;
    the discrete points relating the actual defect width to the defect-width pixel count are fitted piecewise to obtain the best-fit curve, which is the calibration transfer function; the transfer function computes the actual width corresponding to any width pixel count on the sphere.
PCT/CN2015/089217 2014-09-18 2015-09-09 Surface defects evaluation system and method for spherical optical components WO2016041456A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/509,159 US10444160B2 (en) 2014-09-18 2015-09-09 Surface defects evaluation system and method for spherical optical components

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201410479580.7 2014-09-18
CN201410479580.7A CN104215646B (zh) 2014-09-18 2014-09-18 Surface defect detection system for large-aperture spherical optical elements and method thereof
CN201510536104.9A CN105157617B (zh) 2015-08-27 2015-08-27 Spherical automatic centering method applied to surface defect detection of spherical optical elements
CN201510535230.2 2015-08-27
CN201510535230.2A CN105092607B (zh) 2015-08-27 2015-08-27 Surface defect evaluation method for spherical optical elements
CN201510536104.9 2015-08-27

Publications (1)

Publication Number Publication Date
WO2016041456A1 true WO2016041456A1 (zh) 2016-03-24

Family

ID=55532549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/089217 WO2016041456A1 (zh) 2014-09-18 2015-09-09 球面光学元件表面缺陷评价系统及其方法

Country Status (2)

Country Link
US (1) US10444160B2 (zh)
WO (1) WO2016041456A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978995A (zh) * 2019-03-28 2019-07-05 湘潭大学 一种含随机非规则多面体孔洞缺陷的脆性材料生成方法
CN110967693A (zh) * 2019-11-06 2020-04-07 西安电子科技大学 一种稳健高效的快速分解投影自动聚焦方法及系统
CN112069451A (zh) * 2020-09-01 2020-12-11 北京理工大学 一种预测平头弹正冲击下球壳变形和贯穿破坏行为的方法
CN112229854A (zh) * 2020-09-03 2021-01-15 中国科学院上海光学精密机械研究所 一种球面光学元件表面缺陷测量装置和测量方法
CN117232790A (zh) * 2023-11-07 2023-12-15 中国科学院长春光学精密机械与物理研究所 基于二维散射实现光学元件表面缺陷的评估方法及系统
CN117710375A (zh) * 2024-02-05 2024-03-15 常州市南方电机有限公司 一种电机转子与定子的对中性检测方法及系统

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857244B (zh) * 2017-11-30 2023-09-01 百度在线网络技术(北京)有限公司 一种手势识别方法、装置、终端设备、存储介质及vr眼镜
JP6512585B1 (ja) * 2017-12-01 2019-05-15 株式会社アセット・ウィッツ 部品外観自動検査装置
EP3773148A2 (en) * 2018-04-11 2021-02-17 Alcon Inc. Automatic xy centering for digital microscope
US10769851B1 (en) * 2018-04-29 2020-09-08 Dustin Kyle Nolen Method for producing a scaled-up solid model of microscopic features of a surface
US11126160B1 (en) * 2018-04-29 2021-09-21 Dustin Kyle Nolen Method for producing a scaled-up solid model of microscopic features of a surface
CN109490313B (zh) * 2018-11-09 2022-08-23 中国科学院光电技术研究所 一种大口径曲面光学元件表面缺陷自动检测装置及方法
CN110006905B (zh) * 2019-01-25 2023-09-15 杭州晶耐科光电技术有限公司 一种线面阵相机结合的大口径超净光滑表面缺陷检测装置
CN111610639A (zh) * 2019-02-26 2020-09-01 弗提图德萨沃有限公司 一种光学镜片装配装置及光机模组的装配方法
CN110426326B (zh) * 2019-08-08 2021-08-03 杭州晶耐科光电技术有限公司 检测与区分光滑表面和亚表面颗粒的激光偏振装置和方法
DE102020107965B3 (de) * 2020-03-23 2021-09-09 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung eingetragener Verein Verfahren zur optischen Bestimmung einer Intensitätsverteilung
CN111899215B (zh) * 2020-06-15 2023-04-25 浙江大学 一种光学元件体缺陷的提取方法
CN111896628B (zh) * 2020-06-30 2023-08-04 洛阳轴承研究所有限公司 一种氮化硅陶瓷球超声波无损检测方法
CN113687628B (zh) * 2021-08-02 2022-11-18 大连理工大学 一种多源几何约束下孔特征自适应加工方法
CN113483668B (zh) * 2021-08-19 2023-03-17 广东亚太新材料科技有限公司 一种碳纤维复合材料产品尺寸检测方法及系统
CN113884505B (zh) * 2021-09-01 2024-04-12 中国科学院上海光学精密机械研究所 球面元件表面缺陷散射探测装置和测量方法
CN113984790A (zh) * 2021-09-28 2022-01-28 歌尔光学科技有限公司 镜片质量检测方法及装置
CN114113150B (zh) * 2021-11-05 2023-10-20 浙江大学 一种小口径球面透镜表面缺陷检测装置和检测方法
CN114199905B (zh) * 2021-12-13 2024-02-20 中国航发南方工业有限公司 一种机匣内部缺陷的空间定位方法及系统
CN115032945B (zh) * 2022-04-28 2023-04-11 大连理工大学 复杂曲面零件慢刀伺服磨削加工刀具轨迹规划方法
US20230408421A1 (en) * 2022-06-16 2023-12-21 Schaeffler Technologies AG & Co. KG Method for defect detection for rolling elements
CN115326804B (zh) * 2022-09-02 2024-05-14 哈尔滨工业大学 一种熔石英元件表面损伤发起与损伤增长自动评价装置与方法
CN116363302B (zh) * 2023-03-06 2024-05-28 郑州大学 一种基于多视角几何的管道三维重建和坑洞量化方法
CN116096066B (zh) * 2023-04-12 2023-06-16 四川易景智能终端有限公司 一种基于物联网的smt贴片质量检测系统
CN116359233B (zh) * 2023-06-02 2023-10-03 东声(苏州)智能科技有限公司 方形电池外观缺陷检测方法、装置、存储介质和电子设备
CN116358842B (zh) * 2023-06-02 2023-08-01 中国科学院长春光学精密机械与物理研究所 基于机械臂的大口径光学元件表面缺陷检测方法及装置
CN116428985B (zh) * 2023-06-13 2023-08-29 江苏京创先进电子科技有限公司 边缘坐标获取方法、晶圆对准识别方法及晶圆环切方法
CN116577931B (zh) * 2023-07-14 2023-09-22 中国科学院长春光学精密机械与物理研究所 基于仪器传递函数的光学元件拼接检测方法
CN116628830B (zh) * 2023-07-24 2023-09-22 中建安装集团西安建设投资有限公司 基于bim技术的定位放线方法及系统
CN116977403B (zh) * 2023-09-20 2023-12-22 山东科技大学 一种基于双目视觉的薄膜生产幅宽检测和控制方法
CN117110319B (zh) * 2023-10-23 2024-01-26 汇鼎智联装备科技(江苏)有限公司 基于3d成像的球体表面缺陷检测方法和检测系统
CN117745716B (zh) * 2024-02-07 2024-04-16 湖南仁盈科技有限公司 一种pcba板缺陷的可视化方法及系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010008303A1 (en) * 2008-07-15 2010-01-21 Anzpac Systems Limited Improved method and apparatus for article inspection
WO2010113232A1 (ja) * 2009-03-31 2010-10-07 株式会社 日立ハイテクノロジーズ 検査方法及び検査装置
CN202229084U (zh) * 2011-08-30 2012-05-23 成都四星液压制造有限公司 一种用于磁瓦在线检测的光源自动调整装置
CN103293162A (zh) * 2013-06-17 2013-09-11 浙江大学 用于球面光学元件表面疵病暗场检测的照明系统及方法
JP2013190252A (ja) * 2012-03-13 2013-09-26 Hitachi High-Technologies Corp 欠陥検査方法及びその装置
CN104215646A (zh) * 2014-09-18 2014-12-17 浙江大学 大口径球面光学元件表面疵病检测系统及其方法
CN204128987U (zh) * 2014-09-18 2015-01-28 浙江大学 大口径球面光学元件表面疵病检测系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010008303A1 (en) * 2008-07-15 2010-01-21 Anzpac Systems Limited Improved method and apparatus for article inspection
WO2010113232A1 (ja) * 2009-03-31 2010-10-07 株式会社 日立ハイテクノロジーズ 検査方法及び検査装置
CN202229084U (zh) * 2011-08-30 2012-05-23 成都四星液压制造有限公司 一种用于磁瓦在线检测的光源自动调整装置
JP2013190252A (ja) * 2012-03-13 2013-09-26 Hitachi High-Technologies Corp 欠陥検査方法及びその装置
CN103293162A (zh) * 2013-06-17 2013-09-11 浙江大学 用于球面光学元件表面疵病暗场检测的照明系统及方法
CN104215646A (zh) * 2014-09-18 2014-12-17 浙江大学 大口径球面光学元件表面疵病检测系统及其方法
CN204128987U (zh) * 2014-09-18 2015-01-28 浙江大学 大口径球面光学元件表面疵病检测系统

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978995A (zh) * 2019-03-28 2019-07-05 湘潭大学 一种含随机非规则多面体孔洞缺陷的脆性材料生成方法
CN110967693A (zh) * 2019-11-06 2020-04-07 西安电子科技大学 一种稳健高效的快速分解投影自动聚焦方法及系统
CN110967693B (zh) * 2019-11-06 2023-07-07 西安电子科技大学 一种稳健高效的快速分解投影自动聚焦方法及系统
CN112069451A (zh) * 2020-09-01 2020-12-11 北京理工大学 一种预测平头弹正冲击下球壳变形和贯穿破坏行为的方法
CN112069451B (zh) * 2020-09-01 2022-11-15 北京理工大学 一种预测平头弹正冲击下球壳变形和贯穿破坏行为的方法
CN112229854A (zh) * 2020-09-03 2021-01-15 中国科学院上海光学精密机械研究所 一种球面光学元件表面缺陷测量装置和测量方法
CN112229854B (zh) * 2020-09-03 2022-10-11 中国科学院上海光学精密机械研究所 一种球面光学元件表面缺陷测量装置和测量方法
CN117232790A (zh) * 2023-11-07 2023-12-15 中国科学院长春光学精密机械与物理研究所 基于二维散射实现光学元件表面缺陷的评估方法及系统
CN117232790B (zh) * 2023-11-07 2024-02-02 中国科学院长春光学精密机械与物理研究所 基于二维散射实现光学元件表面缺陷的评估方法及系统
CN117710375A (zh) * 2024-02-05 2024-03-15 常州市南方电机有限公司 一种电机转子与定子的对中性检测方法及系统
CN117710375B (zh) * 2024-02-05 2024-04-12 常州市南方电机有限公司 一种电机转子与定子的对中性检测方法及系统

Also Published As

Publication number Publication date
US20170292916A1 (en) 2017-10-12
US10444160B2 (en) 2019-10-15

Similar Documents

Publication Publication Date Title
WO2016041456A1 (zh) Surface defects evaluation system and method for spherical optical components
CN107356608B (zh) 大口径熔石英光学元件表面微缺陷快速暗场检测方法
US6580518B2 (en) Confocal microscope and height measurement method using the same
US7982950B2 (en) Measuring system for structures on a substrate for semiconductor manufacture
JP5599314B2 (ja) 試料の検査のための方法および光学装置
JP4774332B2 (ja) 偏芯量測定方法
CN103293162B (zh) 用于球面光学元件表面疵病暗场检测的照明系统及方法
TWI292033B (zh)
CN103411557B (zh) 阵列照明的角谱扫描准共焦环形微结构测量装置与方法
US20080021665A1 (en) Focusing method and apparatus
JP2008298739A (ja) 偏芯量測定装置
US20150276622A1 (en) Method and device for detecting defects and method and device for observing defects
CN110411346A (zh) 一种非球面熔石英元件表面微缺陷快速定位方法
CN112229854B (zh) 一种球面光学元件表面缺陷测量装置和测量方法
CN114486910B (zh) 一种平面光学元件表面疵病检测装置和检测方法
CN116907380A (zh) 基于图像信息的点衍射干涉仪被测镜精确对准方法及系统
CN106841236B (zh) 透射光学元件疵病测试装置及方法
CN103411559B (zh) 基于阵列照明的角谱扫描准共焦微结构测量方法
CN108845406A (zh) 多倍率全自动显微成像方法及装置
WO2021148465A1 (en) Method for outputting a focused image through a microscope
CN112595497A (zh) 一种基于机器视觉的数字化刀口仪检验方法及系统
US11047675B2 (en) Method and apparatus for inspection of spherical surfaces
CN109945803B (zh) 横向相减激光差动共焦柱面曲率半径测量方法
TWI764801B (zh) 用於量測穿通孔之幾何參數的方法及系統
CN110197508A (zh) 2d、3d共融视觉引导运动的方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15841621

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15509159

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15841621

Country of ref document: EP

Kind code of ref document: A1