CN116964425A - Wavefront aberration deriving method, machine learning model generating method, lens optical system manufacturing method, wavefront aberration deriving device, and wavefront aberration deriving program


Info

Publication number
CN116964425A
Authority
CN
China
Prior art keywords
wavefront aberration
machine learning
learning model
resolution performance
lens
Prior art date
Legal status
Pending
Application number
CN202180095463.9A
Other languages
Chinese (zh)
Inventor
早川洋平
西谷守
Current Assignee
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date
Filing date
Publication date
Application filed by Fujifilm Corp filed Critical Fujifilm Corp
Publication of CN116964425A publication Critical patent/CN116964425A/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 11/00 Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M 11/02 Testing optical properties
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 13/00 Optical objectives specially designed for the purposes specified below
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning


Abstract

The invention provides a wavefront aberration deriving method, device, and program capable of simply deriving wavefront aberration, a method of generating the machine learning model used in the wavefront aberration deriving method, and a method of manufacturing a lens optical system using the wavefront aberration deriving method. A processor (51) executes: step 1, acquiring resolution performance information of a lens device (1); and step 2, inputting the resolution performance information into a machine learning model (52) and obtaining the wavefront aberration of the lens device (1) from the machine learning model (52).

Description

Wavefront aberration deriving method, machine learning model generating method, lens optical system manufacturing method, wavefront aberration deriving device, and wavefront aberration deriving program
Technical Field
The present invention relates to a wavefront aberration derivation method, a machine learning model generation method, a lens optical system manufacturing method, a wavefront aberration derivation device, and a wavefront aberration derivation program.
Background
Patent document 1 describes the following method: the CTF (Contrast Transfer Function: contrast transfer function) is obtained as a performance value of the lens optical system, the performance value is input to a neural network, and the movement adjustment amount of the lens to be adjusted included in the lens optical system is obtained by the neural network.
Patent document 2 describes a method of analyzing and optimizing a target design based on the relationship of the Zernike sensitivity of pattern placement errors between target candidates and the device design.
Patent document 3 describes a method for determining the imaging quality of an optical imaging system whose main configuration groups are an illumination system including a light source, a sample holder for accommodating a sample, an imaging optical system, and at least one point-resolving detection device.
Patent document 4 describes an image analysis device provided with: an image acquisition unit that acquires an image of reflected light of a measurement pattern projected onto an eye to be inspected; a ring image detection unit that detects a ring image based on reflected light from within the captured image acquired by the image acquisition unit; and a higher-order aberration detection unit that detects, from the ring image detected by the ring image detection unit, a specific higher-order aberration of the eye to be inspected, which causes distortion of the ring image.
Patent document 5 describes a method for objectively obtaining a refraction value of a patient's eye, comprising the steps of: obtaining, with a sensor, at least diagnostic measurement data of the patient's eye representative of fourth-order Zernike wavefront aberration; fitting a second-order-only polynomial to the wavefront data describing the aberrated wavefront; and calculating the dominant refraction value from the second-order polynomial fit.
Patent document 6 describes a method of calculating wavefront aberration from an interference fringe image measured by an interferometer.
Patent document 7 describes a measurement method comprising the steps of: measuring a transmitted wavefront of the transmitted light transmitted through the optical system under test; acquiring each aberration component of wavefront aberration of the optical system to be inspected from a transmitted wavefront of the transmitted light; acquiring sensitivity of each aberration component of the wavefront aberration that changes when the position of the optical element of the optical system to be inspected is shifted; acquiring coefficient information for improving correlation between an evaluation value related to imaging performance of the optical system to be inspected and each aberration component of the wavefront aberration; and determining a positional shift amount of an optical element of the optical system to be inspected using each aberration component of the wavefront aberration, the sensitivity of each aberration component, and the coefficient information.
Technical literature of the prior art
Patent literature
Patent document 1: japanese patent laid-open No. 2008-170981
Patent document 2: japanese patent application laid-open No. 2018-514807
Patent document 3: japanese patent application laid-open No. 2006-527353
Patent document 4: japanese patent laid-open No. 2020-081450
Patent document 5: japanese patent application laid-open No. 2004-528130
Patent document 6: japanese patent laid-open No. 6-235619
Patent document 7: japanese patent application laid-open No. 2019-191121
Disclosure of Invention
Technical problem to be solved by the invention
Measuring wavefront aberration typically requires large-scale equipment or considerable time, so the measurement work is not easy. None of the above conventional techniques describes a method for simply deriving wavefront aberration.
The present invention provides a wavefront aberration deriving method, a device, and a program capable of easily deriving a wavefront aberration, a method for generating a machine learning model used in the wavefront aberration deriving method, and a method for manufacturing a lens optical system using the wavefront aberration deriving method.
Means for solving the technical problems
A wavefront aberration deriving method according to an aspect of the present invention is a wavefront aberration deriving method for deriving a wavefront aberration of a lens optical system, including: step 1, obtaining resolution performance information of the lens optical system; and step 2, inputting the resolution performance information into a machine learning model, and obtaining the wavefront aberration of the lens optical system from the machine learning model.
In a method for generating a machine learning model according to one aspect of the present invention, a 1st simulation result of resolution performance information of a plurality of virtual optical systems, obtained by giving assumed errors of a plurality of patterns to optical elements of a lens optical system, is acquired; a 2nd simulation result of the wavefront aberrations of the plurality of virtual optical systems is acquired; and a machine learning model is generated by machine-learning the correspondence between resolution performance information and wavefront aberration from the 1st simulation result and the 2nd simulation result.
A method for manufacturing a lens optical system according to an aspect of the present invention is a method for manufacturing a lens optical system having a plurality of optical elements including an optical element to be adjusted, including the steps of: acquiring resolution performance information of the lens optical system; inputting the resolution performance information into a machine learning model, and obtaining wavefront aberration of the lens optical system from the machine learning model; and setting a position of the adjusted optical element based on the wavefront aberration.
A wavefront aberration deriving device according to an aspect of the present invention is a wavefront aberration deriving device for deriving a wavefront aberration of a lens optical system, and includes a processor for performing: the method includes the steps of acquiring resolution performance information of the lens optical system, inputting the resolution performance information into a machine learning model, and obtaining wavefront aberration of the lens optical system from the machine learning model.
A wavefront aberration derivation program according to an aspect of the present invention is a wavefront aberration derivation program for deriving a wavefront aberration of a lens optical system, the program causing a computer to execute: step 1, obtaining resolution performance information of the lens optical system; and step 2, inputting the resolution performance information into a machine learning model, and obtaining the wavefront aberration of the lens optical system from the machine learning model.
Effects of the invention
According to the present invention, wavefront aberration can be simply derived.
Drawings
Fig. 1 is a schematic diagram showing an example of a lens apparatus 1 manufactured by a computer 5 as an embodiment of a wavefront aberration deriving apparatus according to the present invention.
Fig. 2 is a schematic diagram showing a schematic configuration of the position adjustment system 100 incorporated in the manufacturing system of the lens apparatus 1.
Fig. 3 is a schematic view of the resolution chart 2 shown in fig. 2 as viewed in the optical axis direction Z from the lens apparatus 1 side.
Fig. 4 is a schematic diagram showing an example of information acquired by the processor 51 shown in fig. 2.
Fig. 5 is a schematic diagram showing an example of the MTF data group Gpx1 shown in fig. 4.
Fig. 6 is a schematic diagram showing an example of teacher data used for generating the machine learning model 52.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
Fig. 1 is a schematic diagram showing an example of a lens apparatus 1 manufactured by a computer 5 as an embodiment of a wavefront aberration deriving apparatus according to the present invention. The lens device 1 is used for an imaging device such as a digital camera, a projection device such as a projector, and the like, and constitutes a lens optical system. Hereinafter, the lens apparatus 1 will be described with reference to an imaging apparatus.
The lens apparatus 1 includes a plurality of optical elements (4 optical elements, i.e., a first lens 11, a second lens 12, a third lens 13, and a fourth lens 14 in the example of fig. 1) provided in a lens barrel 10. The first lens 11, the second lens 12, the third lens 13, and the fourth lens 14 are arranged in this order from the object side along the optical axis K of the lens apparatus 1. The direction in which the optical axis K extends is referred to as an optical axis direction Z. The second lens 12 constitutes an adjusted optical element.
Among the first lens 11, the second lens 12, the third lens 13, and the fourth lens 14, for example, the fourth lens 14 is a focusing lens. By the movement of the fourth lens 14 in the optical axis direction Z, the imaging position of the object based on the lens apparatus 1 can be changed.
The lens device 1 is manufactured, for example, as follows. The first lens 11, the second lens 12, the third lens 13, and the fourth lens 14 are arranged in the lens barrel 10 according to the design values, and the first lens 11, the third lens 13, and the fourth lens 14 are fixed in the lens barrel 10. The second lens 12 is temporarily fixed to the lens barrel 10 so that its position can still be adjusted (including the inclination angle of its optical axis and its position in the optical axis direction). The lens device 1 in the state where the first lens 11, the third lens 13, and the fourth lens 14 are fixed in the lens barrel 10 and the second lens 12 is temporarily fixed is hereinafter referred to as the lens device 1 before adjustment.
When the lens device 1 before adjustment is completed, the image pickup element 3 described later captures an image of a subject (the resolution chart 2 described later) through the lens device 1 before adjustment. Specifically, while the fourth lens 14 is moved in the optical axis direction Z within a predetermined range, the resolution chart 2 is captured by the image pickup element 3 at each moving position. In this way, the distance (in other words, the defocus amount) between the imaging position of the resolution chart 2 formed by the lens device 1 before adjustment and the light receiving surface of the image pickup element 3 is changed to a plurality of values, and a plurality of captured images of the resolution chart 2 are obtained. The wavefront aberration of the lens device 1 before adjustment is derived from these captured images and a machine learning model 52 described later. Then, the position of the temporarily fixed second lens 12 is set so that the derived wavefront aberration reaches a desired state, and the second lens 12 is adjusted to the set position. After the adjustment is completed, the second lens 12 is permanently fixed in the lens barrel 10, completing the lens device 1.
Fig. 2 is a schematic diagram showing the schematic configuration of the position adjustment system 100 incorporated in the manufacturing system of the lens device 1. The position adjustment system 100 includes the resolution chart 2, the lens device 1 before adjustment, an imaging element 3 such as a CMOS (Complementary Metal Oxide Semiconductor) image sensor or a CCD (Charge Coupled Device) image sensor, a focus driving device 4 that drives the focus lens (fourth lens 14) of the lens device 1, a computer 5 that constitutes the wavefront aberration deriving device, and an adjustment device 6 that adjusts the position of the second lens 12 of the lens device 1 before adjustment.
Fig. 3 is a view of the resolution chart 2 shown in Fig. 2 as viewed in the optical axis direction Z from the lens device 1 side. The resolution chart 2 is a rectangular plane perpendicular to the optical axis direction Z; the direction X coincides with the long-side direction of the light receiving surface of the imaging element 3, and the direction Y perpendicular to the direction X coincides with the short-side direction of the light receiving surface. The directions X and Y are each perpendicular to the optical axis direction Z.
In the resolution chart 2, a first pattern 21 is formed in the center portion intersecting the optical axis K of the lens device 1 before adjustment. As shown in the enlarged view in Fig. 3, the first pattern 21 includes a pattern PY1, in which a plurality of lines extending in the direction Y are arranged in the direction X, and a pattern PX1, in which a plurality of lines extending in the direction X are arranged in the direction Y.
In the resolution chart 2, the second pattern 22 is formed at a predetermined position on the circumference of the virtual circle 24 centered on the point intersecting the optical axis K of the lens device 1 before adjustment. A plurality of second patterns 22 may be formed on the circumference of the virtual circle 24. As shown in the enlarged view of fig. 3, the second pattern 22 includes a pattern PY2 in which a plurality of lines extending in the direction Y are arranged in the direction X and a pattern PX2 in which a plurality of lines extending in the direction X are arranged in the direction Y.
In the present specification, in an image of the resolution chart 2 captured by the imaging element 3, the center position intersecting the optical axis K is defined as the reference image height, and a position on the circumference of a circle centered on that center position is defined as an image height whose value corresponds to the radius of the circle. Therefore, the second pattern 22, located on the circumference of the virtual circle 24 in the captured image of the resolution chart 2 shown in Fig. 3, and the first pattern 21, located at the center of the virtual circle 24, can be said to be patterns of different image heights.
The lens device 1 before adjustment is disposed between the resolution chart 2 and the imaging element 3. The imaging element 3 and the focus driving device 4 are incorporated in, for example, an inspection digital camera body to which the lens device 1 can be attached, and the digital camera body is connected to the computer 5.
The computer 5 includes the processor 51, the machine learning model 52, and memories (not shown) such as a RAM (Random Access Memory) and a ROM (Read Only Memory). The machine learning model 52 is stored in, for example, the ROM. The machine learning model 52 may instead be stored in an external device provided outside the computer 5; in that case, the processor 51 is configured to be able to access the external device.
The processor 51 may be a general-purpose processor that executes a program to perform various processes, such as a CPU (Central Processing Unit); a processor whose circuit configuration can be changed after manufacture, such as an FPGA (Field Programmable Gate Array) or other programmable logic device (PLD); or a dedicated circuit having a circuit configuration designed exclusively for executing a specific process, such as an ASIC (Application Specific Integrated Circuit). More specifically, the processor 51 is configured as a circuit in which circuit elements such as semiconductor elements are combined.
The processor 51 can derive the wavefront aberration of the lens apparatus 1 before adjustment by executing the wavefront aberration derivation program stored in the ROM. The wavefront aberration is derived as follows.
First, the processor 51 controls the digital camera body to move the fourth lens 14 to a plurality of positions in the optical axis direction Z, and with the fourth lens 14 at each position (in other words, in states where the relative position between the imaging position of the resolution chart 2 formed by the lens device 1 before adjustment and the imaging element 3 takes a plurality of values), the resolution chart 2 is captured by the imaging element 3 through the lens device 1 before adjustment.
In Fig. 2, the fourth lens 14 is moved in the optical axis direction Z to change the relative position between the imaging position of the lens device 1 before adjustment and the imaging element 3, but the present invention is not limited to this. For example, the relative position can be changed by fixing the fourth lens 14 of the lens device 1 before adjustment at a predetermined position, fixing the position of the resolution chart 2, and moving the imaging element 3 in the optical axis direction Z. Alternatively, the relative position can be changed by fixing the fourth lens 14 at a predetermined position, fixing the position of the imaging element 3, and moving the resolution chart 2 in the optical axis direction Z.
When the captured images of the resolution chart 2 (hereinafter referred to as the captured image group), taken by the imaging element 3 with the fourth lens 14 at each position, are acquired from the digital camera body, the processor 51 derives the wavefront aberration of the lens device 1 before adjustment from the captured image group and the machine learning model 52.
Specifically, the processor 51 derives an MTF (Modulation Transfer Function) value, which is a resolution performance value of the lens device 1 corresponding to the pattern PX1, from the image of the pattern PX1 included in each of the captured images constituting the captured image group. The MTF value is a spatial frequency characteristic indicating how faithfully the contrast of the evaluated pattern is reproduced. The set of MTF values corresponding to the pattern PX1 derived here is referred to as the MTF data group Gpx1.
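As an illustration of how such an MTF value might be computed from one pattern image, the following is a minimal sketch in Python, assuming a hypothetical grayscale crop "patch" containing only the bar pattern; it approximates the MTF value as the Michelson contrast of the captured pattern relative to the chart contrast (the patent does not prescribe a specific computation).

    import numpy as np

    def mtf_from_line_pattern(patch, chart_contrast=1.0):
        # patch: 2-D grayscale crop of one bar pattern (hypothetical input),
        # with the bars running perpendicular to the axis under evaluation.
        # Averaging across the bars suppresses noise, leaving a 1-D profile.
        profile = patch.mean(axis=0).astype(np.float64)
        i_max, i_min = profile.max(), profile.min()
        measured = (i_max - i_min) / (i_max + i_min)  # Michelson contrast
        # Normalizing by the chart's own contrast gives an MTF-like value.
        return measured / chart_contrast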
The processor 51 derives an MTF value corresponding to the pattern PY1 from the image of the pattern PY1 included in each of the captured images constituting the captured image group. The set of MTF values corresponding to the pattern PY1 derived here is referred to as the MTF data group Gpy1.
The processor 51 derives an MTF value corresponding to the pattern PX2 from the image of the pattern PX2 included in each of the captured images constituting the captured image group. The set of MTF values corresponding to the pattern PX2 derived here is referred to as the MTF data group Gpx2.
The processor 51 derives an MTF value corresponding to the pattern PY2 from the image of the pattern PY2 included in each of the captured images constituting the captured image group. The set of MTF values corresponding to the pattern PY2 derived here is referred to as the MTF data group Gpy2.
The processor 51 then derives feature quantities of the MTF data groups Gpx1, Gpy1, Gpx2, and Gpy2. Each of these MTF data groups is a set of resolution performance values of the lens device 1 before adjustment. The feature quantity is resolution performance information, derived from such a set, that indicates the resolution performance of the lens device 1. Fig. 4 summarizes the information acquired from the captured image group.
In this example, the spatial frequencies of the patterns PX1, PY1, PX2, and PY2 are the same, but each of the patterns PX1, PY1, PX2, and PY2 may instead be provided in two versions: a high-frequency pattern with a high spatial frequency and a low-frequency pattern with a low spatial frequency. In that case, the MTF data groups Gpx1, Gpy1, Gpx2, and Gpy2 are generated from the captured image group for each of the two frequencies, that is, feature quantities for eight patterns are generated.
Hereinafter, the method of deriving the feature quantities of the MTF data groups Gpx1, Gpy1, Gpx2, and Gpy2 is described taking the MTF data group Gpx1 as an example. The derivation for the MTF data groups Gpy1, Gpx2, and Gpy2 is the same as for the MTF data group Gpx1, so its description is omitted. Two kinds of feature quantities are derived: a first feature quantity and a second feature quantity.
Fig. 5 is a diagram showing an example of the MTF data group Gpx1. The horizontal axis of Fig. 5 indicates the position of the fourth lens 14 (the focus lens position), in other words, the relative position between the imaging position of the lens device 1 before adjustment and the imaging element 3. The vertical axis of Fig. 5 represents the MTF value corresponding to the pattern PX1 obtained in the image captured at each focus lens position. The white circles in Fig. 5 represent the MTF values corresponding to the focus lens positions.
(method of deriving first feature quantity)
The processor 51 obtains the local maxima and minima of the MTF data group Gpx1 from its differential values. Fig. 5 shows the minima B1, B2, B3, and B4 and the maxima P1, P2, and P3 obtained from the differential values. The processor 51 uses the maxima P1, P2, and P3 as first feature quantities of the MTF data group Gpx1.
The MTF data group Gpx1 can be divided into a first group, a second group, and a third group according to the minimum values B1, B2, B3, and B4 derived from the differentiation. The first group is a group constituted by MTF values corresponding to respective focus lens positions from the focus lens position corresponding to the minimum value B1 to the focus lens position corresponding to the minimum value B2. The second group is a group constituted by MTF values corresponding to respective focus lens positions from the focus lens position corresponding to the minimum value B2 to the focus lens position corresponding to the minimum value B3. The third group is a group constituted by MTF values corresponding to respective focus lens positions from the focus lens position corresponding to the minimum value B3 to the focus lens position corresponding to the minimum value B4. Therefore, the maximum value P1 becomes the first feature quantity of the first group, the maximum value P2 becomes the first feature quantity of the second group, and the maximum value P3 becomes the first feature quantity of the third group.
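A minimal sketch of this extremum search and grouping, assuming the through-focus MTF values are collected in a NumPy array; scipy.signal.argrelextrema stands in for the differentiation-based search, and treating the endpoints of the focus scan as the outer group boundaries is an added assumption.

    import numpy as np
    from scipy.signal import argrelextrema

    def split_mtf_groups(mtf):
        # Indices of local maxima (P1, P2, P3) and interior minima (B2, B3),
        # found from sign changes of the derivative, as argrelextrema does.
        maxima = argrelextrema(mtf, np.greater)[0]
        minima = argrelextrema(mtf, np.less)[0].tolist()
        # Assume the scan endpoints act as the outermost minima (B1, B4).
        bounds = [0] + minima + [len(mtf) - 1]
        groups = [mtf[bounds[i]:bounds[i + 1] + 1]
                  for i in range(len(bounds) - 1)]
        first_features = mtf[maxima]  # the maximum of each group
        return groups, first_features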
The processor 51 calculates the average value of the MTF values included in each of the first to third groups, and takes them as the first feature quantity. The processor 51 may calculate an integrated value by considering each group as a histogram, and derive an MTF value that divides the integrated value into two halves as an average value for each group.
The processor 51 calculates variances of MTF values included in each of the first to third groups and takes them as the first feature quantity.
The processor 51 calculates standard deviations of the respective groups from the variances of the respective groups from the first group to the third group, and uses them as the first feature quantity.
The processor 51 calculates the skewness and kurtosis of each group from the mean, the standard deviation, and the moment generating function of each of the first to third groups, and uses them as first feature quantities.
The moment generating function is a statistic expressed as a function of the mean and the standard deviation. Therefore, once the mean and standard deviation of each of the first to third groups are obtained, the 3rd-order moment representing the skewness and the 4th-order moment representing the kurtosis of each group can be calculated using the moment generating function. Example expressions are shown below, where E(.) denotes the expected value of the variable X (the MTF value), mu the mean of X, sigma the standard deviation of X (the square of the standard deviation may also be used instead of sigma), and r the order.

Moment generating function: alpha_r = E(((X - mu)/sigma)^r)
3rd-order moment (skewness): alpha_3 = E(((X - mu)/sigma)^3)
4th-order moment (kurtosis): alpha_4 = E(((X - mu)/sigma)^4)
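These standardized moments translate directly into code; a minimal sketch, where x is the array of MTF values in one group:

    import numpy as np

    def standardized_moment(x, r):
        # alpha_r = E[((X - mu) / sigma)^r]; r = 3 gives the skewness,
        # r = 4 gives the kurtosis. Population statistics (ddof = 0).
        mu, sigma = x.mean(), x.std()
        return np.mean(((x - mu) / sigma) ** r)

    # skewness = standardized_moment(group, 3)
    # kurtosis = standardized_moment(group, 4)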
(method of deriving the second feature quantity)
The processor 51 approximates the MTF data group Gpx1 with at least one Gaussian function, the number of which (three in the example of Fig. 5) is determined from the design data of the lens device 1. Specifically, the processor 51 determines the parameters of each Gaussian function by an optimization method such as the Levenberg-Marquardt method so that the residual between the Gaussian functions and the MTF data group Gpx1 is minimized. Fig. 5 shows the curves of the Gaussian functions G1, G2, and G3 with the determined parameters applied. The processor 51 derives, as second feature quantities, the maximum value of the Gaussian function G1 and the focus lens position at that maximum, the maximum value of the Gaussian function G2 and the focus lens position at that maximum, and the maximum value of the Gaussian function G3 and the focus lens position at that maximum.
By this Gaussian approximation, the MTF data group Gpx1 is decomposed into a fourth group represented by the Gaussian function G1, a fifth group represented by the Gaussian function G2, and a sixth group represented by the Gaussian function G3.
The mean and standard deviation of the MTF values included in the fourth group are contained in the parameters of the Gaussian function G1. Therefore, the processor 51 uses the mean and standard deviation contained in the parameters of the Gaussian function G1 as second feature quantities. Similarly, the processor 51 uses the mean and standard deviation contained in the parameters of the Gaussian function G2, and those of the Gaussian function G3, as second feature quantities. The variances of the fourth to sixth groups may also be derived by squaring the standard deviations and used as second feature quantities.
The processor 51 calculates the skewness and kurtosis of each group from the mean, the standard deviation, and the moment generating function of each of the fourth to sixth groups, and uses them as second feature quantities.
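A sketch of this second-feature derivation under the assumption of exactly three Gaussian components, using scipy.optimize.curve_fit, whose method="lm" option is the Levenberg-Marquardt algorithm; the initial guess p0 would come from the lens design data, as described above.

    import numpy as np
    from scipy.optimize import curve_fit

    def three_gaussians(z, a1, m1, s1, a2, m2, s2, a3, m3, s3):
        # Sum of the Gaussian functions G1 + G2 + G3 over focus position z.
        g = lambda a, m, s: a * np.exp(-0.5 * ((z - m) / s) ** 2)
        return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

    def fit_second_features(z, mtf, p0):
        # Levenberg-Marquardt minimizes the residual between the Gaussian
        # sum and the measured MTF data group.
        params, _ = curve_fit(three_gaussians, z, mtf, p0=p0, method="lm")
        params = params.reshape(3, 3)  # rows: (peak value, mean, sigma)
        # Second feature quantities per group: the maximum of the Gaussian
        # (its amplitude), the focus position of that maximum (its mean),
        # and the standard deviation.
        return [(a, m, abs(s)) for a, m, s in params]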
Through the above derivations, the feature quantity F_px1 of the MTF data group Gpx1 shown in Fig. 4 is information consisting of, for each of the plurality of groups constituting the MTF data group Gpx1, the maximum value (or local maximum value), the focus lens position corresponding to that maximum, and the mean, variance, standard deviation, skewness, and kurtosis.
Likewise, the feature quantity F_py1 of the MTF data group Gpy1 shown in Fig. 4 is information consisting of, for each of the plurality of groups constituting the MTF data group Gpy1, the maximum value (or local maximum value), the focus lens position corresponding to that maximum, and the mean, variance, standard deviation, skewness, and kurtosis.
The feature quantity F_px2 of the MTF data group Gpx2 shown in Fig. 4 is information consisting of, for each of the plurality of groups constituting the MTF data group Gpx2, the maximum value (or local maximum value), the focus lens position corresponding to that maximum, and the mean, variance, standard deviation, skewness, and kurtosis.
The feature quantity F_py2 of the MTF data group Gpy2 shown in Fig. 4 is information consisting of, for each of the plurality of groups constituting the MTF data group Gpy2, the maximum value (or local maximum value), the focus lens position corresponding to that maximum, and the mean, variance, standard deviation, skewness, and kurtosis.
The machine learning model 52 is a model generated by machine learning on teacher data obtained by simulation: it takes as input the resolution performance information consisting of the four feature quantities F_px1, F_py1, F_px2, and F_py2 shown in Fig. 4, and outputs the wavefront aberration of the lens device 1 before adjustment. The wavefront aberration output from the machine learning model 52 is expressed as coefficients of Zernike polynomials, but it is not limited to this as long as it is information indicating the aberration components of the wavefront aberration.
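For reference, Zernike coefficients weight orthogonal basis terms over the unit pupil. The sketch below reconstructs a wavefront map from a few low-order terms; the term selection and Noll normalization are illustrative and not taken from the patent.

    import numpy as np

    # A few low-order Zernike terms in polar pupil coordinates (rho <= 1).
    ZERNIKE_TERMS = {
        "defocus":       lambda r, t: np.sqrt(3) * (2 * r**2 - 1),
        "astigmatism_0": lambda r, t: np.sqrt(6) * r**2 * np.cos(2 * t),
        "coma_x":        lambda r, t: np.sqrt(8) * (3 * r**3 - 2 * r) * np.cos(t),
        "spherical":     lambda r, t: np.sqrt(5) * (6 * r**4 - 6 * r**2 + 1),
    }

    def wavefront(coeffs, rho, theta):
        # W(rho, theta) = sum over terms of coefficient * Zernike term.
        return sum(c * ZERNIKE_TERMS[name](rho, theta)
                   for name, c in coeffs.items())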
When the processor 51 inputs the four feature quantities F_px1, F_py1, F_px2, and F_py2 to the machine learning model 52 and acquires the wavefront aberration output in response, it generates information on the adjustment amount of the position of the second lens 12 that brings the acquired wavefront aberration into a desired condition, and sends that information to the adjustment device 6. Data associating the wavefront aberration of the lens device 1 before adjustment with the adjustment amount of the position of the second lens 12 is obtained in advance by simulation and stored in the ROM. Accordingly, the processor 51 reads from the ROM the adjustment amount of the position of the second lens 12 corresponding to the acquired wavefront aberration and sends it to the adjustment device 6.
The adjustment device 6 sets the position of the second lens 12 based on the information received from the processor 51 and performs the adjustment. Alternatively, instead of the adjustment device 6, an operator may adjust the position of the second lens 12 manually or by operating a robot, based on the adjustment amount information output from the processor 51. Note that the feature quantities need not be extracted from all of the groups.
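Putting the inference step together, the sketch below assumes a trained regressor with a scikit-learn-style predict method and pre-simulated lookup tables; the nearest-neighbor lookup is an added assumption, since the text only states that the adjustment amount corresponding to the wavefront aberration is read from the ROM.

    import numpy as np

    def derive_adjustment(model, features, aberration_table, adjustment_table):
        # features: concatenated feature quantities F_px1, F_py1, F_px2, F_py2.
        zernike = model.predict(features.reshape(1, -1))[0]
        # Pick the pre-simulated entry whose wavefront aberration is closest
        # to the prediction and return its second-lens adjustment amount.
        idx = np.argmin(np.linalg.norm(aberration_table - zernike, axis=1))
        return adjustment_table[idx]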
(method for generating machine learning model 52)
Next, a method of generating the machine learning model 52 will be described.
The machine learning model 52 is generated by machine learning on simulation results of the resolution performance information (the four feature quantities F_px1, F_py1, F_px2, and F_py2 shown in Fig. 4) of a plurality of virtual lens devices, obtained by giving assumed errors of a plurality of patterns to each optical element of the lens device 1, together with simulation results of the wavefront aberrations of those virtual lens devices. The machine learning model 52 is composed of at least one of a neural network, a convolutional neural network, Gaussian process regression, a support vector machine, multiple regression, and the like.
Fig. 6 is a diagram showing an example of the teacher data used in the machine learning. Fig. 6 shows design data with errors for n virtual lens devices A1, A2, A3, A4, A5, ..., An (n is a natural number of 2 or more) having the same configuration as the lens device 1. The virtual lens devices A1, A2, A3, A4, A5, ..., An each constitute a virtual optical system.
The design data with errors is data obtained by giving random errors to design values (shape, refractive index, etc.) of the first lens 11, the second lens 12, the third lens 13, and the fourth lens 14 of the lens apparatus 1 and design values (arrangement interval in the optical axis direction Z, etc.) concerning the arrangement of the first lens 11, the second lens 12, the third lens 13, and the fourth lens 14.
The errors given to the virtual lens devices A1 to An are errors within a range of up to several times the various assumed errors, such as assembly errors of the first lens 11, the second lens 12, the third lens 13, and the fourth lens 14 and manufacturing errors of those lenses, for example, errors given randomly by Monte Carlo simulation.
A simulation result is associated with each of the n sets of design data with errors. The feature quantities in the simulation results constitute the 1st simulation result: they are the simulated feature quantities (first feature quantities or second feature quantities) of the MTF data groups of the four patterns obtained when the corresponding virtual lens device Ak (k is any one of 1 to n) is placed in the system of Fig. 2. The wavefront aberration in the simulation results constitutes the 2nd simulation result: it is the simulated wavefront aberration (Zernike polynomial coefficients) of the corresponding virtual lens device Ak. Each simulation result shown in Fig. 6 is acquired and used as teacher data, and the machine learning model 52 is generated by machine learning on this teacher data.
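A sketch of this teacher-data generation and training loop. The simulator stand-ins (simulate_features, simulate_zernike) and the array sizes are hypothetical placeholders for the optical simulation; any regressor from the families listed above would serve, and a small neural network is used here.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    NUM_FEATURES, NUM_ZERNIKE, N_DEVICES = 56, 15, 5000  # illustrative sizes
    rng = np.random.default_rng(0)

    def simulate_features(errors):
        # Stand-in for the optical simulation yielding the 1st simulation
        # result (MTF feature quantities) of one virtual lens device.
        return rng.normal(size=NUM_FEATURES)  # placeholder values only

    def simulate_zernike(errors):
        # Stand-in for the 2nd simulation result (Zernike coefficients).
        return rng.normal(size=NUM_ZERNIKE)  # placeholder values only

    # Monte Carlo: give each virtual device A1..An random element errors.
    errors = [rng.normal(scale=0.01, size=8) for _ in range(N_DEVICES)]
    X = np.stack([simulate_features(e) for e in errors])  # teacher inputs
    Y = np.stack([simulate_zernike(e) for e in errors])   # teacher targets

    model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000)
    model.fit(X, Y)  # learns the resolution-performance-to-wavefront mapping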
As described above, the processor 51 derives the wavefront aberration of the lens device 1 from the machine learning model 52 and the plurality of pieces of resolution performance information obtained while changing the distance between the imaging position of the lens device 1 and the imaging element 3. Only the hardware needed to obtain the captured image group is required for deriving the wavefront aberration, so the wavefront aberration can be derived easily and inexpensively.
The machine learning model 52 takes as input the resolution performance information consisting of the four feature quantities F_px1, F_py1, F_px2, and F_py2 shown in Fig. 4 and outputs the wavefront aberration of the lens device 1 before adjustment. By configuring the machine learning model 52 to output wavefront aberration from a plurality of pieces of resolution performance information obtained from images of patterns of different image heights and different types, the accuracy with which the machine learning model 52 predicts wavefront aberration can be improved.
As the resolution performance information (feature quantity) corresponding to one pattern of the resolution chart 2, the maximum value (or local maximum value) characterizing the MTF data group, the focus lens position corresponding to that maximum, and the mean, variance, standard deviation, skewness, and kurtosis can be used. By learning the relationship between the characteristics of the MTF data group represented by these pieces of information and the wavefront aberration, the accuracy of the wavefront aberration prediction by the machine learning model 52 can be improved.
If the feature quantity input to the machine learning model 52 (and used as teacher data) includes at least one of the maximum value (or local maximum value), the focus lens position corresponding to that maximum, the mean, the variance, the standard deviation, the skewness, and the kurtosis of the MTF data group, the wavefront aberration can be predicted by the machine learning model 52.
Verification has shown that if the feature quantity input to the machine learning model 52 (and used as teacher data) includes at least the maximum value (or local maximum value), the accuracy of the wavefront aberration prediction by the machine learning model 52 can be improved, and that if the feature quantity further includes the focus lens position corresponding to that maximum, the skewness, and the kurtosis, the accuracy can be improved further. In other words, even if the mean, the variance, and the standard deviation are omitted from the first or second feature quantity, the effect on the prediction accuracy of the machine learning model 52 is small.
(modification)
In the lens device 1 described above, the optical elements are all lenses, but the optical elements of the lens device 1 may also include optical elements other than lenses, such as a diaphragm, a prism, or a mirror. The number of optical elements to be adjusted in the lens device 1 may also be more than one.
The case where the lens device 1 is used in an imaging apparatus has been described, but the lens device 1 may also be used in a projector. In that case, in the position adjustment system shown in Fig. 2, the resolution chart 2 is disposed at the position of the imaging element 3, the resolution chart 2 is projected onto a screen through the lens device 1, and the imaging element 3 captures the image projected on the screen to acquire the captured image group. The lens device 1 may also be an optical system used in an optical apparatus other than a camera or a projector. The position adjustment system shown in Fig. 2 is suitable whenever the amount of positional deviation of the optical elements of an optical system during assembly needs to be measured.
The processor 51 obtains the resolution performance values (MTF values) using captured images, but the present invention is not limited to captured images; the resolution performance values may be obtained using other sensors.
In the above description, the resolution performance value is the MTF value, but the point spread intensity or the line spread intensity may be used instead of the MTF value.
As described above, at least the following matters are disclosed in the present specification.
(1)
A wavefront aberration deriving method for deriving a wavefront aberration of a lens optical system, the wavefront aberration deriving method comprising the steps of:
step 1, obtaining resolution performance information of the lens optical system; and
step 2, inputting the resolution performance information into a machine learning model, and obtaining the wavefront aberration of the lens optical system from the machine learning model.
(2)
The wavefront aberration deriving method according to (1), wherein,
the machine learning model outputs coefficients of a zernike polynomial as aberration components of the wavefront aberration.
(3)
The wavefront aberration deriving method according to (1) or (2), wherein,
the resolution performance information is information based on a resolution performance value including a resolution, a point spread intensity, or a line spread intensity.
(4)
The wavefront aberration deriving method according to any one of (1) to (3), wherein,
in step 1,
the resolution performance value corresponding to the relative position between an imaging element, which images a subject through the lens optical system, and the imaging position of the lens optical system is acquired from captured images obtained by imaging the subject with the imaging element in a state where the relative position is changed to a plurality of values, and the resolution performance information is acquired from the resolution performance values corresponding to the relative positions.
(5)
The wavefront aberration deriving method according to (4), wherein,
in step 1, a feature quantity of a data group including the resolution performance values corresponding to the relative positions is derived, and the feature quantity is acquired as the resolution performance information.
(6)
The wavefront aberration deriving method according to (5), wherein,
in step 1, the feature quantities of a plurality of groups included in the data group are derived.
(7)
The wavefront aberration deriving method according to (5) or (6), wherein,
in step 1, the resolution performance values corresponding to different image heights of the captured image are acquired from the captured image obtained by imaging the subject with the imaging element, which images the subject through the lens optical system, and the feature quantities corresponding to the different image heights of the captured image are derived.
(8)
The wavefront aberration deriving method according to any one of (5) to (7), wherein,
the feature quantity includes at least one of a maximum value, the relative position corresponding to the maximum value, an average value, a variance, a standard deviation, a skewness, and a kurtosis.
(9)
The wavefront aberration deriving method according to (8), wherein,
The feature quantity includes at least the maximum value.
(10)
The wavefront aberration deriving method according to any one of (5) to (7), wherein,
in step 1, the data group is approximated by at least one Gaussian function, and at least one of the maximum value of the approximating Gaussian function, the relative position corresponding to the maximum value, the mean, the standard deviation, the variance, the skewness, and the kurtosis is derived as the feature quantity.
(11)
The wavefront aberration deriving method according to (10), wherein,
the feature quantity includes at least the maximum value.
(12)
The wavefront aberration deriving method according to any one of (8) to (11), wherein,
the skewness and the kurtosis are derived using the mean, the standard deviation, and a moment generating function.
(13)
The wavefront aberration deriving method according to any one of (1) to (12), wherein,
the machine learning model learns a correspondence relationship between resolution performance information and wavefront aberration from simulation results of resolution performance information of a plurality of virtual optical systems obtained by giving a plurality of patterns of assumed errors to optical elements of the lens optical system and simulation results of wavefront aberration of the plurality of virtual optical systems.
(14)
A method for generating a machine learning model, wherein,
a 1st simulation result of resolution performance information of a plurality of virtual optical systems, obtained by giving assumed errors of a plurality of patterns to optical elements of a lens optical system, is acquired,
a 2nd simulation result of wavefront aberrations of the plurality of virtual optical systems is acquired,
and a machine learning model is generated by machine-learning the correspondence between resolution performance information and wavefront aberration from the 1st simulation result and the 2nd simulation result.
(15)
The method for generating a machine learning model according to (14), wherein,
the wavefront aberration is a coefficient of a zernike polynomial.
(16)
The method for generating a machine learning model according to (14) or (15), wherein,
the resolution performance information is information based on a resolution performance value including a resolution, a point spread intensity, or a line spread intensity.
(17)
The method for generating a machine learning model according to any one of (14) to (16), wherein,
the resolution performance information is information based on resolution performance values corresponding to the relative position between an image pickup position set in the optical axis direction of the virtual optical system and the imaging position of the virtual optical system, in a state where the relative position takes a plurality of values.
(18)
The method for generating a machine learning model according to (17), wherein,
the resolution performance information is a feature quantity of a data group including the resolution performance values corresponding to the relative positions.
(19)
The method for generating a machine learning model according to (18), wherein,
the resolution performance information is the feature quantities of a plurality of groups included in the data group.
(20)
The method for generating a machine learning model according to (18) or (19), wherein,
the 1st simulation result is a simulation result of feature quantities of data groups including the resolution performance values corresponding to different image heights at the image pickup position set in the optical axis direction of the virtual optical system.
(21)
The method for generating a machine learning model according to any one of (18) to (20), wherein,
the feature quantity includes at least one of a maximum value, the relative position corresponding to the maximum value, an average value, a variance, a standard deviation, a skewness, and a kurtosis.
(22)
The method for generating a machine learning model according to (21), wherein,
the feature quantity includes at least the maximum value.
(23)
The method for generating a machine learning model according to any one of (18) to (20), wherein,
the feature quantity includes at least one of the maximum value of the Gaussian function when the data group is approximated by at least one Gaussian function, the relative position corresponding to the maximum value, the mean, the standard deviation, the variance, the skewness, and the kurtosis.
(24)
The method for generating a machine learning model according to (23), wherein,
the feature quantity includes at least the maximum value.
(25)
The method for generating a machine learning model according to any one of (21) to (24), wherein,
the skewness and the kurtosis are derived using the mean, the standard deviation, and a moment generating function.
(26)
A method of manufacturing a lens optical system having a plurality of optical elements including an optical element to be adjusted, the method comprising the steps of:
acquiring resolution performance information of the lens optical system;
inputting the resolution performance information into a machine learning model, and obtaining the wavefront aberration of the lens optical system from the machine learning model; and
setting the position of the optical element to be adjusted based on the wavefront aberration.
(27)
A wavefront aberration deriving device derives a wavefront aberration of a lens optical system,
The wavefront aberration deriving device includes a processor that performs: the method includes the steps of acquiring resolution performance information of the lens optical system, inputting the resolution performance information into a machine learning model, and obtaining wavefront aberration of the lens optical system from the machine learning model.
(28)
A wavefront aberration derivation program that derives a wavefront aberration of a lens optical system, the wavefront aberration derivation program causing a computer to execute the steps of:
step 1, obtaining resolution performance information of the lens optical system; and
step 2, inputting the resolution performance information into a machine learning model, and obtaining the wavefront aberration of the lens optical system from the machine learning model.
While various embodiments have been described above with reference to the drawings, the present invention is, needless to say, not limited to these examples. It will be apparent to those skilled in the art that various changes and modifications can be conceived within the scope of the claims, and it is understood that these naturally also belong to the technical scope of the present invention. The constituent elements of the above embodiments may be combined arbitrarily without departing from the gist of the invention.
The present application is based on Japanese Patent Application No. 2021-061084 filed on March 3, 2021, the contents of which are incorporated herein by reference.
Symbol description
100-position adjustment system, 1-lens device, 2-resolution chart, 21-first pattern, 22-second pattern, 24-virtual circle, PX1, PY1, PX2, PY2-patterns, 3-imaging element, 4-focus driving device, 5-computer, 51-processor, 52-machine learning model, 6-adjustment device, 10-lens barrel, 11-first lens, 12-second lens, 13-third lens, 14-fourth lens (focus lens), K-optical axis, Z-optical axis direction, P1, P2, P3-maxima, B1, B2, B3, B4-minima, G1, G2, G3-Gaussian functions.

Claims (28)

1. A wavefront aberration deriving method of deriving a wavefront aberration of a lens optical system, the wavefront aberration deriving method comprising the steps of:
step 1, obtaining resolution performance information of the lens optical system; and
step 2, inputting the resolution performance information into a machine learning model, and obtaining the wavefront aberration of the lens optical system from the machine learning model.
2. The wavefront aberration deriving method according to claim 1, wherein,
The machine learning model outputs coefficients of a zernike polynomial as aberration components of the wavefront aberration.
3. The wavefront aberration deriving method according to claim 1 or 2, wherein,
the resolution performance information is information based on a resolution performance value including a resolution, a point spread intensity, or a line spread intensity.
4. A wavefront aberration deriving method according to any one of claims 1 to 3, wherein,
in step 1,
the resolution performance value corresponding to the relative position between an imaging element, which images a subject through the lens optical system, and the imaging position of the lens optical system is acquired from captured images obtained by imaging the subject with the imaging element in a state where the relative position is changed to a plurality of values, and the resolution performance information is acquired from the resolution performance values corresponding to the relative positions.
5. The wavefront aberration deriving method according to claim 4, wherein,
in step 1, a feature quantity of a data group including the resolution performance values corresponding to the relative positions is derived, and the feature quantity is acquired as the resolution performance information.
6. The wavefront aberration deriving method according to claim 5, wherein,
in step 1, the feature quantities of a plurality of groups included in the data group are derived.
7. The wavefront aberration deriving method according to claim 5 or 6, wherein,
in step 1, the resolution performance values corresponding to different image heights of the captured image are acquired from the captured image obtained by imaging the subject with the imaging element, which images the subject through the lens optical system, and the feature quantities corresponding to the different image heights of the captured image are derived.
8. The wavefront aberration deriving method according to any one of claims 5 to 7, wherein,
the feature quantity includes at least one of a maximum value, the relative position corresponding to the maximum value, an average value, a variance, a standard deviation, a skewness, and a kurtosis.
9. The wavefront aberration deriving method according to claim 8, wherein,
the feature quantity includes at least the maximum value.
10. The wavefront aberration deriving method according to any one of claims 5 to 7, wherein,
in step 1, the data group is approximated with at least one Gaussian function, and at least one of the maximum value of the approximating Gaussian function, the relative position corresponding to the maximum value, the mean, the standard deviation, the variance, the skewness, and the kurtosis is derived as the feature quantity.
11. The wavefront aberration deriving method according to claim 10, wherein,
the feature quantity includes at least the maximum value.
12. The wavefront aberration deriving method according to any one of claims 8 to 11, wherein,
the skewness and the kurtosis are derived using the average value, the standard deviation, and a moment-generating function.
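For a Gaussian approximation, deriving skewness and kurtosis from the mean, the standard deviation, and the moment-generating function can be checked symbolically; the normal-distribution MGF M(t) = exp(mu*t + sigma^2*t^2/2) yields the raw moments as derivatives at t = 0 (a worked illustration, not the publication's derivation):

    import sympy as sp

    t, mu = sp.symbols("t mu", real=True)
    sigma = sp.symbols("sigma", positive=True)

    # Moment-generating function of a normal distribution.
    M = sp.exp(mu * t + sigma**2 * t**2 / 2)

    # Raw moments E[X**n] = n-th derivative of M at t = 0.
    raw = [sp.diff(M, t, n).subs(t, 0) for n in range(5)]

    # Central 3rd and 4th moments from the raw moments.
    m3 = sp.expand(raw[3] - 3 * raw[1] * raw[2] + 2 * raw[1] ** 3)
    m4 = sp.expand(raw[4] - 4 * raw[1] * raw[3]
                   + 6 * raw[1] ** 2 * raw[2] - 3 * raw[1] ** 4)
    print(sp.simplify(m3 / sigma**3))   # skewness: 0 for a Gaussian
    print(sp.simplify(m4 / sigma**4))   # kurtosis: 3 for a Gaussian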
13. The wavefront aberration deriving method according to any one of claims 1 to 12, wherein,
the machine learning model is a model that has learned a correspondence relationship between resolution performance information and wavefront aberration from simulation results of the resolution performance information of a plurality of virtual optical systems, obtained by giving a plurality of patterns of assumed errors to the optical elements of the lens optical system, and from simulation results of the wavefront aberration of the plurality of virtual optical systems.
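Training-pair generation might be organized as below; apply_errors, simulate_resolution_info and simulate_wavefront stand in for an optical simulator (e.g. a ray tracer), and the error magnitudes shown are invented placeholders.

    import numpy as np

    def build_training_pairs(apply_errors, simulate_resolution_info,
                             simulate_wavefront, nominal_design, n_patterns):
        rng = np.random.default_rng(0)
        X, y = [], []
        for _ in range(n_patterns):
            # One pattern of assumed manufacturing errors per virtual system.
            errors = {
                "decenter_um": rng.normal(0.0, 5.0, size=2),
                "tilt_arcmin": rng.normal(0.0, 1.0, size=2),
                "spacing_um": rng.normal(0.0, 10.0),
            }
            virtual = apply_errors(nominal_design, errors)
            X.append(simulate_resolution_info(virtual))  # 1st simulation result
            y.append(simulate_wavefront(virtual))        # 2nd simulation result
        return np.asarray(X), np.asarray(y)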
14. A method for generating a machine learning model, wherein,
acquiring a 1st simulation result of resolution performance information of a plurality of virtual optical systems obtained by giving a plurality of patterns of assumed errors to the optical elements of a lens optical system,
acquiring a 2nd simulation result of wavefront aberration of the plurality of virtual optical systems, and
performing machine learning of a correspondence relationship between the resolution performance information and the wavefront aberration, based on the 1st simulation result and the 2nd simulation result, to generate a machine learning model.
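The publication does not name a model family; as one sketch, a small multilayer perceptron regressor fits the correspondence, with random stand-in arrays in place of the two simulation results (all shapes are assumptions):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 12))   # stand-in 1st result: 12 features/system
    y = rng.normal(size=(2000, 9))    # stand-in 2nd result: 9 Zernike coeffs

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
    model.fit(X_tr, y_tr)   # learn: resolution info -> wavefront aberration
    print("held-out R^2:", model.score(X_te, y_te))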
15. The method for generating a machine learning model of claim 14, wherein,
the wavefront aberration is a coefficient of a Zernike polynomial.
16. The method for generating a machine learning model according to claim 14 or 15, wherein,
the resolution performance information is information based on a resolution performance value including resolution, point spread intensity, or line spread intensity.
17. The method for generating a machine learning model according to any one of claims 14 to 16, wherein,
the resolution performance information is information based on resolution performance values corresponding to a relative position between an image capturing position set in an optical axis direction of the virtual optical system and an imaging position of the virtual optical system, in a state where the relative position is set to a plurality of values.
18. The method for generating a machine learning model of claim 17, wherein,
the resolution performance information is a feature quantity of a data set including the resolution performance value corresponding to the relative position.
19. The method for generating a machine learning model of claim 18, wherein,
the resolution performance information is the feature quantities of a plurality of groups included in the data set.
20. The method for generating a machine learning model according to claim 18 or 19, wherein,
the 1st simulation result is a simulation result of feature quantities of data sets including resolution performance values corresponding to different image heights at an image capturing position set in the optical axis direction of the virtual optical system.
21. The method for generating a machine learning model according to any one of claims 18 to 20, wherein,
the feature quantity includes at least one of a maximum value, the relative position corresponding to the maximum value, an average value, a variance, a standard deviation, a skewness, and a kurtosis.
22. The method for generating a machine learning model of claim 21, wherein,
the feature quantity includes at least the maximum value.
23. The method for generating a machine learning model according to any one of claims 18 to 20, wherein,
the feature quantity includes, when the data set is approximated by at least one Gaussian function, at least one of a maximum value of the Gaussian function, the relative position corresponding to the maximum value, an average value, a standard deviation, a variance, a skewness, and a kurtosis.
24. The method for generating a machine learning model of claim 23, wherein,
the feature quantity includes at least the maximum value.
25. The method for generating a machine learning model according to any one of claims 21 to 24, wherein,
the skewness and the kurtosis are derived using the average value, the standard deviation, and a moment-generating function.
26. A method of manufacturing a lens optical system having a plurality of optical elements including an optical element to be adjusted, the method comprising the steps of:
acquiring resolution performance information of the lens optical system;
inputting the resolution performance information into a machine learning model and obtaining the wavefront aberration of the lens optical system from the machine learning model; and
setting a position of the optical element to be adjusted according to the wavefront aberration.
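One adjustment pass could use the derived aberration together with a precomputed sensitivity matrix (the change in each aberration coefficient per unit motion of the element to be adjusted); every hook and the matrix itself are hypothetical stand-ins for the assembly bench, not anything this publication prescribes.

    import numpy as np

    def adjust_element(model, measure_info, move_element, sensitivity):
        # Derive the wavefront aberration from measured resolution info.
        coeffs = np.ravel(model.predict(np.atleast_2d(measure_info())))
        # Least-squares motion that best cancels the derived aberration;
        # sensitivity has shape (n_coefficients, n_motion_axes).
        motion, *_ = np.linalg.lstsq(sensitivity, -coeffs, rcond=None)
        move_element(motion)
        return coeffs, motion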
27. A wavefront aberration deriving device that derives a wavefront aberration of a lens optical system,
the wavefront aberration deriving device comprising a processor that executes:
acquiring resolution performance information of the lens optical system; and
inputting the resolution performance information into a machine learning model and obtaining the wavefront aberration of the lens optical system from the machine learning model.
28. A wavefront aberration deriving program for deriving a wavefront aberration of a lens optical system, the program causing a computer to execute the steps of:
step 1, obtaining resolution performance information of the lens optical system; and
step 2, inputting the resolution performance information into a machine learning model and obtaining the wavefront aberration of the lens optical system from the machine learning model.
CN202180095463.9A 2021-03-31 2021-12-20 Wavefront aberration deriving method, machine learning model generating method, lens optical system manufacturing method, wavefront aberration deriving device, and wavefront aberration deriving program Pending CN116964425A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2021061084 2021-03-31
JP2021-061084 2021-03-31
PCT/JP2021/047044 WO2022209056A1 (en) 2021-03-31 2021-12-20 Wavefront aberration derivation method, machine learning model generation method, lens optical system manufacturing method, wavefront aberration derivation device, and wavefront aberration derivation program

Publications (1)

Publication Number Publication Date
CN116964425A (en) 2023-10-27

Family

ID=83455792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180095463.9A Pending CN116964425A (en) 2021-03-31 2021-12-20 Wavefront aberration deriving method, machine learning model generating method, lens optical system manufacturing method, wavefront aberration deriving device, and wavefront aberration deriving program

Country Status (3)

Country Link
JP (1) JPWO2022209056A1 (en)
CN (1) CN116964425A (en)
WO (1) WO2022209056A1 (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3342423B2 * 1998-10-09 2002-11-11 Hoya Corp Eye optical system simulation device
JP2003075295A (en) * 2001-09-03 2003-03-12 Seiko Epson Corp Method and system for evaluating lens
JP2012185089A (en) * 2011-03-07 2012-09-27 Nikon Corp Measurement method and measurement device
JP5759232B2 * 2011-04-04 2015-08-05 Canon Inc Measuring device
JP2019191121A * 2018-04-27 2019-10-31 Canon Inc Measurement method, adjustment method, and optical system manufacturing method
JP7277101B2 * 2018-10-11 2023-05-18 Canon Inc Aberration estimation method, aberration estimation device, program and storage medium
WO2020158325A1 * 2019-01-31 2020-08-06 Fujifilm Corp Optical component position adjustment support device, method for supporting optical component position adjustment, optical component position adjustment support program, and method for manufacturing lens device
JP2021060353A * 2019-10-09 2021-04-15 Canon Inc Method for estimating internal error of optical system and measurement device
JP2021196230A * 2020-06-11 2021-12-27 Canon Inc Optical characteristic acquisition method

Also Published As

Publication number Publication date
WO2022209056A1 (en) 2022-10-06
JPWO2022209056A1 (en) 2022-10-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination