CN114598859A - Method for prejudging calibration performance of lens assembly to be assembled and method for assembling camera module


Info

Publication number: CN114598859A
Application number: CN202011417294.XA
Authority: CN (China)
Prior art keywords: curve, lens assembly, peak, focusing, defocusing
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 周广福, 钟凌, 廖海龙, 潘梦鑫, 曾权
Current assignee: Ningbo Sunny Opotech Co Ltd
Original assignee: Ningbo Sunny Opotech Co Ltd
Application filed by Ningbo Sunny Opotech Co Ltd
Priority applications: CN202011417294.XA (published as CN114598859A); CN202011520162.XA (published as CN114598860A)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N17/002: Diagnosis, testing or measuring for television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Lens Barrels (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a method for prejudging the calibration performance of a lens assembly to be assembled, comprising the following steps: 1) acquiring, on a test optical path, measured defocus curves of a plurality of identification patterns in the central field of view and the edge field of view of the lens assembly to be assembled; 2) acquiring the peak position of the defocus curve of each identification pattern; 3) determining focusing-correction compensation parameters for the lens assembly; 4) on the premise that the astigmatism, field curvature, and resolution peak of the lens assembly itself remain unchanged, calculating the peak position of a simulated defocus curve from each measured defocus curve according to the compensation parameters; and 5) calculating the sharpness on each field axis corresponding to each identification pattern under the determined compensation parameters, and thereby judging whether the imaging quality of the lens assembly to be assembled meets the standard. The application also provides a camera module assembly method based on this calibration-performance prejudging method. The application can improve the focusing and assembly efficiency of the photosensitive assembly and the lens assembly.

Description

Method for prejudging calibration performance of lens assembly to be assembled and method for assembling camera module
Technical Field
The invention relates to the technical field of camera modules, and in particular to a method for prejudging the calibration performance of a lens assembly to be assembled and a method for assembling a camera module.
Background
With the popularization of mobile electronic devices, technologies for the camera modules that help users capture images (e.g., video or still pictures) have developed rapidly, and in recent years camera modules have also been widely applied in fields such as medical treatment, security, and industrial production. Users increasingly demand high imaging quality from camera modules, so the demand for modules with high imaging quality keeps rising. In addition, to meet diverse photographing requirements, more and more electronic terminals are equipped with array camera modules. An array camera module comprises at least two camera modules, and some carry even four or five. This sharply increases both the quality and quantity requirements for camera modules, which challenges existing production capacity.
A camera module generally includes a photosensitive assembly and a lens assembly. The photosensitive assembly includes a photosensitive chip, sometimes referred to as an image sensor, attached to a circuit board; the circuit board, the image sensor, a lens holder, and other components together form the photosensitive assembly. The lens assembly typically includes an optical lens. A conventional method of assembling a camera module is to prefabricate the lens assembly and the photosensitive assembly separately and then join them (e.g., by attachment). During assembly, the relative position of the lens assembly and the photosensitive assembly, especially the relative position of the optical axis of the optical lens and the photosensitive element, has a decisive influence on the imaging quality of the camera module, so the two must be accurately positioned relative to each other. For low-pixel camera modules, mechanical alignment can be used to assemble and fix the two, but the positioning accuracy of this approach is not high and may negatively affect imaging quality, so it is often unsuitable for high-end camera module product lines.
To position the optical assembly and the photosensitive assembly accurately, active calibration is used to adjust their relative positions during assembly, thereby improving the imaging quality of the finished camera module. Specifically, one of the optical assembly and the photosensitive assembly of the module (i.e., the camera module) is taken as the reference, and the other is actively adjusted so that the normal of the photosensitive chip is parallel to the optical axis of the lens assembly and the center of the photosensitive chip coincides with the optical center of the lens assembly; the four corners and the central field-of-view region of the module can then reach optimal imaging sharpness, maximizing the imaging quality of the module. More specifically, in one assembly approach, the photosensitive assembly to be assembled is fixed in place and its photosensitive chip powered on, while a mechanical device clamps the lens assembly and adjusts it in six degrees of freedom. In another assembly approach, the lens assembly is clamped and fixed while the photosensitive assembly is mounted on an adjustment stage movable in multiple degrees of freedom; by sweeping defocus curves, the position of the lens assembly relative to the photosensitive assembly is adjusted to ensure that the image center is sharp and the resolution at the four corners of the picture is uniform, and the lens assembly is then fixed (e.g., bonded) to the photosensitive assembly at the appropriate position. Assembly based on active calibration can effectively improve product imaging quality; however, conventional active calibration completes the assembly of the optical assembly and the photosensitive assembly of a single module through many consecutive steps, so production is time-consuming and inefficient, UPH (units per hour) is hard to raise, and it is difficult to handle large module-production volumes on short schedules.
Specifically, during active calibration it is often necessary to measure the defocus curves of the lens assembly to be assembled by moving the lens (e.g., via a motor) or the photosensitive chip, determine the actual tilt of the lens assembly from the defocus curves, and then adjust the tilt with the clamping jaw that holds the lens assembly (e.g., level out the tilt). However, mechanical adjustment by the clamping jaw carries a certain systematic error, so the lens or the photosensitive chip must be moved again to measure the defocus curves of the adjusted lens assembly and calculate its actual tilt from the newly measured optical imaging data; if the tilt still does not meet the standard, the jaw must adjust again and the defocus curves must be swept again, until the actual tilt of the lens assembly to be assembled falls within the preset range (for example, within ±0.01°). Because the lens or the photosensitive chip must be moved many times to sweep defocus curves and measure the resolution of the optical system at multiple positions, each defocus sweep takes considerable time and reduces production efficiency. In particular, some lens assemblies to be assembled cannot reach the required imaging quality through active calibration because of their own defects (e.g., excessive manufacturing tolerances of their optical elements, or excessive assembly tolerances accumulated while assembling those elements). For such a lens assembly (which may be called an NG lens assembly, i.e., one that cannot meet the preset imaging-quality requirement), active calibration wastes a great deal of time and seriously hurts production efficiency.
On the other hand, to improve assembly efficiency, glue is usually applied to the photosensitive assembly before active calibration so that it can be bonded to the lens assembly immediately after calibration is completed. However, if active calibration ultimately reveals that the current lens assembly to be assembled is an NG lens assembly, it is difficult to substitute a new lens assembly in time for the already-glued photosensitive assembly, so a possibly good photosensitive assembly is scrapped along with it. The increased scrap rate raises cost.
Disclosure of Invention
The present invention aims to overcome the deficiencies of the prior art and provide a solution that can quickly and accurately prejudge the calibration performance of a lens assembly to be assembled, thereby improving camera module assembly efficiency and reducing production cost.
To solve the above technical problem, the present invention provides a method for prejudging the calibration performance of a lens assembly to be assembled, comprising: 1) placing the lens assembly to be assembled in a test optical path, and acquiring measured defocus curves of a plurality of identification patterns in the central field of view and the edge field of view of the lens assembly to be assembled; 2) acquiring the peak position of the defocus curve of each identification pattern; 3) determining focusing-correction compensation parameters for the lens assembly, the compensation parameters comprising parameters characterizing posture and/or position adjustments of the lens assembly; 4) on the premise that the astigmatism, field curvature, and resolution peak of the lens assembly itself remain unchanged, calculating, from each measured defocus curve and according to the determined compensation parameters, the peak position of a simulated defocus curve, the simulated defocus curve being the defocus curve that would be obtained after adjusting the tilt angle and axial position of the lens assembly according to the determined compensation parameters; and 5) based on the simulated peak positions calculated in step 4), calculating the sharpness on each field axis corresponding to each identification pattern under the determined compensation parameters, and thereby judging whether the imaging quality of the lens assembly to be assembled meets the standard.
In step 2), the measured defocus curves of the plurality of identification patterns are respectively fitted, and the peak position of each identification pattern's fitted defocus curve is then obtained. Step 3) further comprises determining the focusing type and focusing mode used for focusing correction of the lens assembly, where the focusing type comprises S-direction focusing, T-direction focusing, or average focusing, and the focusing mode comprises center focusing or edge focusing. In step 4), the simulated defocus curve is the defocus curve obtained, under the determined focusing type and focusing mode, after adjusting the tilt angle and axial position of the lens assembly according to the determined compensation parameters. It is noted that in some embodiments, when the measured defocus curve has high measurement accuracy, step 2) may be omitted, i.e., the peak position and corresponding peak value may be obtained directly from the measured defocus curve. The peak value and peak position are used in the calculations of the subsequent steps, which simulate the posture adjustment (i.e., tilt adjustment) of the lens assembly and compute the simulated defocus curve after that adjustment. In some embodiments, axial position adjustment of the lens assembly may also be simulated, and the simulated defocus curve after the axial adjustment, or after both tilt and axial adjustment, computed. The basis for these calculations may include the original measured defocus curve of each identification pattern and the peak position of each identification pattern's fitted defocus curve. The peak position represents the sharpest imaging position of the corresponding identification pattern; from that position and the compensation parameters, the sharpest imaging position of each identification pattern after virtual correction of the lens assembly, i.e., the peak position of the virtually corrected simulated defocus curve, can be found. Thus, without actually moving the lens assembly, simulation can estimate the defocus curve the lens assembly would exhibit if its posture (or posture and axial position) were corrected according to the compensation parameters, so that its calibration performance can be prejudged.
In the measured defocus curve and the simulated defocus curve, resolution is represented by an SFR value. Step 1) further comprises: when the knife-edge angle of the target (test chart) in the test optical path is unsuitable for the SFR algorithm, rotating the knife-edge angle based on an affine transformation to match the SFR algorithm, and then measuring the measured defocus curve.
Step 1) comprises the following substeps: 11) first obtaining, by affine transformation, a rotation matrix for the identification pattern, the rotation matrix being able to rotate the original knife-edge angle of the current target to a target knife-edge angle lying in the angle range the SFR algorithm expects; 12) then converting the original target image captured on the test optical path into a target image with the target knife-edge angle based on the rotation matrix; and 13) applying the SFR algorithm to the target image with the target knife-edge angle to obtain SFR values, thereby obtaining the measured defocus curve. A minimal sketch of these substeps appears below.
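As an illustration of substeps 11)-13), the following Python sketch rotates a knife-edge region of interest with OpenCV's affine-warp routines. It is a minimal sketch, not the patent's implementation; compute_sfr is a hypothetical placeholder for whatever ISO 12233-style SFR routine the test station already provides, and the 45° and 5° angles in the usage comment are arbitrary examples.

```python
import cv2
import numpy as np

def rotate_to_target_edge_angle(roi: np.ndarray, original_deg: float,
                                target_deg: float) -> np.ndarray:
    """Rotate a knife-edge ROI so its edge angle falls in the range the
    SFR algorithm expects (substeps 11 and 12)."""
    h, w = roi.shape[:2]
    center = (w / 2.0, h / 2.0)
    # Substep 11: rotation matrix of the affine transformation.
    m = cv2.getRotationMatrix2D(center, target_deg - original_deg, 1.0)
    # Substep 12: resample the captured target image at the target angle.
    return cv2.warpAffine(roi, m, (w, h), flags=cv2.INTER_LINEAR)

# Substep 13: feed the rotated ROI into the station's existing SFR routine,
# e.g. sfr = compute_sfr(rotate_to_target_edge_angle(roi, 45.0, 5.0)),
# where compute_sfr is a hypothetical placeholder.
```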
In step 1), the edge field of view is characterized by four identification patterns located at the upper left, upper right, lower left, and lower right.
In step 2), the peak position of the fitted defocus curve is obtained as follows: 21) searching for the maximum value in the measured defocus curve and the axial position corresponding to that maximum; 22) fitting the measured defocus curve with an N-th-order polynomial, N being an integer, to obtain the fitted defocus curve; 23) then searching for each local-maximum point of the fitted defocus curve and its corresponding axial position; and 24) when the difference between some local maximum of the fitted defocus curve and the maximum of the measured defocus curve is smaller than the measured maximum multiplied by a preset threshold ratio, directly taking that local maximum as the peak of the fitted defocus curve, thereby obtaining the peak position of the fitted defocus curve.
The method for obtaining the peak position of the fitted defocus curve further comprises: when the peak of the fitted defocus curve cannot be determined in step 24), executing step 25): fitting the measured defocus curve again with a K-th-order polynomial to obtain a secondarily fitted defocus curve, and finally obtaining the peak value and peak position from the secondarily fitted defocus curve, where K is less than N, N is 6, 7, or 8, and K is 4 or 5.
In step 24), when the fitted defocus curve has a plurality of peaks, the average of the plurality of peaks is calculated by a centroid method, converting the plurality of peak positions into a single peak position; when the fitted defocus curve has only one peak, or no peak is found, step 25) is executed: fitting the measured defocus curve again with a K-th-order polynomial to obtain a secondarily fitted defocus curve, and finally obtaining the peak value and peak position from the secondarily fitted defocus curve, where K is less than N, N is 6, 7, or 8, and K is 4 or 5.
Step 25) further comprises: selecting, from the measured defocus curve, the measured data points within a neighborhood of the maximum, and performing the secondary K-th-order polynomial fit on those neighborhood data points.
In step 3), an image-plane tilt angle is obtained from the sharp-imaging positions corresponding to the identification patterns of the edge field of view; the tilt compensation amount and compensation direction needed to level the image plane are then calculated and taken as the compensation parameters.
In step 3), the compensation parameters are set according to an artificial intelligence algorithm.
In step 3), a human-machine interface is provided and the user is prompted to input the compensation parameters.
In step 3), the compensation parameters include an axial position compensation amount and direction, and a tilt angle compensation amount and direction.
In step 1), the edge field of view is characterized by four identification patterns located at the four corners: upper left, upper right, lower left, and lower right. Step 4) comprises the following substep: 41) assuming that tilt adjustment of the lens assembly does not change the astigmatism, field curvature, or resolution peak of the optical test system, constructing a system of equations from the following four conditions:
Condition 1: (pLT + pRT) - (pLB + pRB) = W·tanθ_y
Condition 2: (pLT + pLB) - (pRT + pRB) = H·tanθ_x
Condition 3: (pLT + pRT + pLB + pRB)/4 = pCT + CF
Condition 4: for the corner among the four whose resolution differs least from the central field of view, the peak position remains unchanged during the virtual correction.
Here CF denotes the field curvature; pLT, pRT, pLB, and pRB denote the peak positions corresponding to the upper-left, upper-right, lower-left, and lower-right identification patterns, respectively; pCT denotes the peak position corresponding to the central field of view; W and H denote the distances between the centers of adjacent edge-field identification patterns in the x-axis and y-axis directions, respectively; and θ_x and θ_y are the components, in the xoz plane and the yoz plane respectively, of the tilt angle of the virtually corrected lens assembly relative to the photosensitive surface of the photosensitive chip.
In step 5), the resolution value for each focusing type of each identification pattern in the different fields of view is obtained from the measured defocus curves of step 1) at the virtual focusing position of the photosensitive chip, the virtual focusing position being the peak position of the simulated defocus curve calculated in step 4).
In step 5), the measured defocus curves are interpolated, and the resolution value for each focusing type of each identification pattern in the different fields of view is then obtained at the virtual focusing position.
In step 5), a cubic-spline interpolation algorithm is used to interpolate the measured defocus curves.
In step 1), the image data in the test optical path is sensed by a standard photosensitive chip or by the photosensitive chip of the photosensitive assembly to be assembled.
According to another aspect of the present application, there is also provided a camera module assembly method, comprising: step A) prejudging, based on any of the above methods for prejudging the calibration performance of a lens assembly to be assembled, whether the current lens assembly to be assembled is calibratable; if it is not calibratable, discarding the lens assembly to be assembled, and if it is calibratable, executing step B); and step B) assembling the lens assembly to be assembled, having passed the calibration-performance prejudgment, with the photosensitive assembly to obtain a complete camera module.
In step B), the assembly is realized by active calibration, and during active calibration the actual posture and position of the lens assembly to be assembled are pre-adjusted using the compensation parameters obtained in step A).
Compared with the prior art, the application has at least one of the following technical effects:
1. The method and device can quickly and accurately prejudge the calibration performance of a lens assembly to be assembled.
2. In some embodiments of the present application, an NG lens assembly that cannot be calibrated can be discarded based on the prejudgment result, so that actual active calibration (or other actual focusing correction) of the NG lens assembly does not occupy valuable production capacity in the focusing-assembly stage. This improves the focusing and assembly efficiency of the photosensitive assembly and the lens assembly.
3. In some embodiments of the application, an NG lens assembly that cannot be calibrated can be discarded based on the prejudgment result, avoiding the waste of photosensitive assemblies caused by NG lens assemblies and reducing production cost.
4. In some embodiments of the application, rotation of the target's knife-edge angle can be simulated based on an affine transformation, so that the SFR algorithm can accommodate more types of targets with different knife-edge angles, giving strong extensibility and compatibility.
5. In some embodiments of the application, a fast and stable axis-value simulation algorithm is provided: whether a product is an OK product is judged in advance by simulating the module's axis values; an OK product can then have its tilt adjusted (i.e., TILT adjusted) according to the prejudgment result, while NG products are intercepted in advance, improving the production efficiency of the camera module. Here an OK product can be understood as a qualified semi-finished product, and an NG product as an unqualified semi-finished product.
Drawings
Fig. 1 is a flowchart of a method for prejudging the calibration performance of a lens assembly to be assembled according to an embodiment of the present application;
FIG. 2 shows a schematic view of a target employed in one embodiment of the present application;
Figs. 3-5 illustrate measured defocus curves, fitted defocus curves, and secondarily fitted defocus curves in some embodiments of the present application;
FIG. 6 shows defocus curves for different focusing types and focusing modes in an embodiment of the present application;
FIG. 7 shows the peak positions of simulated defocus curves at center focus calculated based on the defocus curves of FIG. 6;
FIG. 8 shows a simulated defocus curve calculated based on the defocus curves of FIG. 6 at center focus after introducing a 0.03° tilt angle perturbation;
FIG. 9 shows a simulated defocus curve at center focus calculated based on the defocus curves of FIG. 6 after introducing a 5-micron position perturbation;
FIG. 10 shows defocus curves before interpolation in an embodiment of the present application;
FIG. 11 shows a defocus curve after interpolation in one embodiment of the present application;
FIG. 12 illustrates an example target in one embodiment of the present application;
fig. 13 shows a schematic diagram of the rotation of a single test block in the present application.
Detailed Description
For a better understanding of the present application, various aspects of the present application will be described in more detail with reference to the accompanying drawings. It should be understood that the detailed description is merely illustrative of exemplary embodiments of the present application and does not limit the scope of the present application in any way. Like reference numerals refer to like elements throughout the specification. The expression "and/or" includes any and all combinations of one or more of the associated listed items.
It should be noted that the expressions first, second, etc. in this specification are used only to distinguish one feature from another feature, and do not indicate any limitation on the features. Thus, a first body discussed below may also be referred to as a second body without departing from the teachings of the present application.
In the drawings, the thickness, size, and shape of an object have been slightly exaggerated for convenience of explanation. The figures are purely diagrammatic and not drawn to scale.
It will be further understood that the terms "comprises," "comprising," "includes," "including," "has," and/or "having," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Moreover, when a statement such as "at least one of" follows a list of features, it modifies the entire list rather than the individual elements in it. Furthermore, in describing embodiments of the present application, "may" means "one or more embodiments of the present application." Also, the term "exemplary" refers to an example or illustration.
As used herein, the terms "substantially," "about," and the like are used as terms of approximation rather than of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The invention is further described below with reference to the accompanying drawings and specific embodiments.
The application relates to a method for assembling a lens assembly and a photosensitive assembly into a camera module, and to a method, usable in camera module assembly, for prejudging the calibration performance of the lens assembly to be assembled. In one case, the lens assembly may include a motor and an optical lens mounted in the motor's carrier, which is controllably movable relative to the motor housing to realize functions such as auto-focus, optical zoom, or optical image stabilization. The photosensitive assembly generally includes a photosensitive chip and a circuit board, and may also be called a circuit board assembly. The motor base of the lens assembly can be attached to the surface of the circuit board, assembling the lens assembly and the photosensitive assembly into a complete camera module with functions such as auto-focus, optical zoom, or optical image stabilization. In another case, the lens assembly may have no motor, i.e., the optical lens alone constitutes the lens assembly; the bottom surface of the optical lens can serve as the attachment surface bonded to the circuit board surface, yielding a complete fixed-focus camera module. For convenience, the attachment surface of the lens assembly may be referred to herein as the second bonding surface. In some embodiments, the photosensitive assembly may further include a filter assembly comprising a lens holder and a filter mounted on it. The lens holder can be a molded holder formed directly on the circuit board surface, or formed in advance and then mounted on the circuit board. The bottom surface of the lens holder can sit on the circuit board surface, and its top surface serves as the attachment surface (or first bonding surface) to which the lens assembly is bonded; that is, the top surface of the lens holder is bonded to the motor base or to the bottom surface of the optical lens to form a complete camera module. Herein, calibration performance (calibratability) refers to the ability to bring the resolution of the lens assembly up to standard by adjusting its position and posture. If it is judged that the resolution of the lens assembly can reach the standard through position and posture adjustment, the lens assembly to be assembled is considered calibratable; if it is judged that it cannot, the lens assembly to be assembled is considered uncalibratable.
Fig. 1 shows a flowchart of a method for prejudging the calibration performance of a lens assembly to be assembled according to an embodiment of the present application. Referring to Fig. 1, the prejudging method of this embodiment includes the following steps S1-S5.
Step S1: place the lens assembly to be assembled in the test optical path and obtain its measured defocus curves. The test optical path is provided with a target (test chart) as the photographed object and with a standard photosensitive chip. The standard photosensitive chip receives the imaging of the target by the lens assembly to be assembled (specifically, of the several identification patterns in the target that represent particular fields of view), from which resolution data characterizing the imaging quality in the corresponding field of view are obtained. The resolution data may be, for example, SFR values; in other embodiments, they may be other parameters that characterize resolution, such as MTF values or TV-line values. The defocus curve is obtained by changing the axial distance between the optical lens and the photosensitive chip in the test optical path, measuring the resolution data of each identification pattern on the target at each axial distance, and then plotting one curve per identification pattern from the measured data. In other words, each identification pattern yields its own measured defocus curve, in which the abscissa can represent the axial distance and the ordinate the resolution data, such as the SFR value. The axial distance is the distance along the optical axis. In this embodiment, the lens assembly may be a motorized lens assembly, i.e., one provided with a motor adapted to move the optical lens at least along the optical axis, so the axial distance can be changed by the motor when measuring the defocus curve. In another embodiment, the axial distance can be changed by moving the standard photosensitive chip; in yet another embodiment, by moving the motor and the standard photosensitive chip simultaneously.
It should be noted that in step S1 any measured defocus curve actually consists of a series of discrete points, each representing an axial distance value and the corresponding measured resolution data. Fig. 2 shows a schematic view of a target used in one embodiment of the present application. In this embodiment, the identification patterns represent at least two fields of view, an edge field and a central field; the edge field may, for example, be the 0.8 field (other values are also possible). The edge field can be characterized by four identification patterns: the upper-left, upper-right, lower-left, and lower-right marks. In the test optical path, the target surface is substantially perpendicular to the optical axis of the optical lens. When the SFR algorithm is used to calculate resolution, the mark must have a certain tilt angle (see Fig. 2), commonly called the knife-edge angle in the industry. In this embodiment, each identification pattern can yield a different defocus curve depending on the focusing type. Herein, the focusing type is S-direction focusing, T-direction focusing, or average focusing (the average of the S and T directions). The S direction is the sagittal direction (the radial direction of the lens) and the T direction is the meridional direction (the tangential direction of the lens). S-direction focusing measures the resolution (e.g., SFR value) in the S direction during defocus; T-direction focusing measures it in the T direction; average focusing measures both and takes the mean of the S-direction and T-direction resolutions. For each focusing type, each identification pattern yields its own measured defocus curve. Thus, multiple measured defocus curves are obtained from the identification patterns of the edge and central fields; in subsequent steps, these can be used to simulate, purely by numerical calculation, partial posture and position adjustments of the lens assembly under test, so that its calibration performance can be prejudged without actually adjusting its posture and position. A sketch of such a measurement sweep follows.
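As a concrete reading of step S1, the sketch below steps the axial distance and records one discrete defocus curve per identification pattern. The station interface is not specified by the patent, so move_chip_z and measure_sfr are passed in as hypothetical callables standing in for the motion stage and the SFR measurement.

```python
import numpy as np
from typing import Callable, Dict, Sequence, Tuple

def sweep_defocus(z_positions_um: Sequence[float],
                  move_chip_z: Callable[[float], None],
                  measure_sfr: Callable[[str], float],
                  rois: Sequence[str] = ("CT", "LT", "RT", "LB", "RB"),
                  ) -> Dict[str, Tuple[np.ndarray, np.ndarray]]:
    """Step the axial distance and record one discrete measured defocus
    curve (axial position, SFR) per identification pattern."""
    samples = {roi: [] for roi in rois}
    for z in z_positions_um:
        move_chip_z(z)            # or drive the optical lens via its motor
        for roi in rois:
            samples[roi].append(measure_sfr(roi))   # S, T, or averaged SFR
    z_arr = np.asarray(z_positions_um, dtype=float)
    return {roi: (z_arr, np.asarray(v, dtype=float))
            for roi, v in samples.items()}

# e.g. curves = sweep_defocus(np.arange(-60, 61, 5), move_chip_z, measure_sfr)
# sweeps ±60 µm in 5 µm steps (step size chosen only for illustration).
```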
Step S2: fit the measured defocus curves to obtain the peak position of each fitted defocus curve. Because the measured data can be disturbed by many factors (e.g., environmental factors, tolerances of the measurement system, and manufacturing and assembly tolerances of the lens assembly itself), measured defocus curves show anomalies such as multiple peaks, one-sidedness, and jitter. Therefore, to improve the accuracy of the prejudgment for the lens assembly under test, the measured defocus curve can be fitted to obtain an analytic expression of the defocus curve, from which its peak position and peak value are derived for use in the subsequent steps.
Specifically, during the defocus sweep of step S1 (i.e., while changing the axial distance between the optical lens and the photosensitive chip in the test optical path), the obtained defocus curves exhibit multiple peaks, one-sidedness, jitter, and similar phenomena owing to the light-source environment, object distance, dynamic TILT of the motor, equipment vibration, and so on, which sometimes seriously affect the calculation of the peak position. To address this, this step adopts a targeted curve-fitting technique to accurately fit the true peak position of the curve and improve the defocus accuracy of the module.
In this embodiment, the curve-fitting technique is as follows. a) First, search for the maximum value in the measured defocus curve and the index value corresponding to that maximum. The index value represents an axial distance (i.e., the z-axis direction, the height direction of the optical lens); in this embodiment, the defocus curve is obtained by moving the photosensitive chip in fixed steps and recording the resolution of the captured image at a series of discrete axial positions, and the index value is the chip position after each step. Here, a maximum refers to the value at a peak position in the measured defocus curve. b) Then fit the curve with an N-th-order polynomial to obtain the fitted defocus curve. c) Then find the local-maximum points (maxima and their indices) of the fitted defocus curve. d) Then judge whether the peak can be calculated directly from the fitted defocus curve: if the difference between some local maximum of the fitted curve and the maximum of the measured curve is smaller than the quotient of the measured maximum and a preset difference-judgment coefficient M (equivalently, the measured maximum multiplied by a preset threshold ratio), that local maximum is directly taken as the peak of the fitted defocus curve, the corresponding axial position is the peak position, and step S3 is executed next; if no peak satisfying this condition is found, continue with substep e) to search for the peak and peak position. The difference-judgment coefficient M is an empirical value: if M is too small, fluctuations are mistaken for peaks; if M is too large, peaks of a multi-peak curve are missed; either error makes the final fitted peak position wrong. M therefore generally ranges from 6 to 12. In this embodiment, fluctuation refers to variation in the measured data caused by measurement tolerances of the test optical path and measurement system, while multi-peak refers to a defocus curve with several genuine peaks caused, for example, by the lens assembly's own manufacturing or assembly tolerances. In substep d), fluctuations introduced by measurement tolerance can be filtered out by a threshold (which may be tied to the measured maximum, e.g., the quotient of the measured maximum and the preset coefficient M) while genuine multiple peaks are retained. Substep e) is as follows.
e) When the peak cannot be calculated directly, fit the curve with a K-th-order polynomial to obtain a secondarily fitted defocus curve, and finally obtain the peak value and peak position from it, where K is less than N. In this embodiment, N may be 6 to 8 and K may be 4 to 5; both are integers. After the peak values and peak positions are determined, the astigmatism and field curvature can be calculated. In particular, the secondary fit may be a K-th-order polynomial fit based on the measured data in the neighborhood of the maximum position of the measured defocus curve; the neighborhood may be, for example, the maximum position plus three measured data points on each side, with points farther from the peak discarded. Secondary fitting on the neighborhood of the measured maximum restores the curve near the peak well, making the obtained peak position and peak value more accurate. A sketch of this procedure follows.
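Substeps a)-e) can be sketched with NumPy's polynomial routines as below. This is one illustrative reading, not the patent's code: stationary points of the N-th-order fit are taken from the roots of its derivative, the validity threshold follows the measured-maximum/M rule of substep d), and the fallback K-th-order refit uses the neighborhood of the measured maximum (assumed here to be three points on each side). When several valid peaks remain, the centroid reduction sketched further below can be applied; for brevity this version returns the highest.

```python
import numpy as np

def fitted_peak(z: np.ndarray, sfr: np.ndarray, n: int = 7, k: int = 4,
                m: float = 8.0):
    """Return (peak_position, peak_value) per substeps a)-e).
    n, k, m follow the ranges in the text (N=6..8, K=4..5, M=6..12)."""
    i_max = int(np.argmax(sfr))                    # a) measured maximum
    coeffs = np.polyfit(z, sfr, n)                 # b) N-th-order fit
    stationary = np.roots(np.polyder(coeffs))      # c) stationary points
    maxima = []
    for r in stationary:
        if abs(r.imag) > 1e-9 or not (z.min() <= r.real <= z.max()):
            continue
        p = r.real
        if np.polyval(np.polyder(coeffs, 2), p) < 0:       # a maximum
            maxima.append((p, np.polyval(coeffs, p)))
    # d) a maximum is a valid peak if it lies within sfr_max/m of the
    #    measured maximum (filters fluctuation, keeps genuine peaks)
    valid = [(p, v) for p, v in maxima if abs(v - sfr[i_max]) < sfr[i_max] / m]
    if valid:
        return max(valid, key=lambda t: t[1])
    # e) secondary K-th-order fit on the neighborhood of the measured maximum
    lo, hi = max(0, i_max - 3), min(len(z), i_max + 4)
    c2 = np.polyfit(z[lo:hi], sfr[lo:hi], min(k, hi - lo - 1))
    zz = np.linspace(z[lo], z[hi - 1], 201)
    vv = np.polyval(c2, zz)
    j = int(np.argmax(vv))
    return float(zz[j]), float(vv[j])
```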
Figs. 3-5 illustrate measured defocus curves, fitted defocus curves, and secondarily fitted defocus curves in some embodiments of the present application. Fig. 3 shows a measured defocus curve with a one-sided form, Fig. 4 one with fluctuations, and Fig. 5 one with multiple peaks. In Figs. 3-5, the measured defocus curve is labeled the original defocus curve, the fitted defocus curve is labeled the high-order fitted curve, and the secondarily fitted defocus curve is the peak-curve fit, i.e., the K-th-order polynomial fit over the neighborhood of the peak position of the measured data.
Further, in an embodiment of the present application, in substep d), when the fitted defocus curve shows multiple peaks, the average value and average position of all valid peaks may be calculated (for example, by a barycentric/centroid method) and taken as the peak value and peak position of the lens assembly's resolution curve. That is, multiple peak positions are converted into a single peak position and multiple peaks into a single peak, to simplify data processing in subsequent steps; a small sketch follows.
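A minimal sketch of that reduction follows. Weighting the peak positions by their SFR values is one plausible reading of the barycentric method mentioned above; the patent does not fix this detail.

```python
import numpy as np

def centroid_peak(peak_positions, peak_values):
    """Collapse several valid peaks into one equivalent peak: the position
    is a value-weighted (barycentric) mean, the value a plain average."""
    p = np.asarray(peak_positions, dtype=float)
    v = np.asarray(peak_values, dtype=float)
    pos = float(np.sum(p * v) / np.sum(v))   # value-weighted mean position
    val = float(np.mean(v))                  # average of the valid peaks
    return pos, val
```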
Further, in an embodiment of the present application, in substep d), when the number of valid peaks in the (high-order) fitted defocus curve is not greater than 1 (covering both cases: exactly one valid peak point or none), peak-curve fitting is performed, and the peak value and peak position are obtained from the fitted peak curve. This embodiment differs from the previous one in that, in substep d), when the fitted defocus curve does not show multiple peaks, step e) is also executed and the final peak position and value are obtained from the peak curve. Here, a local-maximum point is a valid peak point (or simply a valid peak) if the difference between that maximum of the fitted curve and the maximum of the measured curve is smaller than the quotient of the measured maximum and the preset difference-judgment coefficient M.
Step S3: determine the focusing type, focusing mode, and compensation parameters used for focusing correction of the lens assembly. The focusing type is as described in step S1. The focusing mode is either center focusing or edge focusing; edge focusing may, for example, focus on the upper-left, upper-right, lower-left, or lower-right mark, or on the average of the four, while center focusing uses the identification pattern of the central field of view. The compensation parameters are parameters for adjusting the posture and position of the lens assembly. Posture adjustment is tilt adjustment (TILT adjustment). Position adjustment here mainly means adjustment, i.e., compensation, of the axial position. Note that the compensation parameters determined in this step are simulated adjustments used for numerical calculation, not actual adjustments of the lens assembly and its test optical path.
Fig. 6 shows defocus curves for different focusing types and focusing modes in an embodiment of the present application. These may be defocus curves fitted from the corresponding measured data; in Fig. 6 they are labeled pre-simulation defocus curves to distinguish them from the simulated (virtually corrected) defocus curves discussed below. Referring to Fig. 6, the peak positions of the measured defocus curves of the upper-left, upper-right, lower-left, and lower-right marks correspond to their sharpest imaging positions (axial positions; the abscissa Pos in Fig. 6). From these four positions the image-plane tilt of sharp imaging can be obtained, and from it the optical-axis tilt of the lens assembly. In this embodiment, the compensation target is to level the image-plane tilt, i.e., to bring the optical axis vertical; the tilt adjustment amount in the compensation parameters therefore equals the image-plane tilt obtained from the peak positions of the measured defocus curves, and the tilt adjustment direction is opposite to the image-plane tilt direction, as in the sketch below.
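A sketch of deriving the tilt compensation from the four corner peak positions, using the corner spacings W and H exactly as in conditions 1 and 2 given later. The sign convention (compensation equal in magnitude and opposite in direction to the measured tilt) follows the paragraph above; everything else is illustrative.

```python
import math

def tilt_compensation(pLT: float, pRT: float, pLB: float, pRB: float,
                      W: float, H: float):
    """Image-plane tilt implied by the four corner peak positions and the
    leveling compensation. Peak positions and the corner spacings W, H
    share one length unit (e.g. micrometres); angles are in radians."""
    theta_y = math.atan(((pLT + pRT) - (pLB + pRB)) / W)
    theta_x = math.atan(((pLT + pLB) - (pRT + pRB)) / H)
    # Compensation: equal magnitude, opposite direction to the measured tilt.
    return -theta_x, -theta_y
```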
On the other hand, the actual measurement of the lens assembly in step S1 may itself suffer an axial movement deviation due to various factors (light-source environment, object distance, motor dynamic TILT, equipment vibration, etc.); that is, the peak position of the measured defocus curve may not reflect the optimal focus position of the lens assembly to be assembled. Axial position compensation can therefore also serve as one of the compensation parameters in this step. Its compensation amount and direction can be determined manually, or identified and set by the equipment based on artificial intelligence (AI).
Similarly, in another embodiment of the present application, the amount and direction of the tilt compensation of the lens assembly may also be determined manually, or identified and set by the equipment based on artificial intelligence (AI).
Further, in the above embodiments, a human-machine interface may be provided at the control center of the equipment, containing interactive widgets (e.g., input boxes with prompt text) for entering the tilt compensation amount and direction, and similar widgets for entering the axial position compensation amount and direction, so that the compensation parameters can be input manually.
Step S4: on the premise that the astigmatism, field curvature, and peak value of the lens assembly itself remain unchanged (note that this peak value refers to the resolution peak value of each defocus curve, not the peak position), calculate the peak position of the simulated defocus curve from each measured defocus curve according to the determined focusing type, focusing mode, and compensation parameters. The simulated defocus curve is the defocus curve that the imaging system testing the lens assembly would measure if, under the determined focusing type and mode, the tilt angle and axial position of the lens assembly were adjusted according to the determined compensation parameters. In this step the simulated defocus curve is a virtual curve: if the lens assembly were actually adjusted per the compensation parameters and then measured through a defocus sweep, a corresponding defocus curve would result; here, however, the posture and position adjustment is not actually performed but simulated by numerical calculation (this numerically simulated process is sometimes called virtual correction herein, to distinguish it from actual adjustment of the lens assembly's posture and position). The peak position of the virtually corrected simulated defocus curve can be obtained directly by numerical calculation. The simulated defocus curve is a simulation of the actual defocus curve, and its peak position simulates the actual peak position. The peak position represents the sharpest imaging position (an axial position) under the set focusing type and mode, i.e., the virtually corrected focusing position.
Specifically, the field curvature and astigmatism are calculated as follows:
CF_s = (pLT_s + pRT_s + pLB_s + pRB_s)/4 - pCT_s
CF_t = (pLT_t + pRT_t + pLB_t + pRB_t)/4 - pCT_t
CF = (pLT + pRT + pLB + pRB)/4 - pCT
where CF denotes field curvature; pLT, pRT, pLB, and pRB denote the peak positions corresponding to the upper-left, upper-right, lower-left, and lower-right identification patterns; pCT denotes the peak position corresponding to the central field of view; the subscript s denotes S-direction focusing, the subscript t denotes T-direction focusing, and no subscript denotes average focusing.
XS_LT = pLT_s - pLT_t
XS_LB = pLB_s - pLB_t
XS_RT = pRT_s - pRT_t
XS_RB = pRB_s - pRB_t
where XS denotes astigmatism; XS_LT, XS_RT, XS_LB, and XS_RB are the astigmatism of the upper-left, upper-right, lower-left, and lower-right identification patterns, respectively; pLT, pRT, pLB, and pRB are the peak positions corresponding to those patterns; the subscript s denotes S-direction focusing and the subscript t denotes T-direction focusing.
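Transcribed directly into Python, the formulas above become the following trivial helpers (corner names as dictionary keys are an illustrative convention):

```python
def field_curvature(pLT: float, pRT: float, pLB: float, pRB: float,
                    pCT: float) -> float:
    """CF = (pLT + pRT + pLB + pRB)/4 - pCT; pass S-, T- or averaged
    peak positions to obtain CF_s, CF_t or the average CF."""
    return (pLT + pRT + pLB + pRB) / 4.0 - pCT

def astigmatism(peaks_s: dict, peaks_t: dict) -> dict:
    """XS per corner: S-direction peak position minus T-direction peak
    position; the dicts map corner names to peak positions."""
    return {c: peaks_s[c] - peaks_t[c] for c in ("LT", "RT", "LB", "RB")}
```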
Further, the virtually corrected focusing positions are calculated as follows. Let (pLT, pRT, pLB, pRB) be the peak positions corresponding to the virtually corrected upper-left, upper-right, lower-left, and lower-right identification patterns. The virtual correction satisfies the following conditions:
Condition 1: (pLT + pRT) - (pLB + pRB) = W·tanθ_y
Condition 2: (pLT + pLB) - (pRT + pRB) = H·tanθ_x
Condition 3: (pLT + pRT + pLB + pRB)/4 = pCT + CF
Condition 4: the peak position of the corner (one of upper left, upper right, lower left, lower right) whose resolution differs least from the central field of view remains unchanged during the virtual correction; that is, its peak position is the same before and after the virtual correction. Here the difference in resolution between the upper-left corner and the central field of view can be expressed as abs(pLT - (pCT + CF)), where abs() denotes the absolute value; the other three corners are expressed analogously, e.g., abs(pRT - (pCT + CF)) for the upper right, abs(pLB - (pCT + CF)) for the lower left, and abs(pRB - (pCT + CF)) for the lower right.
Here W and H are the center-to-center distances between adjacent edge-field identification patterns in the x-axis and y-axis directions, respectively, where the x-axis and y-axis are two mutually perpendicular coordinate axes perpendicular to the z-axis. (θ_x, θ_y) is the angle compensation value, i.e., the tilt angle of the lens assembly after virtual correction (virtual tilt adjustment). The tilt angle of the lens assembly is its tilt relative to the photosensitive surface of the photosensitive chip; when a standard photosensitive chip is used, the horizontal plane can be regarded as the photosensitive surface. θ_x and θ_y are the components of the lens assembly's tilt angle in the xoz plane and the yoz plane, respectively. pCT is the peak position of the original central field of view; in this embodiment, the peak position of the central field of view is assumed constant during the virtual correction. CF is the field curvature, which also remains constant during the virtual correction, as does the astigmatism.
The virtually corrected four-corner peak positions (pLT, pRT, pLB, pRB) satisfying all four conditions simultaneously can be solved by computer numerical simulation, e.g., as sketched below.
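Conditions 1-3 are linear in the four unknown peak positions and condition 4 pins one of them, so the four conditions form a small linear system. The sketch below solves it with NumPy; choosing the fixed corner by the abs(p - (pCT + CF)) criterion and treating (θ_x, θ_y) as the post-correction tilt follow the text, while the rest is illustrative.

```python
import numpy as np

CORNERS = ("LT", "RT", "LB", "RB")

def virtual_correction(peaks: dict, pCT: float, CF: float, W: float,
                       H: float, theta_x: float, theta_y: float) -> dict:
    """Solve conditions 1-4 for the virtually corrected corner peak
    positions; `peaks` maps corner names to the measured peak positions,
    and the tilt angles are in radians."""
    # Condition 4: fix the corner whose peak differs least from pCT + CF.
    fixed = min(CORNERS, key=lambda c: abs(peaks[c] - (pCT + CF)))
    a = np.zeros((4, 4))
    b = np.zeros(4)
    # Unknown order: pLT, pRT, pLB, pRB.
    a[0] = [1, 1, -1, -1]
    b[0] = W * np.tan(theta_y)                     # condition 1
    a[1] = [1, -1, 1, -1]
    b[1] = H * np.tan(theta_x)                     # condition 2
    a[2] = [0.25, 0.25, 0.25, 0.25]
    b[2] = pCT + CF                                # condition 3
    a[3, CORNERS.index(fixed)] = 1.0
    b[3] = peaks[fixed]                            # condition 4
    solution = np.linalg.solve(a, b)
    return dict(zip(CORNERS, solution))
```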
Fig. 7 shows the peak positions of the simulated defocus curves at center focus calculated from the defocus curves of Fig. 6. Fig. 8 shows a simulated defocus curve calculated from the defocus curves of Fig. 6 at center focus after introducing a 0.03° tilt perturbation, where the 0.03° perturbation characterizes adjusting the tilt angle of the lens assembly by 0.03°.
Further, in a variant embodiment, when the virtual correction involves an adjustment of the axial position, a field curvature compensation value (Δ_S, Δ_T) is introduced and the field curvature is corrected as follows:
CF_S = CF_S + Δ_S;  CF_T = CF_T + Δ_T;  CF = CF + (Δ_S + Δ_T)/2
Then the corrected S-direction field curvature CF_S, the corrected T-direction field curvature CF_T, and the corrected average field curvature CF are substituted into the equation set constructed from the four conditions, and the virtually corrected focusing positions, i.e., the virtually corrected four-corner peak positions (pLT, pRT, pLB, pRB), are solved. In this variant embodiment, the field curvature compensation value (Δ_S, Δ_T) can be obtained from prior knowledge. For example, when the virtual correction includes an axial position adjustment of the lens assembly, the software system may display a human-machine interface prompting the user to input the amount of the axial position adjustment and the corresponding field curvature compensation value (Δ_S, Δ_T). The equation set constructed from the four conditions is then solved according to the operator-input axial adjustment amount and the corresponding compensation value (Δ_S, Δ_T), yielding the virtually corrected four-corner peak positions (pLT, pRT, pLB, pRB).
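Continuing the earlier sketch, the compensation could be applied to the field curvature values before re-solving the four-condition system; the helper below is an illustrative assumption, not the patent's implementation:

```python
def compensate_field_curvature(CF_S, CF_T, CF, delta_S, delta_T):
    """Apply the field curvature compensation (Δ_S, Δ_T) that accompanies
    an axial position adjustment. Only the values used in the calculation
    change; the lens assembly's real field curvature does not."""
    return CF_S + delta_S, CF_T + delta_T, CF + (delta_S + delta_T) / 2.0

# Usage (names as in the preceding sketch):
# CF_S, CF_T, CF = compensate_field_curvature(CF_S, CF_T, CF, d_S, d_T)
# peaks = virtual_correction_peaks(p_meas, pCT, CF, theta_x, theta_y, W, H)
```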
Fig. 9 shows the simulated defocus curves at center focus calculated based on the defocus curves of Fig. 6 after introducing a 5-micron position perturbation, where the 5-micron perturbation characterizes a virtual 5-micron movement of the lens assembly or the photosensitive chip along the z-axis. It should be noted that, in the above variant embodiment, when the virtual correction includes an adjustment of the axial position, the introduced field curvature compensation value is used only for calculation; it does not mean that the field curvature of the lens assembly under test itself changes. The field curvature of the lens assembly is determined by its own physical factors, such as lens shape, material, surface type, and assembly tolerances between lenses, and is generally not changed by adjusting the position or posture (i.e., tilt angle) of the lens assembly. Similarly, the astigmatism and the resolving power peak of the lens assembly are not changed by adjusting its position and posture.
In step S5, based on the peak position of the simulated defocus curve calculated in step S4 (i.e., the virtually corrected focusing position), the sharpness on each field-of-view axis corresponding to each identification pattern under the determined compensation parameters (referred to simply as the axis value) is calculated. The axis value is the resolving power of each field-of-view axis at the virtually corrected focusing position. The resolving power values for each focusing type of each identification pattern can be obtained from the measured defocus curves of step S1. After the axis values of the field-of-view axes are obtained, whether the imaging quality of the lens assembly to be assembled reaches the standard can be judged from these axis values.
Further, in an embodiment of the present application, interpolation may first be performed on each measured defocus curve of step S1, and the axis value corresponding to each measured defocus curve is then found according to the virtually corrected focusing position obtained in step S4. The interpolation may be implemented, for example, with a cubic spline interpolation algorithm. Step S1 actually yields a series of discrete data points sampled at a certain step size during the defocus scan, and the axial position corresponding to a field-of-view axis in step S5 may fall between the axial positions of two discrete data points; if the distance between those axial positions is large (i.e., the defocus scan step size is large), the error of the resulting axis value increases. Fig. 10 shows a defocus curve before interpolation in an embodiment of the present application, with the peak position of the simulated defocus curve marked. Fig. 11 shows the defocus curve after interpolation, with the peak position of the simulated defocus curve likewise marked. Referring to Fig. 10 and Fig. 11, in this embodiment the relatively sparse discrete data set is converted by interpolation into a relatively dense one, reducing the axial distance between adjacent discrete data points and thereby reducing or eliminating the axis value calculation error. On the other hand, since an interpolation algorithm can be used to suppress this error, the axis value can still be computed with a small error even when the defocus test (i.e., the defocus scan) of step S1 is completed with a larger step size, ensuring the accuracy and stability of the calibratability prediction of the lens assembly to be assembled. Meanwhile, since the defocus test of step S1 can be shortened, this also helps to speed up the calibratability prediction of the lens assembly to be assembled.
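A minimal sketch of this densify-then-read-off step, assuming SciPy's cubic spline; the function and parameter names are illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def axis_value(z_positions, sfr_values, z_focus):
    """Interpolate a sparsely sampled defocus curve with a cubic spline
    and read off the axis value at the virtually corrected focus position.

    z_positions, sfr_values: discrete data from the step-S1 defocus scan.
    z_focus: peak position of the simulated defocus curve from step S4.
    """
    spline = CubicSpline(z_positions, sfr_values)
    return float(spline(z_focus))

# Example: axis_value(np.arange(-50.0, 55.0, 5.0), measured_sfr, -12.3)
```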
Further, in one embodiment of the present application, the resolving power is characterized by an SFR value. However, different test items place different requirements on the test target, so different SFR algorithms exist for different targets. As customer requirements keep changing, there is a need to calculate SFR values at various target knife edge angles; the conventional SFR algorithm generally requires a knife edge angle of 3-8 degrees, and its calculation accuracy degrades outside that range. In view of this, the present embodiment provides an SFR algorithm based on angle rotation: an angle rotation step is introduced on top of the conventional SFR algorithm, the knife edge angle of a test block is rotated into the 3-8 degree range without changing the sharpness of the test block, and the SFR calculation is then performed with the conventional SFR algorithm, thereby obtaining an SFR value at an arbitrary knife edge angle. FIG. 12 illustrates an example target in one embodiment of the present application. Specifically, an identification pattern in the target is typically a substantially rectangular block and may therefore be referred to as a test block; the edges of the test block are inclined at a certain angle with respect to the edges of the target, generally called the knife edge angle of the target. In this embodiment, the knife edge angles of the test blocks at the four corner positions of the target are outside the 3-8 degree range, so if the software system of the assembly equipment adopts the conventional SFR algorithm, whose applicable knife edge angle is 3-8 degrees, the test picture of the target (also referred to simply as the target image) cannot be subjected to SFR calculation directly. To solve this problem, in this embodiment step S1 may include: first, obtaining a rotation matrix of the test block by affine transformation, where the rotation matrix rotates the original knife edge angle of the test block on the target of the test optical path to a target knife edge angle lying within the angle range accepted by the SFR algorithm; then, converting the original target image into a target image with the target knife edge angle based on the rotation matrix (in this step, the rotated image coordinates may be obtained by interpolating the rotation coordinates with cubic polynomial interpolation); and finally, applying the SFR algorithm to the target image with the target knife edge angle to obtain the SFR value. Typically, the target is a transparent rigid plastic sheet printed with special patterns; a light box above the target sheet illuminates it from above, and an imaging system (which may consist of the lens assembly to be assembled and a standard photosensitive chip for testing) photographs the target sheet from below, bottom to top. In this embodiment, the target itself does not move: after the imaging system photographs the target sheet, the captured image information is adjusted by the algorithm so that the original knife edge angle of the test block on the original target is rotated to the target knife edge angle adapted to the SFR algorithm.
This method can therefore accommodate different target knife edge angles, improving the compatibility of the SFR algorithm while also improving its accuracy. It should be noted that, in this embodiment, each test block in the target image captured in the test optical path can be individually rotated to the target knife edge angle (e.g., a knife edge angle within 3-8 degrees). The rotation matrix may be derived from the principles of affine transformation. Fig. 13 shows a schematic diagram of the rotation of a single test block in the present application. Referring to Fig. 13, for a single test block, a coordinate system transformation is first performed, moving the coordinate origin o to the center of the test block. Each position point on the test block is then rotated by an angle θ about the new origin; this rotation can be realized by affine transformation, i.e., the original coordinates of each position point are mapped to new coordinates. Finally, the rotated test block is fused into a new target image, in which the knife edge angle of every rotated test block meets the requirements of the SFR algorithm. During the rotation, only the position coordinates of each point change; the image data values at the point (e.g., values representing brightness or color) are unchanged.
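As a sketch of the per-block rotation, assuming OpenCV as the image library (the patent names no library; the cv2 calls and all parameter names below are assumptions):

```python
import cv2

def rotate_test_block(block_img, original_angle_deg, target_angle_deg=5.0):
    """Rotate a single test block about its own center so that its knife
    edge angle lands in the 3-8 degree range accepted by a conventional
    SFR algorithm. Cubic interpolation of the rotated coordinates keeps
    the block's sharpness essentially unchanged; only positions move.
    """
    h, w = block_img.shape[:2]
    center = (w / 2.0, h / 2.0)                      # new coordinate origin o
    delta = target_angle_deg - original_angle_deg    # rotation angle θ
    M = cv2.getRotationMatrix2D(center, delta, 1.0)  # affine rotation matrix
    return cv2.warpAffine(block_img, M, (w, h), flags=cv2.INTER_CUBIC)
```

The rotated block would then be fused back into the new target image before the conventional SFR calculation is applied.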
Further, in the above embodiment, the central field of view is characterized by one identification pattern (i.e., test block) located at the center, and the fringe field of view by four identification patterns located at the upper left, upper right, lower left, and lower right, respectively. It should be noted, however, that in some variant embodiments of the present application the fringe field of view may be characterized by a greater number of identification patterns, for example by eight identification patterns evenly distributed over the field-of-view ring. In other variant embodiments, the fringe field of view may instead be characterized by four identification patterns located at the top, bottom, left, and right. In still other variants, more fields of view may be provided on the target; for example, a central field of view, a 0.6 field of view, and a 0.8 field of view may be provided simultaneously.
Further, in some embodiments of the present application, the method for predicting the calibratability of a lens assembly to be assembled may also be applied directly in the active calibration process. In this embodiment, in step S1, the standard photosensitive chip in the test optical path is replaced by the photosensitive chip of the photosensitive assembly to be assembled; that is, the image resolving power data measured during the defocus scan is the data output by the actual photosensitive assembly to be assembled. Steps S2-S5 may be the same as in the previous embodiments and are not repeated here.
Further, according to an embodiment of the present application, there is provided a camera module assembling method based on the above calibratability prediction method, the method comprising:
Step A: based on the above method for predicting the calibratability of a lens assembly to be assembled, prejudging whether the current lens assembly to be assembled is calibratable; if it is not, discarding the lens assembly to be assembled; if it is, executing step B.
Step B: assembling the lens assembly to be assembled that has passed the calibratability prejudgment with the photosensitive assembly to obtain a complete camera module. The assembling process may be implemented based on active calibration, wherein the posture and position of the lens assembly to be assembled may be pre-adjusted using the data obtained during its prejudgment, with the adjustment amount and direction consistent with the compensation parameters of the foregoing step 3). Note that this pre-adjustment is an actual physical adjustment of the posture and position of the lens assembly, not a virtual calculation. After the pre-adjustment, active calibration continues until the relative positions of the lens assembly and the photosensitive assembly giving the best imaging quality are determined, and the two are then assembled at those relative positions (for example, by bonding or welding) to obtain a complete camera module.
That is, in this embodiment, high-accuracy axis-value simulation makes it possible to eliminate defective modules before actual module testing and production, preventing defective modules from occupying production time and consuming production materials; this greatly improves module production efficiency and reduces module manufacturing cost. If an NG (defective) module participates in production, it wastes production materials on the one hand; on the other hand, after the module has been processed (for example, by dispensing), its internal components are difficult or impossible to recover, so parts that could otherwise have been reused are wasted.
In addition, it should be noted that in some embodiments of the present application, when the measured defocus curves have high measurement accuracy, step S2 may be omitted; that is, the peak position and the corresponding peak value may be obtained directly from the measured defocus curve. The peak value and peak position are used in the calculations of the subsequent steps, which simulate a posture adjustment (i.e., tilt adjustment) of the lens assembly and compute the simulated defocus curve after that adjustment. In some embodiments, an axial position adjustment of the lens assembly may also be simulated, and the simulated defocus curve after the axial adjustment, or after both the tilt and axial adjustments, is computed. The basis of the calculation may include the original measured defocus curve of each identification pattern and the peak position of the fitted defocus curve of each identification pattern. The peak position represents the clearest imaging position of the corresponding identification pattern; based on this position and the compensation parameters, the clearest imaging position of each identification pattern after virtual correction of the lens assembly, i.e., the peak position of the virtually corrected simulated defocus curve, can be found. In this way, the defocus curve that the lens assembly would exhibit if its posture (or posture and axial position) were corrected according to the compensation parameters can be estimated by simulation, without actually moving the lens assembly, so that the calibratability of the lens assembly can be predicted.
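For the fitted peak position itself (described in the claims below as an N-th order polynomial fit followed by a peak search), a minimal NumPy sketch could look as follows; the dense-grid peak search and the default order are illustrative assumptions:

```python
import numpy as np

def fitted_peak_position(z, sfr, order=7):
    """Fit a measured defocus curve with an N-th order polynomial and
    return the axial position of its peak, i.e. the clearest imaging
    position. z: axial scan positions; sfr: measured resolving power.
    """
    coeffs = np.polyfit(z, sfr, order)
    z_dense = np.linspace(z.min(), z.max(), 2001)  # dense evaluation grid
    fitted = np.polyval(coeffs, z_dense)
    return z_dense[int(np.argmax(fitted))]
```

In practice the fitted peak would still be checked against the measured maximum (as in step 24) of claim 6) before being accepted.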
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (19)

1. A method for predicting the calibrability of a lens assembly to be assembled, comprising:
1) placing the lens assembly to be assembled in a test light path, and acquiring actually measured defocusing curves of a plurality of identification patterns on a central view field and an edge view field of the lens assembly to be assembled;
3) determining a focusing correction compensation parameter of the lens assembly; wherein the compensation parameters comprise parameters characterizing pose and/or position adjustments of the lens assembly;
4) under the premise of assuming that the astigmatism, the field curvature and the peak value of the lens assembly are not changed, calculating the peak position of the simulated defocus curve based on each actually measured defocus curve according to the determined compensation parameters; wherein the simulated defocus curve is the defocus curve obtained by adjusting the tilt angle and the axial position of the lens assembly according to the determined compensation parameters; and
5) calculating the definition on each view field axis corresponding to each identification pattern under the determined compensation parameter based on the peak position of the simulated defocusing curve calculated in the step 4), and further judging whether the imaging quality of the lens assembly to be assembled reaches the standard.
2. The calibratability prediction method according to claim 1, further comprising a step 2) performed between the step 1) and the step 3),
the step 2) comprises the following steps: fitting the actually measured defocusing curves of the plurality of identification patterns respectively, and then acquiring the resolution peak position corresponding to each identification pattern according to the fitted defocusing curve of each identification pattern;
the step 3) further comprises the following steps: determining a focusing type and a focusing mode used by the lens assembly for focusing correction, wherein the focusing type comprises S focusing, T focusing or average focusing, and the focusing mode comprises center focusing or edge focusing;
in the step 4), the simulated defocus curve is the defocus curve obtained by adjusting the tilt angle and the axial position of the lens assembly according to the determined compensation parameters under the determined focusing type and focusing mode.
3. The calibratability prediction method according to claim 2, wherein, in the actually measured defocus curve and the simulated defocus curve, the resolving power is represented by an SFR value; the step 1) further comprises: when the knife edge angle of the target plate of the test optical path is not suitable for the SFR algorithm, rotating the knife edge angle based on affine transformation to match the SFR algorithm, and then measuring the actually measured defocus curve.
4. The calibratability prediction method according to claim 3, wherein the step 1) comprises the sub-steps of:
11) firstly, obtaining a rotation matrix of the identification pattern by affine transformation, wherein the rotation matrix can rotate the original knife edge angle of the current target plate to a target knife edge angle, the target knife edge angle lying in the angle range corresponding to the SFR algorithm;
12) then converting the original target plate image obtained by the test light path into a target plate image with the target knife edge angle based on the rotation matrix; and
13) applying the SFR algorithm to the target plate image with the target knife edge angle to obtain an SFR value, thereby obtaining the actually measured defocus curve.
5. The calibratability prediction method according to claim 1, wherein, in step 1), the fringe field of view is characterized by four identification patterns located at top left, top right, bottom left, and bottom right.
6. The calibratability prediction method according to claim 2, wherein the method for obtaining the peak position of the fitted defocus curve in step 2) comprises the following steps:
21) searching a maximum value in the actually measured defocusing curve and an axial position corresponding to the maximum value;
22) fitting the actually measured defocusing curve by using an N-th-order polynomial to obtain a fitted defocusing curve, wherein N is an integer;
23) then searching each maximum value point of the fitted defocusing curve and the axial position corresponding to the maximum value point; and
24) when the difference between a certain maximum value in the fitted defocus curve and the maximum value of the actually measured defocus curve is smaller than the product of the maximum value of the actually measured defocus curve and a preset threshold, directly judging that maximum value to be the peak of the fitted defocus curve, thereby obtaining the peak position of the fitted defocus curve.
7. The calibratability prediction method according to claim 6, wherein the method of deriving the peak position of the fitted defocus curve further comprises: when the peak value of the fitted defocus curve cannot be determined in the step 24), executing the following steps:
25) fitting the actually measured defocusing curve again by using a K-th-order polynomial to obtain a defocusing curve after secondary fitting, and finally obtaining a peak value and a peak value position according to the defocusing curve after secondary fitting; wherein K is less than N, N is 6, 7 or 8, and K is 4 or 5.
8. The calibratability prediction method according to claim 6, wherein in step 3), an image plane tilt angle is obtained according to a clear imaging position corresponding to each identification pattern of the fringe field of view, and then a tilt angle compensation amount and a compensation direction for adjusting the image plane to a horizontal state are calculated and set as the compensation parameters.
9. The calibratability prediction method according to claim 6, wherein, in the step 3), the compensation parameter is set according to an artificial intelligence algorithm or a human-computer interaction interface is provided and a user is prompted to input the compensation parameter.
10. The calibratability prediction method according to claim 6, wherein, in the step 24), when the fitted defocus curve has a plurality of peaks, an average of the plurality of peaks is calculated according to a centroid method to convert the plurality of peak positions into a single peak position;
when the fitted defocus curve has only one peak or no peak is found, executing the following steps:
25) fitting the actually measured defocusing curve again by using a K-th-order polynomial to obtain a defocusing curve after secondary fitting, and finally obtaining a peak value and a peak value position according to the defocusing curve after secondary fitting; wherein K is less than N, N is 6, 7 or 8, and K is 4 or 5.
11. The calibratability prediction method according to claim 6, wherein, in the step 3), the compensation parameters further include an axial position compensation amount and a compensation direction, and a tilt angle compensation amount and a compensation direction.
12. The calibratability prediction method according to claim 6, wherein, in step 1), the marginal field of view is characterized by four identification patterns at four corners, i.e., top left, top right, bottom left, and bottom right;
the step 4) comprises the following substeps:
41) assuming that the tilt adjustment of the lens assembly does not change the astigmatism, the field curvature or the image resolving power peak of the optical test system, constructing an equation set based on the following four conditions:
Condition 1: (pLT + pRT) − (pLB + pRB) = W·tan(θ_y)
Condition 2: (pLT + pLB) − (pRT + pRB) = H·tan(θ_x)
Condition 3: (pLT + pRT + pLB + pRB)/4 = pCT + CF
Condition 4: for the corner, among the four corners, whose resolving power differs least from that of the central field of view, the peak position of that corner remains unchanged during the virtual correction;
wherein CF represents the field curvature; pLT, pRT, pLB and pRB respectively represent the peak positions corresponding to the upper-left, upper-right, lower-left and lower-right identification patterns; pCT represents the peak position corresponding to the central field of view; W and H respectively represent the distances between the centers of adjacent identification patterns of the fringe field of view in the x-axis direction and in the y-axis direction; and θ_x and θ_y are respectively the component in the xoz plane and the component in the yoz plane of the tilt angle, after virtual correction, of the lens assembly relative to the photosensitive surface of the photosensitive chip.
13. The calibratability prediction method according to claim 6, wherein in the step 5), the resolving power values of each focusing type of each identification pattern of different fields of view are obtained from the measured defocus curve of the step 1) based on the virtual focusing position of the photosensitive chip, where the virtual focusing position is the peak position of the simulated defocus curve calculated in the step 4).
14. The calibratability prediction method according to claim 13, wherein in step 5), the actually measured defocus curve is interpolated, and then a resolution value of each focusing type of each identification pattern of different fields of view is obtained based on the virtual focusing position.
15. The calibratability prediction method according to claim 14, wherein, in the step 5), the actually measured defocus curve is interpolated by using a cubic spline interpolation algorithm.
16. The calibratability prediction method according to claim 1, wherein in the step 1), the test optical path senses image data through a standard photosensitive chip or a photosensitive chip in a photosensitive assembly to be assembled.
17. The calibratability prediction method according to claim 7 or 10, wherein the step 25) further comprises: selecting the measured data points within a neighborhood of the maximum value from the measured defocus curve, and then performing the secondary fitting on the measured defocus curve with a K-th order polynomial based on those data points.
18. A camera module assembly method is characterized by comprising the following steps:
step A) based on the method for predicting the calibratability of a lens assembly to be assembled according to any one of claims 1 to 17, prejudging whether the current lens assembly to be assembled is calibratable; if it is not, discarding the lens assembly to be assembled; and if it is, executing step B);
and B) assembling the lens assembly to be assembled and the photosensitive assembly which are pre-judged through the calibration performance to obtain a complete camera module.
19. The camera module assembly method according to claim 18, wherein in the step B), the assembly is performed based on active calibration, and during the active calibration the actual posture and position of the lens assembly to be assembled are pre-adjusted using the compensation parameters obtained in the step A).
CN202011417294.XA 2020-12-07 2020-12-07 Method for prejudging calibration performance of lens assembly to be assembled and method for assembling camera module Pending CN114598859A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011520162.XA CN114598860A (en) 2020-12-07 2020-12-07 Method for measuring defocusing curve of lens assembly
CN202011417294.XA CN114598859A (en) 2020-12-07 2020-12-07 Method for prejudging calibration performance of lens assembly to be assembled and method for assembling camera module

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011417294.XA CN114598859A (en) 2020-12-07 2020-12-07 Method for prejudging calibration performance of lens assembly to be assembled and method for assembling camera module

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202011520162.XA Division CN114598860A (en) 2020-12-07 2020-12-07 Method for measuring defocusing curve of lens assembly

Publications (1)

Publication Number Publication Date
CN114598859A true CN114598859A (en) 2022-06-07

Family

ID=81802843

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011520162.XA Pending CN114598860A (en) 2020-12-07 2020-12-07 Method for measuring defocusing curve of lens assembly
CN202011417294.XA Pending CN114598859A (en) 2020-12-07 2020-12-07 Method for prejudging calibration performance of lens assembly to be assembled and method for assembling camera module

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011520162.XA Pending CN114598860A (en) 2020-12-07 2020-12-07 Method for measuring defocusing curve of lens assembly

Country Status (1)

Country Link
CN (2) CN114598860A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4310568A1 (en) * 2022-07-18 2024-01-24 Aptiv Technologies Limited Lens alignment method, lens alignment apparatus, lens alignment software, and vehicle camera
CN116300129A (en) * 2023-03-01 2023-06-23 浙江大学 Optical lens centering device, image acquisition device and method
CN116300129B (en) * 2023-03-01 2023-09-26 浙江大学 Optical lens centering device, image acquisition device and method
CN115951500A (en) * 2023-03-15 2023-04-11 北京亮亮视野科技有限公司 AR module assembling method based on active alignment technology
CN116197652A (en) * 2023-04-27 2023-06-02 江西联益光学有限公司 Automatic assembling method, assembling machine and assembling system for split lens
CN116197652B (en) * 2023-04-27 2023-09-01 江西联益光学有限公司 Automatic assembling method, assembling machine and assembling system for split lens
CN116372565A (en) * 2023-06-05 2023-07-04 江西联益光学有限公司 Automatic assembling method of split lens
CN116372565B (en) * 2023-06-05 2023-09-01 江西联益光学有限公司 Automatic assembling method of split lens
CN117492162A (en) * 2023-12-29 2024-02-02 江西联益光学有限公司 Automatic assembling method and device for lens and chip
CN117492162B (en) * 2023-12-29 2024-04-02 江西联益光学有限公司 Automatic assembling method and device for lens and chip

Also Published As

Publication number Publication date
CN114598860A (en) 2022-06-07

Similar Documents

Publication Publication Date Title
CN114598859A (en) Method for prejudging calibration performance of lens assembly to be assembled and method for assembling camera module
CN111034169B (en) Camera module and assembling method thereof
US8711275B2 (en) Estimating optical characteristics of a camera component using sharpness sweep data
CN111034168B (en) Camera module and assembling method thereof
CN110632727B (en) Optical lens, camera module and assembling method thereof
TWI510077B (en) Method for adjusting position of image pick-up element, camera module, method and device for fabricating the same
CN105657237B (en) Image acquiring device and its digital zooming method
US11442239B2 (en) Assembly device and assembly method for optical assembly
US11711604B2 (en) Camera module array and assembly method therefor
CN114813051A (en) Lens assembly method, device and system based on inverse projection MTF detection
JP5972993B2 (en) Position adjustment apparatus and position adjustment method
JP2011147079A (en) Image pickup device
CN111047651B (en) Method for correcting distorted image
US20230244057A1 (en) Imaging camera driving module and electronic device
JP5531883B2 (en) Adjustment method
JP2020530592A (en) Optical lens, camera module and how to assemble it
KR101819576B1 (en) Test apparatus and method for optical image stabilizer
CN108898585A (en) A kind of axial workpiece detection method and its device
CN112540436B (en) Split lens, first lens part, testing method, assembling method and camera module
CN113345024B (en) Method for judging assembly quality of camera module
CN114911066B (en) Method, device and equipment for assembling lens and display screen and storage medium
CN113945363B (en) Method for detecting displacement performance of camera module sensor
CN113341546B (en) Lens applied to machine vision object detection, image correction algorithm and detection system thereof
TW201326755A (en) Ranging apparatus, ranging method, and interactive display system
Liu Camera System Characterization with Uniform Illumination

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination