CN113014790A - Defocus conversion coefficient calibration method, PDAF method and camera module - Google Patents


Info

Publication number
CN113014790A
Authority
CN
China
Prior art keywords
camera module
motor
nonlinear
phase difference
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911320430.0A
Other languages
Chinese (zh)
Other versions
CN113014790B (en)
Inventor
Zheng Longwei (郑龙伟)
Niu Yajun (牛亚军)
Zeng Yimin (曾义闵)
Li Sikun (李斯坤)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201911320430.0A priority Critical patent/CN113014790B/en
Publication of CN113014790A publication Critical patent/CN113014790A/en
Application granted granted Critical
Publication of CN113014790B publication Critical patent/CN113014790B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/672 Focus control based on electronic image sensor signals based on the phase difference signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

The application provides a DCC calibration method, a PDAF method, and a camera module. The DCC calibration method is applied to a nonlinear camera module and includes the following steps: determining a plurality of calibration object distances; dividing the effective focusing stroke of the nonlinear camera module into a plurality of equal parts, each corresponding to one step of a motor; at each of the calibration object distances, acquiring the phase difference and the corresponding code value at each motor step; performing a linear fit on the two-dimensional data consisting of the phase differences and code values obtained at each calibration object distance to obtain the slope parameter of the linear relation corresponding to that object distance, where the slope parameters corresponding to at least two of the calibration object distances differ; and taking the slope parameters of the linear relations corresponding to the plurality of calibration object distances as the defocus conversion coefficients of the nonlinear camera module. This technical scheme improves the accuracy of PDAF for a nonlinear camera module and improves the user experience.

Description

Defocus conversion coefficient calibration method, PDAF method and camera module
Technical Field
The present application relates to the field of camera technologies, and more particularly, to a defocus conversion coefficient calibration method, a PDAF method, and a camera module.
Background
Phase Detection Auto Focus (PDAF) is a method in which pairs of shielded pixels (shield pixels) are regularly inserted into an image sensor to sense the phase difference of the image of a subject when the motor is at its current position. From the phase difference information and a defocus conversion coefficient, the direction and distance the motor must move to obtain a sharp image are calculated, and the motor then pushes the lens to the corresponding position in a single move to complete focusing.
The Defocus Conversion Coefficient (DCC) is the coefficient that converts a phase difference into a driving current (corresponding to a motor displacement offset). When the phase difference is linear in the driving current (motor stroke), the DCC is a slope parameter, i.e., the slope value of a linear function.
With the continuing development of terminal technology and the diversification of user requirements, camera modules have adopted novel lenses and motors in which the phase difference and the driving current (motor stroke) are in a nonlinear relationship. Applying the existing DCC calibration method to such modules produces inaccurate defocus distances and therefore inaccurate focusing, requires additional contrast detection steps to reach focus, greatly increases the overall focusing time, and degrades the user experience.
Disclosure of Invention
The application provides a defocus conversion coefficient calibration method, a PDAF method, and a camera module, which can improve the precision of phase detection autofocus for a nonlinear camera module, achieve fast and accurate focusing, and improve the user experience.
In a first aspect, a defocus conversion coefficient calibration method is provided, applied to a nonlinear camera module, and including: determining a plurality of calibration object distances; dividing the effective focusing stroke of the nonlinear camera module into a plurality of equal parts, each reached by one step of the module's motor; at each of the calibration object distances, acquiring the phase difference of the image at each motor step, together with the corresponding motor stroke or the code value of the current indicating that stroke; performing a linear fit on the two-dimensional data consisting of the phase differences and code values obtained at each calibration object distance to obtain the slope parameter of the linear relation corresponding to that object distance, where the slope parameters corresponding to at least two of the calibration object distances differ; and taking the slope parameters of the linear relations corresponding to the plurality of calibration object distances as the defocus conversion coefficients of the nonlinear camera module.
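As a rough, non-authoritative sketch of the fitting step described above, the per-object-distance linear fit could look as follows in Python. The function name, sample sweeps, and slope values are invented for illustration and are not taken from the patent:

```python
def fit_dcc_slope(pds, codes):
    """Least-squares slope of motor code value versus phase difference for
    one calibration object distance. With this orientation the slope
    directly converts a phase difference into a motor code offset.
    Illustrative sketch only; names and data are hypothetical."""
    n = len(pds)
    mean_pd = sum(pds) / n
    mean_code = sum(codes) / n
    num = sum((p - mean_pd) * (c - mean_code) for p, c in zip(pds, codes))
    den = sum((p - mean_pd) ** 2 for p in pds)
    return num / den

# Hypothetical sweeps at two calibration object distances; the slopes
# differ, which is exactly the nonlinear-module case the method targets.
near_slope = fit_dcc_slope([0.0, 1.0, 2.0, 3.0], [0, 15, 30, 45])
far_slope = fit_dcc_slope([0.0, 1.0, 2.0, 3.0], [0, 22, 44, 66])
dcc = [near_slope, far_slope]  # the set of slope parameters forms the DCC
```

Per the first aspect, the resulting `dcc` list holds one slope parameter per calibration object distance, with at least two distinct values for a nonlinear module.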
In the embodiment of the application, the defocus conversion coefficient is calibrated at a plurality of object distances rather than a single one, yielding the slope parameters of a plurality of linear relations. When the nonlinear camera module performs phase detection autofocus, a suitable slope parameter can then be selected to convert the phase difference into a motor displacement offset, so that the position the motor reaches is closer to the in-focus position. This improves the precision of phase detection autofocus for the nonlinear camera module, achieves fast and accurate focusing, and improves the user experience. When hybrid focusing combining phase detection autofocus with contrast detection autofocus is used, the contrast detection stage is shortened, enabling fast focusing.
With reference to the first aspect, in one possible implementation, determining the plurality of calibration object distances includes: acquiring the relation curve between the motor stroke and the object distance of the nonlinear camera module, which is nonlinear; determining a plurality of test object distances from this curve; at each of the test object distances, driving the motor in preset steps and acquiring the phase difference of the image and the corresponding code value at each step; and determining the plurality of calibration object distances from the trends of the phase-difference-versus-code-value curves at the test object distances.
It should be understood that the relationship between motor stroke and object distance, like that between motor stroke and driving current, is nonlinear.
With reference to the first aspect, in a possible implementation, determining the plurality of calibration object distances from the trends of the phase-difference-versus-code-value curves at the test object distances includes: from the test object distances whose phase-difference-versus-code-value curves have consistent linearity, selecting one test object distance as a calibration object distance.
Because one calibration object distance is selected from each group of test object distances whose phase-difference-versus-code-value curves have consistent linearity, the DCC calibrated at that object distance can be used for phase detection autofocus at all object distances in the group.
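The grouping of test object distances by consistent linearity could be sketched as below. The slope values, tolerance, and grouping rule are hypothetical illustrations, not the patented procedure:

```python
def pick_calibration_distances(test_slopes, tol=0.5):
    """Group test object distances whose phase-difference-versus-code
    curves show consistent linearity (slopes within tol of each other),
    and keep one representative index per group as a calibration object
    distance. The tolerance and grouping rule are invented for this sketch."""
    groups = []
    for i, s in enumerate(test_slopes):
        for g in groups:
            if abs(test_slopes[g[0]] - s) <= tol:
                g.append(i)  # consistent with this group's first member
                break
        else:
            groups.append([i])  # start a new linearity group
    return [g[0] for g in groups]

# Hypothetical fitted slopes at five test object distances: the first three
# are mutually consistent, the last two form a second linear regime.
reps = pick_calibration_distances([15.0, 15.2, 14.9, 22.0, 22.3])
```

Here `reps` would select one test object distance per regime, matching the idea that one calibrated DCC can serve every object distance in its group.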
With reference to the first aspect, in a possible implementation, the method further includes: reading the code value corresponding to the far focus and the code value corresponding to the macro position that are burned into the nonlinear camera module.
With reference to the first aspect, in a possible implementation, the method further includes: determining the applicable object-distance interval or applicable code-value interval of the linear relation corresponding to each of the plurality of calibration object distances.
In this way, when phase detection autofocus is performed, the slope parameter of the appropriate linear relation can be selected, according to the object distance or the initial position of the motor, to convert the phase difference, improving the precision of phase detection autofocus.
With reference to the first aspect, in a possible implementation, the method further includes: verifying the defocus conversion coefficients at a plurality of non-calibration object distances.
With reference to the first aspect, in a possible implementation, verifying the defocus conversion coefficients at a plurality of non-calibration object distances includes: determining a plurality of non-calibration object distances from the relation curve between the motor stroke and the object distance; at each of the non-calibration object distances, setting the motor of the nonlinear camera module at an initial position; acquiring the phase difference with the motor at the initial position, and performing phase detection autofocus once, using that phase difference and the defocus conversion coefficient, to obtain a first focus position of the motor; setting the motor back at the initial position and performing contrast detection autofocus to obtain a second focus position of the motor; and determining whether the position difference between the first and second focus positions satisfies a preset error.
Verifying the DCC at a plurality of non-calibration object distances ensures that all the calibrated slope parameters of the linear relations are usable.
It should be understood that the first and second focus positions in the embodiments of the present application may be represented by the actual positions of the focal point or of the motor, in which case the position difference between them is a distance and the preset error is in distance units, such as micrometers. They may also be represented by driving currents or code values, in which case the position difference is a current difference or a code difference and the preset error is correspondingly in units of current (e.g., milliamps) or of code value.
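One verification trial of the kind described above could be sketched as follows, working entirely in code values. All function names, the simulated phase-difference model, and the error bound are hypothetical:

```python
def verify_at_distance(measure_pd, cdaf_focus_code, initial_code, dcc_slope,
                       max_code_error=15):
    """One verification trial at a non-calibration object distance: run a
    single PDAF move from initial_code using the calibrated slope, then
    compare the resulting first focus position against the CDAF-found
    second focus position, both expressed as code values."""
    pd = measure_pd(initial_code)                # PD at the initial position
    first_focus = initial_code + pd * dcc_slope  # one-shot PDAF result
    return abs(first_focus - cdaf_focus_code) <= max_code_error

# Hypothetical module whose true (CDAF) focus sits at code 300 and whose
# PD scales as (300 - code) / 15 near this object distance.
ok = verify_at_distance(lambda c: (300 - c) / 15.0, 300, 120, 15.0)
```

Repeating such trials over initial positions that cover the motor's full stroke, and over non-calibration object distances that cover the far-focus-to-macro interval, mirrors the coverage conditions stated in the implementations above.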
With reference to the first aspect, in a possible implementation, the plurality of non-calibration object distances cover the object-distance interval between the far focus and the macro position of the camera module.
With reference to the first aspect, in a possible implementation, the initial positions cover the full stroke of the motor of the camera module.
With reference to the first aspect, in a possible implementation manner, the nonlinear camera module satisfies any one or more of the following conditions: the motor stroke of the nonlinear camera module is in a nonlinear relation with the driving current; the focal length of the nonlinear camera module is in a nonlinear relation with the driving current; the image distance of the nonlinear camera module is in a nonlinear relation with the driving current; the phase difference of the nonlinear camera module is in nonlinear relation with the driving current.
With reference to the first aspect, in a possible implementation manner, the nonlinear camera module includes one or more of the following lenses: liquid lens, liquid crystal lens, adjustable lens, deformable mirror, deformable prism.
In a second aspect, a phase detection autofocus method is provided, applied to a nonlinear camera module, and including: acquiring the phase difference of the image when the motor of the nonlinear camera module is at its current position; calculating a defocus distance from the phase difference and a defocus conversion coefficient, where the defocus conversion coefficient comprises the slope parameters of a plurality of linear relations; and moving the motor of the nonlinear camera module to the in-focus position according to the instruction corresponding to the defocus distance.
Because the defocus conversion coefficient comprises the slope parameters of a plurality of linear relations, a suitable slope parameter can be selected when the nonlinear camera module performs phase detection autofocus to convert the phase difference into a motor displacement offset, so that the position the motor reaches is closer to the in-focus position. This improves the precision of phase detection autofocus for the nonlinear camera module, achieves fast and accurate focusing, and improves the user experience. When hybrid focusing combining phase detection autofocus with contrast detection autofocus is used, the contrast detection stage is shortened, enabling fast focusing.
With reference to the second aspect, in a possible implementation, calculating the defocus distance from the phase difference and the defocus conversion coefficient includes: selecting the slope parameter of one linear relation from the slope parameters of the plurality of linear relations according to the applicable object-distance interval or applicable code-value interval; and determining the defocus distance from the phase difference and the selected slope parameter.
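The interval-based selection could be sketched as below. The table layout (half-open code-value intervals mapped to slopes) and all numbers are hypothetical illustrations:

```python
def select_slope(dcc_intervals, current_code):
    """Choose the slope parameter whose applicable code-value interval
    contains the motor's current position. The [lo, hi) interval keys
    are an invented representation, not the patent's data format."""
    for (lo, hi), slope in dcc_intervals.items():
        if lo <= current_code < hi:
            return slope
    raise ValueError("code value outside every applicable interval")

def defocus_distance(pd, dcc_intervals, current_code):
    """Defocus distance (in code values): phase difference times the
    slope selected for the current motor position."""
    return pd * select_slope(dcc_intervals, current_code)

# Hypothetical DCC with two linear regimes over the motor's code range.
dcc_intervals = {(0, 200): 15.0, (200, 500): 22.0}
```

A call such as `defocus_distance(2.0, dcc_intervals, 100)` would apply the near-regime slope, while a start position above code 200 would apply the other.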
With reference to the second aspect, in a possible implementation manner, the nonlinear camera module satisfies any one or more of the following conditions: the motor stroke of the nonlinear camera module is in a nonlinear relation with the driving current; the focal length of the nonlinear camera module is in a nonlinear relation with the driving current; the image distance of the nonlinear camera module is in a nonlinear relation with the driving current; the phase difference of the nonlinear camera module is in nonlinear relation with the driving current.
With reference to the second aspect, in one possible implementation manner, the nonlinear camera module includes one or more of the following lenses: liquid lens, liquid crystal lens, adjustable lens, deformable mirror, deformable prism.
In a third aspect, a camera module is provided, which applies the defocus conversion coefficient calibration method of the first aspect or any of its possible implementations.
In a fourth aspect, a camera module is provided, which applies the phase detection autofocus method of the second aspect or any of its possible implementations.
In a fifth aspect, an electronic device is provided, which includes the camera module in the third aspect or the fourth aspect.
Drawings
FIG. 1 is a schematic diagram of the working principle of contrast detection autofocus;
FIG. 2 is a schematic diagram of the working principle of phase detection autofocus;
FIG. 3 is a schematic diagram of the working principle of hybrid focusing;
FIG. 4 is a schematic cross-sectional view of a camera module;
FIG. 5 is a schematic diagram of the relationship between motor stroke and current in the camera module of FIG. 4;
FIG. 6 is a schematic diagram of phase difference versus code value for the camera module of FIG. 4;
FIG. 7 is a schematic cross-sectional view of a camera module according to an embodiment of the present application;
FIG. 8 is a schematic view of the motor stroke versus current curve for the camera module of FIG. 7;
FIG. 9 is a schematic diagram of a prior art DCC calibration method calibrating a nonlinear optical system;
FIG. 10 is a schematic diagram of a DCC calibration method provided by an embodiment of the present application calibrating a nonlinear optical system;
FIG. 11 is a schematic diagram of a DCC calibration method provided by an embodiment of the present application calibrating a nonlinear optical system;
FIG. 12 is a schematic flow chart of a calibration object distance selection method provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of the relationship curve between motor stroke and object distance of the camera module according to an embodiment of the present application;
FIG. 14 is a schematic phase difference versus current plot for a test object distance provided by an embodiment of the present application;
FIG. 15 is a schematic flow chart of a DCC calibration method provided by an embodiment of the present application;
FIG. 16 is a schematic flow chart of a DCC verification method provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of DCC linearity in a DCC calibration method provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of the selection of a preset initial value of the motor in the DCC calibration method according to an embodiment of the present application;
FIG. 19 is a schematic diagram of a verification result in the DCC calibration method provided by an embodiment of the present application;
FIG. 20 is a schematic flow chart of a PDAF method provided by an embodiment of the present application.
Reference numerals:
10-a lens; 101-a lens group; 102-a lens barrel; 103-liquid lens; 104-a fixed focus lens group; 20-an image sensor; 30-a motor; 301-a stator; 302-a mover; 40-an optical filter; 50-a circuit board; 60-motor housing.
Detailed Description
The technical solutions in the present application are described below with reference to the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the present application.
The electronic devices referred to in the embodiments of the present application may include handheld devices, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem. They may also include cellular telephones, smart phones, personal digital assistants (PDAs), tablet computers, laptop computers, video cameras, video recorders, cameras, smart watches, smart wristbands, in-vehicle computers, and other electronic devices with imaging capabilities. The embodiments of the present application do not limit the specific form of the electronic device; in some embodiments, the electronic device may be a terminal or a terminal device.
For convenience of understanding, technical terms referred to in the embodiments of the present application are explained and described below.
The optical axis is the direction along which the optical system conducts light, namely the chief ray of the central field of view. For a symmetric transmission system, it generally coincides with the rotation centerline of the optical system.
The focal point is the point at which rays parallel to the optical axis converge after being refracted by the lens.
The focal length is a measure of how strongly an optical system converges or diverges light; it is the distance from the optical center of a lens to the focal point when a scene at infinity is imaged sharply at the focal plane through the lens. For a fixed-focus lens, the position of the optical center is fixed, so the focal length is fixed; for a zoom lens, a change in the optical center of the lens changes its focal length, so the focal length is adjustable.
Focusing is the process of moving the lens, or changing its focal length, so that the image in the focused area is as sharp as possible. Image sharpness is best when the contrast is maximal and the aberrations are minimal; generally, this occurs when the focal point lies on the image sensor.
Auto Focus (AF) exploits the reflection of light by the subject: an image sensor receives the light reflected by the subject after it passes through the lens, the resulting image data are processed by a processor, and an electrically driven focusing device is then driven to complete focusing.
Contrast Detection Auto Focus (CDAF), or "contrast focus" for short, drives a motor to move the lens step by step according to changes in the image contrast value until that value reaches its maximum; the position with the maximum image contrast value is the quasi-focus point. CDAF has no preset focus point: as the lens moves repeatedly along the optical axis, it searches the focus area for the point where the contrast or image contrast value reaches its maximum, and takes that point as the point of accurate focus.
Phase Detection Auto Focus (PDAF), or "phase focus" for short, regularly inserts pairs of shielded pixels (shield pixels) into the image sensor. These pixels function like a pair of human eyes and sense the phase difference of the image of the subject when the motor is at its current position. From the phase difference at the current position and the defocus conversion coefficient, the direction and distance the motor must move to obtain a sharp image (that is, the offset of the in-focus lens, or the defocus distance) are calculated, and the motor then pushes the lens to the corresponding position in a single move to complete focusing.
Laser Detection Auto Focus (LDAF), or "laser focus" for short, emits low-power laser light toward the subject from an infrared laser sensor beside the camera module. The reflected laser light is received by the sensor, the distance to the subject is calculated, and the motor can then push the lens directly to the corresponding position to complete focusing.
The quasi-focus point is the point of accurate focusing, where the focal point of the lens lies on the imaging surface.
In the PDAF process, the detected phase difference needs to be converted into a defocus distance (defocus value); the functional relationship applied in this conversion is called the Defocus Conversion Coefficient (DCC). The DCC reflects the relative relationship between the moving distance of the lens and the phase difference: once the motor is determined, giving it a driving current moves the lens a corresponding distance. When the phase difference is linearly related to the driving current, the DCC may be the coefficient of that linear relationship; that is, the DCC is a slope parameter, the conversion coefficient from phase difference to driving current (corresponding to the position offset of the motor), and its value corresponds to the slope of the linear function. The driving current and the code value can be converted into each other, the code value representing the driving current as a binary number in bits. For example, when the current motor position is 150 code and the phase difference of the image at that position is PD, the defocus distance the motor must travel to the sharp position during PDAF is PD × DCC, and the sharp position the motor reaches is 150 + PD × DCC.
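The worked example above (sharp position = 150 + PD × DCC) can be restated numerically; the PD and DCC values below are hypothetical, chosen only to make the arithmetic concrete:

```python
# Numeric restatement of the worked example: motor at 150 code.
current_code = 150
pd = 3.0     # phase difference measured at the current position (hypothetical)
dcc = 12.0   # code values per unit of phase difference (hypothetical)

defocus = pd * dcc                    # defocus distance in code values
target_code = current_code + defocus  # sharp position: 150 + PD x DCC
```

With these values the defocus distance is 36 code and the sharp position the motor reaches is code 186.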
The defocus distance is the distance between the current position of the lens and its position when in focus; it can also be understood as the distance between the focal point and the imaging surface. The defocus distance is signed (positive or negative); for ease of understanding, the embodiments of the present application use the code value as the unit of the defocus distance.
With the continuing development of terminal technology and the diversification of user demands, expectations for the photographing experience and the photographic artistry of smart terminals keep rising, and the photographing function has become a major index for evaluating the performance of terminal devices. The autofocus function, which quickly presents a sharp picture to the user, is one of the most important functions of a terminal-device camera.
Autofocusing is usually realized by a motor driving the lens to move. According to their working principles, common autofocus methods mainly include contrast detection autofocus (CDAF), phase detection autofocus (PDAF), laser detection autofocus (LDAF), and hybrid focusing. The principles of several autofocus modes are briefly described below with reference to figs. 1 to 3.
Fig. 1 shows a schematic diagram of the operation principle of contrast detection autofocus CDAF.
Contrast focusing mainly exploits the fact that contrast is maximal when the scene is sharply imaged on the image sensor: the lens is adjusted step by step while the contrast is measured, and focusing finishes when the maximum contrast is detected. For a subject at a given object distance, CDAF finds the sharpest point from the differences in image sharpness with the lens at different positions. As shown in fig. 1, a current drives the motor (specifically, the mover in the motor) along the optical axis, and the motor (not shown) carries the lens 10 with it, so that the focal position of the lens 10 moves accordingly. When a certain current is applied, the lens 10 moves with the motor to position ①. The image sensor in the camera module captures an image of the scene with the lens 10 at position ①, and a certain region of the image (e.g., the central region) is analyzed: the region is digitized into an integer matrix, which is passed to an image processor to compute a focus value (FV), such as contrast, that relatively quantifies sharpness, while the current value at that moment (or its converted DAC value) is recorded.
When the applied current is changed, the motor drives the lens along the optical axis to change the focal position. As shown in fig. 1, the lens 10 successively occupies positions ②, ③, ④, ⑤, ..., as the current changes, and the process performed at position ① is repeated at each position. When the camera module detects that the FV value has begun to decrease, the motor moves the lens back rather than continuing in the current direction. Moving the lens 10 through a series of distances in this way yields a set of value pairs: a current value (or DAC value) and the focus value FV corresponding to that current. In fig. 1, the coordinate system drawn at each lens position shows the image contrast at that position; in the embodiments of the present application, the larger the focus value FV, the sharper the image. The camera module compares the values, selects the one with the maximum contrast, and drives the lens back to the position corresponding to that current value (position ④ in the figure), so that the focal point of the lens lies on the image sensor, the contrast is maximal, and the image is sharpest; focusing is then complete. On the screen of the user's terminal, the CDAF process appears as a "bellows-pulling" progression of image sharpness from blurred to sharp to blurred, and finally sharp.
CDAF is widely applied, and the image sensors it uses are inexpensive. The host needs only a module capable of analyzing image data, such as an image signal processor (ISP) or a microcontroller unit (MCU), with no additional auxiliary equipment or devices, so the module can be made small. Because it is based on image processing, the focus point can be set arbitrarily, and the user can focus on any object on the screen.
However, the focusing accuracy of CDAF is inversely related to the focusing time: achieving higher accuracy requires searching more position points, which lengthens the total autofocus time, and the time for each CDAF step depends on the frame rate (frames per second, FPS) of the image sensor and on the motor's moving speed and settling time. Because CDAF is based on two-dimensional image processing, the moving direction cannot be determined in advance; each search can proceed in only one direction at a time, and continuous focusing on moving objects in video is therefore poor.
Fig. 2 shows a schematic diagram of the working principle of the phase detection autofocus PDAF.
The principle of phase detection autofocus is to calculate the offset of the focusing lens from phase difference information, so that the lens can be moved quickly to the in-focus position to achieve fast and accurate focusing. Taking (a) in fig. 2 as an example, the object emits light in all directions, which is imaged on the image sensor 20 through the lens 10; light from different directions converges with minimal aberration only at the focal position on the image sensor 20. In fig. 2 (a), two light rays A and B emitted from one point of the object converge, after passing through the lens 10, in front of the imaging surface; there is a phase difference (PD), and the image of the object is blurred. In fig. 2 (b), the two light rays emitted from one point of the object converge, after passing through the lens, exactly on the imaging surface; there is no phase difference, focusing is successful, and the image of the object is clear. In fig. 2 (c), the two light rays emitted from one point of the object converge behind the imaging surface; there is a phase difference (PD) opposite in sign to that in fig. 2 (a), and the image of the object is blurred. When PDAF is performed, the camera module (specifically, a calibration library in the camera module) calculates the phase difference of the picture formed by the shot object with the motor at its current position, and substitutes the phase difference into the defocus conversion coefficient DCC to obtain the defocus distance. The motor can then be driven to move the corresponding distance according to the defocus distance to reach the in-focus position, thereby realizing fast focusing.
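The defocus calculation described above — the phase difference substituted into the DCC to yield a defocus distance — reduces to a single multiply-and-add. The sketch below assumes, purely for illustration, a linear DCC expressed in code values per unit of phase difference; the function names and numbers are invented, not from a real module.

```python
def defocus_from_pd(phase_difference, dcc):
    """Defocus distance (in code values) = phase difference x DCC."""
    return phase_difference * dcc

def pdaf_target_code(current_code, phase_difference, dcc):
    """One-step PDAF move: add the signed defocus distance to the current code."""
    return current_code + defocus_from_pd(phase_difference, dcc)
```

The sign of the phase difference distinguishes cases (a) and (c) of fig. 2, so the same formula moves the lens in either direction.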
In the contrast focusing process, the lens reaches the vicinity of the in-focus position only after multiple large-step moves, and then moves in small steps to achieve accurate focusing. PDAF reduces the moves to the vicinity of the in-focus position to a single step, greatly shortening the focusing time.
However, the shielded pixels used for phase difference detection can receive only half of the light from the lens, so they are darker than normal pixels, and their reduced sensitivity must be compensated by interpolation. In addition, the phase-detection pixels cannot in practice be arranged densely, so the phase calculation precision is limited and the focusing performance in low light is poor. There is also an error between the position actually reached by the motor and the true in-focus position (generally on the order of plus or minus 10 μm), so in practical applications PDAF can only bring the lens near the in-focus point, not exactly to it.
In order to overcome the above-mentioned disadvantages of using CDAF alone or PDAF alone, the automatic focusing may use a CDAF and PDAF mixed focusing method. Fig. 3 shows a schematic diagram of the working principle of hybrid focusing.
As shown in the figure, the camera module first performs PDAF, in which the motor drives the lens to the vicinity of the in-focus point in one step; the specific process can refer to fig. 2 and its related description. CDAF is then performed, so that the motor drives the lens to the sharpest point through several small steps of contrast focusing; the specific process can refer to fig. 1 and its related description. Through this hybrid CDAF-and-PDAF process, the focusing speed is greatly improved and the focusing time shortened relative to CDAF alone, while the focusing precision is greatly improved relative to PDAF alone, improving the user experience.
It should be understood that hybrid focusing may also employ a combination of LDAF and CDAF, or other automatic focusing approaches, which are not described in detail herein. The embodiment of the present application is described by taking only a hybrid focusing process of combining CDAF and PDAF as an example.
As mentioned above, the defocus conversion coefficient DCC used in the PDAF process also affects the focusing accuracy of PDAF. The DCC is the coefficient relating the phase difference to the current value (or DAC value, or code value), and may also be understood as a functional relation in which the DCC describes the relationship between the phase difference and the current value (or DAC value, or code value). In practical applications the DCC is known in advance, so once the phase difference is obtained it can be substituted directly into the DCC to obtain the defocus distance (generally in units of code values). The motor drives the lens to move the corresponding distance according to the defocus distance, bringing the lens to the theoretical in-focus position. There is a certain error between the theoretical and actual in-focus positions, and analysis shows that besides the calculation precision of the phase difference, the DCC also influences the actual position reached by the motor, i.e., the focusing precision.
The DCC needs to be calibrated before practical application, and the calibration process of the DCC is a process of determining a relation coefficient or a functional relation between a phase difference and a current value (or DAC or code value), that is, determining the DCC.
The existing DCC calibration scheme is suited to linear camera module designs. The linear camera module in the embodiment of the present application satisfies the following conditions: the motor stroke and the driving current are linear, the focal length of the optical system (or lens) and the driving current are linear, the image distance of the optical system (or lens) and the driving current are linear, the phase difference and the driving current are linear, and so on. The "driving current" above may also be replaced with a "DAC value" or a "code value". The code value is an index code of the motor stroke and can be understood as a representation of the motor stroke in computer language. For example, if the rated stroke of the motor (in a length unit such as micrometers or millimeters) is divided into 1024 parts, the rated stroke can be represented in 10-bit binary; if it is divided into 4096 parts, it can be represented in 12-bit binary. The stroke of the motor to a given position can thus be expressed as a binary computer value, i.e., a code value.
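As a rough illustration of the code-value idea, the snippet below maps a physical motor position to an n-bit code. The helper name and the clamping behavior are assumptions made for this sketch, not part of the calibration scheme itself.

```python
def stroke_to_code(position_um, rated_stroke_um, bits=10):
    """Map a motor position (micrometres) onto an n-bit code value.

    A 10-bit code divides the rated stroke into 1024 parts (codes 0..1023);
    a 12-bit code divides it into 4096 parts (codes 0..4095).
    """
    max_code = (1 << bits) - 1                      # 1023 for 10-bit, 4095 for 12-bit
    code = round(position_um / rated_stroke_um * max_code)
    return max(0, min(max_code, code))              # clamp to the valid range
```

For example, the midpoint of a 1000 μm rated stroke maps to code 512 in 10-bit representation (binary `1000000000`).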
Fig. 4 shows a schematic cross-sectional view of a camera module, in which the camera module 100 may be a linear camera module, and is described in detail below with reference to the accompanying drawings.
As shown in fig. 4, the camera module 100 may include a lens 10, an image sensor 20, a motor 30, a filter 40, a circuit board 50, and the like. The lens 10 is used to image the subject scene on the image sensor 20. The image sensor 20 is a semiconductor chip whose surface contains several hundred thousand to several million photodiodes; these generate charge when illuminated, and the charge is converted into digital signals by an analog-to-digital converter chip. The lens 10 includes a lens group 101 and a lens barrel 102, the lens group 101 being accommodated in the accommodating space of the lens barrel 102. The lens barrel 102 is connected to the motor 30, and during autofocus the motor 30 can push the lens barrel 102 up and down (i.e., along the optical axis direction) to change the distance from the optical center of the lens 10 to the imaging surface (i.e., change the image distance) and obtain a clear image. Optionally, the motor may be a voice coil motor (VCM), in which case the lens is actually driven by the mover of the motor 30. An optical filter 40 may be disposed between the lens 10 and the image sensor 20 to filter out near-infrared light that is invisible to the human eye but to which the image sensor 20 is sensitive, preventing the image sensor 20 from suffering serious color cast during imaging. The circuit board 50 is used to transmit electrical signals and may be a flexible printed circuit (FPC) or a printed circuit board (PCB).
Referring to fig. 4, the motor 30 selected for the camera module 100 may be a linear motor, i.e., the motor stroke and the current (or called driving current) are in a linear relationship, when the current of the driving motor changes Δ i, the motor stroke changes Δ s, and the ratio of Δ s to Δ i is a constant that is not zero. The lens assembly 101 selected by the lens 10 may be a plastic lens assembly, a glass lens assembly, or a plastic and glass lens assembly. If the relative position of each lens in the lens group 101 is not changed, the focal length of the lens 10 is not adjustable, i.e. the lens 10 is a fixed focus lens. When the motor drives the lens to move, only the image distance of the lens (or the optical system) is changed. Since the motor stroke and the current are in a linear relationship, the motor motion drives the lens 10 to move only to change the image distance, and therefore, the image distance and the current of the lens 10 are also in a linear relationship. The change of the image distance changes the image definition of the shot object, and under the condition of the same object distance and different image distances, the phase difference of the pictures of the shot object is different, so that the phase difference and the current are in a linear relation.
Fig. 5 is a schematic diagram showing the relationship between the motor stroke and the current in the linear camera module. As shown in the figure, between the far focus (inf) and the macro (macro) of the camera module, the motor stroke and the current are approximately in a linear function relationship, that is, as the current increases, the motor stroke increases, and the ratio of the increment of the motor stroke to the increment of the current is constant.
It should be understood that the far focus inf and macro shown in fig. 5 may have been obtained before measuring the motor stroke versus current curve; only the segment of the linear relationship between motor stroke and driving current lying between the far focus and the macro is selected for DCC calibration.
Because the motor stroke is linear with the current, and the motor only changes the image distance when driving the lens, the image distance is also linear with the current. Consequently, at different object distances, the different currents (or DAC values, code values) and the phase differences of the corresponding shot pictures exhibit a consistent linear relationship. Therefore, when performing DCC calibration, a single object distance can be selected, and the DCC, i.e., the linear relation coefficient between the phase difference and the current (or DAC value, code value), can be obtained.
The effective focusing stroke of the camera module is divided into multiple equal parts. Specifically, when DCC calibration is performed, after a single object distance is selected, the code values of the far focus and the macro burned into the camera module are read, and the code interval from the code value corresponding to the far focus to the code value corresponding to the macro is divided into multiple steps. The motor pushes the lens to the position corresponding to the code value of each step, and the phase difference of the picture shot at the selected object distance is calculated. After the motor completes the focusing stroke, a set of phase differences and corresponding code values is obtained, and curve fitting is performed on them.
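The sweep-and-fit step above amounts to an ordinary least-squares line fit of phase difference against code value; the slope of the fitted line is the calibrated relation, and its zero crossing gives the in-focus code value at that object distance (as in fig. 6). A minimal sketch — the sample data used to exercise it are made up, not real calibration output:

```python
def fit_dcc(codes, phase_diffs):
    """Ordinary least-squares fit of PD as a function of code value.

    Returns (slope, intercept) with PD ~= slope * code + intercept;
    the in-focus code (PD = 0) is then -intercept / slope.
    """
    n = len(codes)
    mean_c = sum(codes) / n
    mean_p = sum(phase_diffs) / n
    slope = (sum((c - mean_c) * (p - mean_p) for c, p in zip(codes, phase_diffs))
             / sum((c - mean_c) ** 2 for c in codes))
    intercept = mean_p - slope * mean_c
    return slope, intercept
```

In line with fig. 6, the measured points never lie exactly on a line, which is why a fitted line rather than raw differences is used.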
Fig. 6 is a diagram showing the relationship between the phase difference and the code value calibrated at a certain object distance, and it can be seen that the phase difference and the code value are in a linear function relationship. The intersection point of the straight line and the abscissa in the figure represents a code value when the phase difference is 0, that is, when the lens is located at the in-focus position at the object distance for calibrating the DCC. It should be understood that the relationship between the phase difference and the code value is not a strict straight line in the actual calibration, and the straight line in fig. 6 can be understood as a fitted curve.
It should be noted that, in the drawings of the embodiments of the present application, the values of the horizontal and vertical coordinates are merely exemplary, and are only used for representing the relative magnitude of the values or the trend of the relationship between two variables represented by the horizontal and vertical coordinates, and should not be construed as limiting the embodiments of the present application.
It should be further noted that the current, DAC value, and code value in the embodiments of the present application may be converted to each other, and all of them may be understood as a representation of a driving instruction for a motor, and only the usage scenario or flow stage is different. For example, since the mover of the motor moves by the interaction force of the magnetic field between the energized coil and the magnet, the relationship between the motor stroke and the current (or DAC value) is often described in relation to the motor stroke.
With the continuous development of terminal technology, users' requirements for the size and photographing functions of intelligent terminal devices keep rising. In the linear camera module described above, because the motor stroke and the driving current are linear, the phase difference and the driving current are also linear. In the PDAF process, the camera module obtains the phase difference of the picture with the motor at its current position and derives the defocus distance from the phase difference and the DCC; the defocus distance can be understood as the current increment that needs to be added to or subtracted from the present current, whereupon the motor moves by the stroke increment corresponding to that current increment. In a linear camera module, the ratio of the motor stroke increment to the current increment is fixed, so to realize continuous optical zoom or higher-magnification optical zoom, the motor must drive the lens barrel (with the lens mounted inside) over a longer distance, and the thickness of the terminal must increase accordingly.
Therefore, to meet user requirements such as thinner terminal devices, fast AF focusing, fast optical image stabilization (OIS), continuous optical zoom, and ultra-macro shooting, the camera module needs to adopt novel lenses, such as variable-focus lenses (liquid lenses, liquid crystal lenses, tunable lenses (Tlens)), or deformable mirrors, prisms, and/or motor systems with nonlinear behavior. These new structural designs and lens materials may cause the driving current to be nonlinear with the image distance of the optical system, nonlinear with the focal length of the optical system, nonlinear with the phase difference of the imaging system, or nonlinear with the motor stroke, among other effects.
Fig. 7 shows a schematic cross-sectional view of a camera module 200 provided in an embodiment of the present application, which may be a non-linear camera module.
It should be understood that the non-linear camera module described in the embodiments of the present application satisfies any one or more of the following conditions: the motor stroke and the driving current are nonlinear, the focal length and the driving current of the optical system (or the lens) are nonlinear, the image distance and the driving current of the optical system (or the lens) are nonlinear, and the phase difference and the driving current are nonlinear. The above "drive current" may also be replaced with a "DAC value" or a "code value".
Referring to fig. 7, the camera module 200 includes a lens 10, an image sensor 20, a motor 30, a filter 40, a circuit board 50, a motor housing 60, and the like. Unlike the camera module 100, the lens 10 in the camera module 200 is a zoom lens. The lens 10 includes a liquid lens 103 and, optionally, a fixed focus lens group 104. The liquid lens 103 is a variable curvature lens, which is an optical element made of one or more liquids without mechanical connection, and the internal parameters of the optical element can be changed by external control, for example, the refractive index of the lens can be dynamically adjusted or the focal length can be changed by changing the surface shape (curvature) thereof.
The motor 30 shown in fig. 7 is used to convert electrical energy into mechanical energy, and the interaction between the magnetic field generated by the permanent magnet and the magnetic field generated by the energized coil is used to move the liquid lens 103. The motor 30 includes a stator 301 and a mover 302, the stator 301 is immovable with respect to the housing 60, and the mover 302 is located inside the stator 301 and is disposed opposite to the stator 301. In the camera module 200, the stator 301 includes a magnet, and the mover 302 includes a coil.
The camera module 200 further includes a driving circuit for inputting a driving current to the motor 30 to control its movement. Fig. 7 does not show the specific driving circuit and only illustrates, by way of example, a digital signal processor (DSP) controlling the current I through the motor 30. The liquid lens 103 is connected to the mover 302 of the motor 30. After the DSP energizes the coil (i.e., the mover 302), the coil, in the magnetic field of the magnet (i.e., the stator 301), experiences an Ampere force that produces relative motion between coil and magnet; that is, the mover 302 moves along the optical axis direction relative to the stator 301 and drives the liquid lens 103 connected to it to change shape or curvature, thereby changing the focal length of the lens 10. When the current I through the coil changes, the mover 302 of the motor 30 moves by a corresponding displacement. The motor 30 in the embodiment of the present application may be a linear motor or a nonlinear motor; the embodiment of the present application is not particularly limited in this regard. As a single motor, the stroke of the motor 30 may be linear or nonlinear with respect to the current. Once the liquid lens 103 is mounted on the motor 30, or once the camera module 200 is assembled, the motor stroke and the driving current input by the driving circuit are in a nonlinear relationship.
In particular, during motor movement (for example, autofocus and/or optical image stabilization), assume that when the driving circuit inputs a driving current I_n to the motor, the corresponding motor stroke is S_n, where n is any positive integer. In the embodiment of the present application, the letter I denotes current and the letter S denotes motor stroke.
When the driving circuit inputs a driving current I_(n-1) to the motor, the corresponding motor stroke is S_(n-1). Compared with the case where the driving current is I_n and the motor stroke is S_n, the increment of the driving current is ΔI_n = I_n − I_(n-1) and the increment of the motor stroke is ΔS_n = S_n − S_(n-1).
When the driving circuit inputs a driving current I_(n+1) to the motor, the corresponding motor stroke is S_(n+1). Compared with the case where the driving current is I_n and the motor stroke is S_n, the increment of the driving current is ΔI_(n+1) = I_(n+1) − I_n and the increment of the motor stroke is ΔS_(n+1) = S_(n+1) − S_n.
Since the motor stroke and the driving current are in a nonlinear relationship, (S_n − S_(n-1))/(I_n − I_(n-1)) and (S_(n+1) − S_n)/(I_(n+1) − I_n) are not always equal, i.e., ΔS_n/ΔI_n and ΔS_(n+1)/ΔI_(n+1) are not always equal. When the driving current is monotonically increasing or monotonically decreasing, ΔS_n/ΔI_n and ΔS_(n+1)/ΔI_(n+1) are not equal.
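The inequality of successive ratios ΔS_n/ΔI_n can be checked numerically from sampled (current, stroke) pairs. A small illustrative helper — the tolerance value and the sampling are assumptions of this sketch:

```python
def is_linear(currents, strokes, tol=1e-6):
    """Return True if every incremental ratio dS/dI equals the first one.

    A linear motor has a constant ratio; a nonlinear motor does not.
    """
    ratios = [(strokes[i] - strokes[i - 1]) / (currents[i] - currents[i - 1])
              for i in range(1, len(currents))]
    return all(abs(r - ratios[0]) <= tol for r in ratios)
```

For a linear motor the check passes for any monotonic current sweep; for the nonlinear module of fig. 8 it fails as soon as two adjacent intervals are compared.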
It should be understood that the increment in the embodiment of the present application has positive and negative values, for example, the current increment may be positive, i.e. indicating that the current is increased, or the current increment may be negative, i.e. indicating that the current is decreased. The increment of the motor stroke and the increment of other parameters are similar and are not described again.
Similarly, the current changes the focal length of the lens 10 through mechanical or non-mechanical action, and the focal length of the lens 10 is in a non-linear relationship with the current. The image distance of the lens 10 and the current are in a nonlinear relationship, and the phase difference of the picture of the object scene and the current are also in a nonlinear relationship.
For example, fig. 8 shows a graph of the relationship between the motor stroke, the image distance, the focal length, the phase difference and the current of the camera module 200. It should be understood that the graph in fig. 8 is only used to show the relationship between any one of the above four variables (motor stroke, image distance, focal length, phase difference) and the current, and the specific values of the above four variables and the current are not limited in any way.
Taking the ordinate as the motor stroke as an example, it can be seen from the figure that the relationship between the motor stroke and the current is not a linear function but a nonlinear relationship; that is, as the current increases, the motor stroke changes unevenly. For example, divide the motor stroke into a plurality of stroke intervals. For any first stroke interval among them, when the current changes by ΔI_1, the motor stroke changes by ΔS_1, and the ratio ΔS_1/ΔI_1 of the stroke change to the current change is k_1; for a second stroke interval adjacent to the first, when the current changes by ΔI_2, the motor stroke changes by ΔS_2, and the ratio ΔS_2/ΔI_2 is k_2; k_1 and k_2 are not equal.
The above description has been given only for the case where the camera module includes a liquid lens. In some other embodiments of the present application, the lens in the camera module 200 may include any one or more of a liquid lens, a liquid crystal lens, a tunable lens (Tlens), a deformable mirror, a deformable prism, and the like, or the motor of the camera module 100 in fig. 4 may be replaced by a nonlinear motor system. Such new structural designs and lens materials can also yield a nonlinear camera module, i.e., one in which the driving current is nonlinear with the image distance of the optical system, with the focal length of the optical system, with the phase difference of the imaging system, or with the motor stroke, and so on.
In order to reduce the thickness of the camera module, meet users' diverse demands, and improve the user experience, new lens structures and materials need to be selected in the camera module design, which may give rise to the above nonlinear relationships.
It is mentioned above that the DCC is needed in the PDAF process of the camera module, but the existing DCC calibration method is only applicable to the linear camera module and is no longer applicable to the camera module in which the motor stroke and the driving current are in a nonlinear relationship.
Still referring to fig. 8, taking the ordinate as the phase difference as an example, it can be seen from the figure that the relationship between the phase difference and the current is not a linear function but a nonlinear relationship, from which it follows that the phase difference and the code value are also nonlinear. If calibration at a single object distance is still performed according to the conventional method, the calibration curve is similar to the line L1 shown in fig. 9, and the DCC is calibrated as the slope of line L1. As shown in fig. 9, when the lens is at position B on the nonlinear curve, its phase difference and current are the ordinate and abscissa of B, respectively. If the DCC corresponding to line L1 calibrated at the single object distance (i.e., the slope of L1) is used in practice, the driving current calculated from the actual phase difference at position B (i.e., the abscissa of point A) has a large error relative to the actual driving current (i.e., the abscissa of point B). Taking the ordinate as the motor stroke as an example, when the driving current input to the motor is the current value at the abscissa of point A, the motor stroke is actually the ordinate of point C, not the stroke corresponding to the current value at the abscissa of point B. The camera module assumes that, with the current value of the abscissa of point A input, the motor stroke is that of the ordinate of point B and the motor has driven the lens to the in-focus position; hence the defocus distance calculated from the phase difference is inaccurate, and after the motor moves the corresponding distance according to that defocus distance, the lens is still far from the actual in-focus position.
If the camera module uses PDAF alone, the image is still unclear after the lens is pushed to the corresponding position, and focusing is inaccurate. If the camera module uses hybrid PDAF and CDAF focusing, more contrast-focusing steps are subsequently needed to achieve focus, which greatly increases the overall focusing time and seriously affects the user experience.
That is to say, when the camera module adopts a nonlinear optical zoom system, if the DCC is calibrated according to the DCC calibration method of a linear optical system, conventional PDAF focusing is likely to fail: an accurate defocus distance cannot be obtained, the motor cannot reach an accurate in-focus position, or the module may even fail to focus clearly, which further increases the focusing time, seriously degrades imaging quality, and affects the user experience.
In order to solve the series of optical problems caused by adopting new designs and materials in the camera module and to improve the user experience, the embodiment of the present application provides a PDAF DCC calibration method for a nonlinear optical zoom system. In the DCC calibration process of the nonlinear camera module, instead of calibrating the entire nonlinear curve with a single object distance, multiple object distances are used to perform DCC calibration over multiple approximately linear segments. This segmented DCC calibration helps improve the AF focusing accuracy and speed of the complete device, achieves fast focusing, improves imaging quality, and improves the user experience.
For ease of understanding, the DCC calibration method provided herein is described below in conjunction with the camera module 200.
As shown in fig. 10, taking the ordinate as the phase difference as an example, it can be seen from the graph that the nonlinear relationship curve between the phase difference and the current can be approximately divided into two linear segments. An appropriate object distance is selected within each segment to calibrate the DCC for that segment, yielding two calibrated straight lines L2 and L3; the calibrated DCC then includes the slope parameter of line L2 and the slope parameter of line L3. Of course, to make the calibrated DCC more accurate, the nonlinear curve formed by the phase difference and the current may be approximately divided into three or more linear segments for DCC calibration. As shown in fig. 11, the nonlinear relationship curve between the phase difference and the current may be divided into three linear segments, and an appropriate object distance selected to calibrate each, yielding three calibrated lines L4, L5, and L6; the calibrated DCC then includes the slope parameters of lines L4, L5, and L6.
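The segmented calibration result can be represented as a lookup table of (interval, slope) entries, with the PDAF step picking the slope for whichever segment the current position falls in. The breakpoints and slope values below are invented placeholders, not calibrated values:

```python
SEGMENTS = [
    # (code_low, code_high, dcc_slope) -- hypothetical calibration output
    (0,   400,  10.0),
    (400, 700,  16.0),
    (700, 1023, 25.0),
]

def segment_dcc(code):
    """Return the calibrated DCC for the segment containing this code value."""
    for lo, hi, slope in SEGMENTS:
        if lo <= code <= hi:
            return slope
    raise ValueError("code outside the calibrated range")

def defocus_code(phase_difference, code):
    """Defocus distance in code values, using the per-segment DCC."""
    return phase_difference * segment_dcc(code)
```

Compared with a single global slope, this per-segment lookup is what avoids the point-A/point-B error of fig. 9: the slope applied always belongs to the segment the lens is actually on.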
How to segment and select an appropriate object distance, i.e. a calibration object distance, for DCC calibration in practical application will be described in an exemplary manner with reference to fig. 12 and 13.
Figure 12 shows a schematic flow chart of determining the calibration object distance in the DCC calibration process.
In step S710, a relationship curve between a motor stroke and an object distance of the non-linear camera module is obtained.
Before selecting a proper object distance for calibration, a relation curve graph between the motor stroke and the object distance of the nonlinear camera module can be obtained, and for the nonlinear camera module, the relation curve between the motor stroke and the object distance is nonlinear.
Fig. 13 is a graph illustrating the relationship between motor stroke and object distance for a nonlinear camera module. In the embodiment of the present application, a camera module with a liquid lens is taken as an example; the abscissa and ordinate of a point on the curve represent, respectively, an object distance and the motor stroke at which the subject at that object distance is imaged clearly. It should be understood that the motor stroke at which the subject is imaged clearly at each object distance corresponds to one current value, so fig. 13 can also be used to describe the nonlinear relationship between the motor stroke and the driving current. Alternatively, fig. 13 can be derived from fig. 8: fig. 8 expresses the nonlinear relationship between motor stroke and current, and each pair of current and motor stroke images the subject clearly at some object distance, so the abscissa of fig. 8 can be converted to object distance, yielding fig. 13. From fig. 13 it can be preliminarily determined that the nonlinear relationship between motor stroke and object distance can be approximately divided into two linear segments for DCC calibration. It should be understood that the nonlinear relationship in the figure could also be divided into three, four, or more linear segments; the embodiment of the present application takes the division into two linear segments only as an example.
In step S720, a plurality of test object distances are determined according to a relationship between the motor stroke and the object distance.
Here, a plurality of test object distances are selected from the far focus to the macro; these test object distances cover the object-distance interval between the far focus and the macro of the camera module.
Specifically, for example, a plurality of test object distances may be selected by dividing a plurality of distances from the far focus inf to the macro according to the graph of the relationship between the motor stroke and the object distance shown in fig. 13.
It should be understood that the test object distances may be selected according to a certain principle. For example, distances between the far focus and the macro distance may be selected directly; or, according to the motor-stroke-versus-object-distance graph, the relationship between the motor stroke and the object distance may be roughly divided into a plurality of linear segments, for example two, three, four or more, and a plurality of object distances may be selected on each linear segment.
Illustratively, in the embodiment of the application, the relationship between the motor stroke and the object distance is divided into two linear segments according to the motor-stroke-versus-object-distance graph, and the object distances at both ends of each linear segment and several object distances in between are selected, for example 100 cm/50 cm/20 cm/10 cm/5 cm/2.5 cm.
In step S730, the motor is driven to move according to a preset step length at each of the plurality of test object distances, and a phase difference and a corresponding code value of the picture of the motor at each step are obtained.
Specifically, at each test object distance, the motor is driven to move in the preset step. Each time the motor reaches a position, a picture of the subject is taken at that position and the phase difference of the picture, i.e. the phase difference when the motor is at that position, is calculated.
Before step S730, the in-focus code values of the far focus and the macro distance may be obtained and burned into the camera module. Burning the in-focus code values of the far focus and the macro distance determines the object distance calibration interval of the DCC: the calibrated DCC can be used when PDAF is performed at an object distance between the far focus and the macro distance.
Therefore, before step S730, the code value corresponding to the far focus and the code value corresponding to the macro distance burned into the camera module are read. These code values are used to determine the code value corresponding to the initial position of the motor movement and the code value corresponding to each step of the motor.
The following description is given with reference to a specific example. First, the burned in-focus code values of the far focus and the macro distance are read out. Then the motor of the camera module is driven to move in a preset step length (step), and pictures are taken at each selected test object distance, so as to obtain the phase difference of the pictures at each of the plurality of test object distances and the corresponding code value data.
For example, at an object distance of 100 cm, pictures of the subject are taken in sequence at the preset step length, the phase difference of each picture and the corresponding code value are calculated, and finally the phase-difference-versus-code-value curve at 100 cm is obtained. Specifically, assuming that the inf code reading is 100, the macro code reading is 600 and the step length is 50, at an object distance of 100 cm the motor moves in turn to the positions corresponding to the 11 codes 100, 150, 200, 250, ..., 550 and 600, a picture is taken at each position, and the phase differences of the 11 pictures and the corresponding code values are obtained. Then, at an object distance of 50 cm, pictures of the subject are taken in sequence at the preset step length, the phase differences and the corresponding code values are calculated, and the phase-difference-versus-code-value curve at 50 cm is obtained. By analogy, curves of the relationship between the phase difference and the code value at the selected test object distances are finally obtained, as shown in fig. 14. It should be understood that in fig. 14 the abscissa is the code value, the ordinate is the phase difference and the step length is 5 codes; the values of the abscissa and ordinate in the figure are only exemplary and do not limit the embodiments of the present application in any way.
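For illustration only, the sweep in the example above can be sketched in Python as follows; the `capture_phase_difference` helper is a hypothetical stand-in for driving the motor to a code value, taking a picture and computing its phase difference:

```python
def sweep_positions(inf_code, macro_code, step):
    """Code values the motor visits when stepping from far focus to macro."""
    return list(range(inf_code, macro_code + 1, step))

def measure_pd_curve(inf_code, macro_code, step, capture_phase_difference):
    """Collect (code value, phase difference) pairs at one test object distance."""
    curve = []
    for code in sweep_positions(inf_code, macro_code, step):
        # Drive the motor to `code`, take a picture, compute its phase difference.
        pd = capture_phase_difference(code)
        curve.append((code, pd))
    return curve

# With an inf code of 100, a macro code of 600 and a 50-code step, the motor
# visits the 11 positions 100, 150, ..., 600 from the example in the text.
positions = sweep_positions(100, 600, 50)
```

Repeating `measure_pd_curve` at each test object distance yields one curve per distance, as plotted in fig. 14.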
As can be seen from fig. 14, the phase difference and the code value (the code value may be replaced by the current) follow different curves at different object distances (in a linear camera module, the slopes of the phase-difference-versus-code-value curves at different object distances are substantially the same), and the curve at a single object distance is itself segmented, so the single-object-distance DCC calibration method is not applicable.
In step S740, a plurality of calibration object distances are determined according to the trend of the relationship curve between the phase difference and the code value of the plurality of test object distances.
Since the phase-difference-versus-code-value curves of several test object distances may be consistent and linear within a certain interval of the motor stroke (for example, a certain code interval), in this step the plurality of calibration object distances may be determined according to the trends of the phase-difference-versus-code-value curves at the plurality of test object distances. For example, among the test object distances whose curves are consistently linear, one test object distance can be selected as a calibration object distance.
Referring to fig. 14, the phase difference test data show that at object distances of 100 cm/50 cm/20 cm/10 cm, the trend of the phase difference versus the code value (the code value may be replaced by the current) remains substantially consistent and linear within the first two thirds of the motor stroke (i.e. the effective focusing stroke of the camera module); at object distances of 5 cm/2.5 cm, the trend remains substantially consistent and linear within the latter part of the motor stroke. Therefore, 20 cm is selected as a calibration object distance from the test object distances 100 cm/50 cm/20 cm/10 cm, and 5 cm is selected as a calibration object distance from the test object distances 5 cm/2.5 cm. Thus, for the embodiment provided by the present application, two calibration object distances are determined, 20 cm and 5 cm, with which the calibration can cover the entire motor stroke. For example, the first two thirds of the motor stroke may be covered by the 20 cm calibration and the last third by the 5 cm calibration, or the motor strokes covered by the 20 cm and 5 cm calibrations may overlap.
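The selection rule described above (pick one calibration object distance from each group of test object distances whose phase-difference-versus-code-value curves share a consistent slope) can be sketched as follows; the slope values, the grouping tolerance and the "largest distance per group" heuristic are illustrative assumptions, not part of the described method:

```python
def group_by_slope(slopes, tol=0.2):
    """Group test object distances whose fitted PD-vs-code slopes differ by < tol."""
    groups = []
    for dist, slope in sorted(slopes.items(), key=lambda kv: kv[1]):
        if groups and abs(slope - groups[-1][-1][1]) < tol:
            groups[-1].append((dist, slope))
        else:
            groups.append([(dist, slope)])
    return groups

# Illustrative slopes only: 100/50/20/10 cm behave alike, 5/2.5 cm behave alike.
slopes = {100: 1.00, 50: 1.02, 20: 1.01, 10: 0.98, 5: 1.60, 2.5: 1.55}
groups = group_by_slope(slopes)
# One representative calibration distance per group (heuristic choice).
calib = [max(g, key=lambda kv: kv[0])[0] for g in groups]
```

With these illustrative numbers the six test distances fall into two groups, mirroring the two-segment split in the text.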
Fig. 15 shows a schematic flowchart of a defocus conversion coefficient calibration method provided in an embodiment of the present application.
The defocusing conversion coefficient calibration method provided by the embodiment of the application is applied to the nonlinear camera module. The nonlinear camera module meets any one or more of the following conditions: the motor stroke of the nonlinear camera module is in a nonlinear relation with the driving current; the focal length of the nonlinear camera module is in a nonlinear relation with the driving current; the image distance of the nonlinear camera module is in a nonlinear relation with the driving current; the phase difference of the nonlinear camera module is in nonlinear relation with the driving current.
In the embodiment of the application, the defocusing conversion coefficient is calibrated by adopting a plurality of object distances. Thus, in step S810, a plurality of calibration object distances is determined. The manner of determining the plurality of calibration object distances is shown in fig. 12, and the related description is given above and will not be repeated.
In step S820, the effective focusing stroke of the nonlinear camera module is divided into a plurality of equal parts, each of which is traversed by one step of the motor of the nonlinear camera module.
The effective focusing stroke of the nonlinear camera module can be understood as the focusing stroke of the motor from the far focus to the macro distance. For example, the effective focusing stroke may be divided into 10 or 13 equal parts, each corresponding to one step of the motor, i.e. the motor correspondingly moves 10 or 13 steps.
In step S830, at each of the plurality of calibration object distances, the phase difference of the picture taken at each step of the motor of the nonlinear camera module and the corresponding code value are acquired.
Before this step, the code value corresponding to the far focus and the code value corresponding to the macro distance burned into the camera module can be read. These code values are used to determine the code value corresponding to the initial position of the motor movement and the code value corresponding to each step of the motor.
In step S840, linear fitting is performed on the two-dimensional data composed of the phase difference and the code value obtained at each calibrated object distance to obtain a slope parameter of a linear relationship corresponding to each calibrated object distance, where the slope parameters of the linear relationships corresponding to at least two calibrated object distances of the plurality of calibrated object distances are different.
For each calibration object distance, the motor moves through the divided equal parts; since the macro code value and the far focus code value have been read and the number of steps is known, the phase difference of the picture and the corresponding code value are obtained at each step of the motor. Linear fitting is performed on the resulting two-dimensional data consisting of phase differences and code values to obtain the slope parameter of the linear relationship corresponding to that calibration object distance. It should be understood that when the phase differences and code values at each calibration object distance are linearly fitted, either part or all of the obtained data may be used for the fitting. Processing is performed in this manner at all calibration object distances in turn, yielding a slope parameter of a linear relationship for each calibration object distance, where the slope parameters corresponding to at least two of the calibration object distances are different.
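A minimal sketch of the division into equal parts and the linear fit described above, using synthetic phase differences with a known slope (a real calibration would use measured data):

```python
def step_codes(inf_code, macro_code, n_steps):
    """Code value after each of the n_steps equal motor steps (far focus to macro)."""
    span = macro_code - inf_code
    return [inf_code + round(i * span / n_steps) for i in range(n_steps + 1)]

def fit_slope(codes, pds):
    """Least-squares slope of phase difference versus code value (the DCC slope)."""
    n = len(codes)
    mx = sum(codes) / n
    my = sum(pds) / n
    num = sum((x - mx) * (y - my) for x, y in zip(codes, pds))
    den = sum((x - mx) ** 2 for x in codes)
    return num / den

codes = step_codes(100, 600, 10)          # 10 equal parts -> 11 motor positions
pds = [0.04 * (c - 350) for c in codes]   # synthetic PDs with a true slope of 0.04
slope = fit_slope(codes, pds)             # recovered DCC slope for this distance
```

Running the same fit at each calibration object distance yields one slope parameter per distance, which together form the calibrated DCC.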
In step S850, the slope parameter of the linear relationship corresponding to the plurality of calibration object distances is determined as the defocus conversion coefficient of the nonlinear camera module.
The defocus conversion coefficient is the coefficient that converts the phase difference into the motor displacement offset. In the prior art, the defocus conversion coefficient contains a single slope parameter; in the embodiment of the present application, it contains the slope parameters of a plurality of linear relationships.
For ease of understanding, the following description is made with reference to the specific embodiment shown in fig. 13. First, in a test environment with a calibration object distance of 20 cm, the entire motor stroke (the effective focusing stroke of the camera module) is divided into a plurality of equal parts, for example 10 or 13, and the motor correspondingly moves 10 or 13 steps. At each step, a picture of the subject at the 20 cm calibration object distance is taken and the phase difference is calculated. The code values of the far focus and the macro distance are known and the number of steps is known, so the code value corresponding to each step is known; thus the phase differences and the corresponding code values at the 20 cm calibration object distance are obtained. Curve fitting is performed on this series of data to obtain the fitted linear relationship between the phase difference and the code value, completing the DCC calibration at the 20 cm calibration object distance. The above process is then repeated in a test environment with a calibration object distance of 5 cm to complete the DCC calibration at 5 cm. The slope parameter of the linear relationship obtained at the calibration object distance of 20 cm and the slope parameter obtained at 5 cm together constitute the calibrated DCC of the nonlinear camera module.
It should be noted that the calibration process may be performed by a calibration library storing a software program.
Optionally, the defocus conversion coefficient calibration process further includes: determining an object distance applicability interval or a code value applicability interval of the linear relationship corresponding to each of the plurality of calibration object distances. In this way, when PDAF is performed at a certain object distance, the slope parameter of the appropriate linear relationship can be selected for conversion according to the corresponding applicability interval.
After DCC is calibrated, the method for calibrating a defocus conversion coefficient provided in the embodiment of the present application further includes: and verifying the defocusing conversion coefficient under a plurality of non-calibrated object distances.
Fig. 16 shows a schematic flowchart of a defocus transformation coefficient verification method provided in an embodiment of the present application.
As shown, in step S910, a plurality of non-calibrated object distances are determined according to the relationship curve between the motor stroke and the object distance.
Preferably, the plurality of non-calibration object distances cover the object distance interval between the far focus and the macro distance of the camera module.
In step S920, a motor of the non-linear camera module is set at an initial position at each of the plurality of non-calibrated object distances.
This step may be executed repeatedly, and the initial positions are selected so as to cover the full motor stroke of the camera module.
In step S930, the phase difference of the picture when the motor is at the initial position is acquired, and one phase detection autofocus is performed according to the phase difference of the picture at the initial position and the defocus conversion coefficient to obtain a first in-focus position of the motor.
This step can be regarded as one phase detection autofocus run after the motor is given its initial position. In the phase detection autofocus process, the defocus conversion coefficient is used to convert the phase difference of the picture at the initial position into a motor displacement offset, i.e. the defocus distance, and the motor is then commanded to move by the defocus distance to the first in-focus position. The first in-focus position of the motor may be understood as the position of the motor when, as calculated by the camera module in the PDAF, a sharp image is obtained.
In step S940, the motor of the nonlinear camera module is set at the same initial position, and contrast detection autofocus is performed to obtain a second in-focus position of the motor.
This step can be regarded as one contrast detection autofocus run given the same initial position of the motor. In the embodiment of the present application, the motor is assumed to reach the actual in-focus position when contrast detection autofocus is performed, so this position is taken as the reference.
In step S950, it is determined whether the position difference between the first in-focus position and the second in-focus position satisfies a preset error.
In the embodiment of the present application, the first in-focus position and the second in-focus position may be represented by the actual position of the focus or of the motor, in which case the position difference between them is expressed as a distance and the preset error is likewise in units of distance, for example micrometers. The first and second in-focus positions may also be represented by driving currents or code values, in which case the position difference is expressed as a current difference or a code difference, and the unit of the preset error corresponds to the unit of the current (for example milliamperes) or of the code value.
For example, in the embodiment of the present application, the position difference between the first in-focus position and the second in-focus position is evaluated by comparing the code value corresponding to the first in-focus position with the code value corresponding to the second in-focus position.
Therefore, after the DCC calibration is completed, verification at non-calibration object distances checks the soundness of the calibration library and of the chosen calibration object distances. That is, a plurality of non-calibration object distances are selected such that they cover the calibrated multi-segment linear relationships, so that each calibrated linear segment can be verified.
In an exemplary embodiment of the present application, non-calibration object distances of 30 cm/4 cm/3 cm are selected. Taking the non-calibration object distance of 30 cm as an example, two initial code values are first set, the phase differences corresponding to the two initial code values being of opposite sign. According to an initial code value, the motor drives the lens to the corresponding position, a picture of the subject is taken at that position, and the phase difference is calculated. The camera module obtains the defocus distance from the calculated phase difference and the calibrated DCC obtained in step S850; for convenience of verification, the defocus distance is expressed as a code value, i.e. from the phase difference and the calibrated DCC an incremental code value corresponding to the distance the motor must move to reach the in-focus position is obtained. The calculated defocus distance (incremental code value) is added to the initial code value to obtain the actual code value corresponding to the position actually reached by the motor.
In addition, CDAF is used to focus at the non-calibration object distance of 30 cm, and the code value of the position of the motor after focusing is completed is obtained. The actual code value reached by the motor in the PDAF process is compared with the code value of the in-focus position reached by the motor in the contrast focusing (CDAF) process; an error between the two of less than ±120 DAC (i.e. 12 bits) is considered to meet the requirement. The same verification process is performed at the non-calibration object distances of 4 cm and 3 cm, and the description is not repeated.
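The arithmetic of this verification step can be sketched as follows; the DCC value, the initial code and the CDAF result below are illustrative numbers only, and the DCC is taken as the slope of the phase-difference-versus-code line, so the incremental code value is the phase difference divided by the slope:

```python
def pdaf_target_code(initial_code, phase_diff, dcc):
    """Code value reached after one PDAF move (defocus expressed as a code delta)."""
    return initial_code + round(phase_diff / dcc)

def passes_verification(pdaf_code, cdaf_code, max_err=120):
    """The +/-120-code acceptance criterion used in the text."""
    return abs(pdaf_code - cdaf_code) <= max_err

# Illustrative numbers: DCC slope of 0.04 PD per code, initial code 150,
# measured phase difference 8.0 -> the motor should move +200 codes.
reached = pdaf_target_code(150, 8.0, 0.04)
ok = passes_verification(reached, 370)  # CDAF ended at code 370 (illustrative)
```

The same comparison is repeated for each initial code value and each non-calibration object distance.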
Because a multi-segment linear relationship is calibrated, the object distance interval, phase difference interval or code value interval applicable to each linear segment can be determined in the verification process. When an initial code value of the motor is given, the corresponding DCC can be selected for verification according to the interval in which the initial code value lies.
It should be noted that the verification process may be performed by a calibration library storing a software program. The calibration library stores calibrated DCCs corresponding to multiple linear relations, wherein each linear relation can set a code interval suitable for the DCCs.
Fig. 17 shows the linearity of the DCC calibrated at each calibration object distance. As can be seen from the figure, at the calibration object distance of 20 cm the linearity of the calibrated DCC is about 0.94-0.98; at the calibration object distance of 5 cm it is about 0.85-0.95.
Fig. 18 shows the initial code values given at the non-calibration object distances. The initial code values given at each non-calibration object distance cover the entire motor stroke.
Fig. 19 shows the code errors of the verification at the non-calibration object distances. As can be seen from the figure, the code errors verified at the non-calibration object distances of 30 cm/4 cm/3 cm are within ±120, meeting the requirement and showing that the two-segment calibration and verification is feasible.
After the DCC calibration of the nonlinear camera module is completed, the calibrated and verified DCC can be used for the nonlinear camera module to perform PDAF.
Fig. 20 shows a schematic flowchart of a phase detection autofocus method provided in an embodiment of the present application. The phase detection automatic focusing method provided by the embodiment of the application is applied to the nonlinear camera module.
As shown in the figure, in step S1010, a phase difference of the picture when the motor of the non-linear camera module is located at the current position is obtained.
In step S1020, the defocus distance is calculated from the phase difference and a defocus conversion coefficient, which includes slope parameters of a plurality of linear relationships.
Optionally, the slope parameters of the plurality of linear relationships have object distance applicability intervals or code value applicability intervals. In this step, the slope parameter of one linear relationship may be selected from among the slope parameters of the plurality of linear relationships according to the applicable interval, and the defocus distance is then determined from the phase difference and the selected slope parameter.
In step S1030, the motor of the nonlinear camera module moves to the in-focus position according to the command corresponding to the defocus distance.
The above description only describes the differences between the phase detection autofocus method provided by the embodiments of the present application and the existing PDAF, and other parts not described in detail can refer to the related description above.
When using the calibration library for PDAF, the specific procedure may be as follows:
acquiring the phase difference of the pictures when the motor is positioned at the current position;
inputting the phase difference into the calibration library, and calculating the defocus distance using the calibration library and the DCC data;
and issuing a corresponding movement instruction to the motor according to the defocus distance to complete focusing.
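Assuming a calibration library that stores one slope parameter per code value applicability interval (the intervals and slopes below are invented for illustration, and the DCC is again taken as the slope of the phase-difference-versus-code line), the procedure above can be sketched as:

```python
# Hypothetical calibration library: one DCC slope per code applicability interval.
CALIB = [
    {"code_range": (100, 430), "dcc": 0.04},  # e.g. calibrated at 20 cm (illustrative)
    {"code_range": (430, 600), "dcc": 0.07},  # e.g. calibrated at 5 cm (illustrative)
]

def select_dcc(code, calib=CALIB):
    """Pick the slope whose code applicability interval contains `code`."""
    for entry in calib:
        lo, hi = entry["code_range"]
        if lo <= code <= hi:
            return entry["dcc"]
    raise ValueError("code outside the calibrated range")

def pdaf_move(current_code, phase_diff, calib=CALIB):
    """One PDAF step: convert the phase difference into a target motor code."""
    dcc = select_dcc(current_code, calib)
    return current_code + round(phase_diff / dcc)

target = pdaf_move(200, 6.0)  # illustrative: PD 6.0 with DCC 0.04 -> +150 codes
```

Selecting the slope by interval is what allows a single library to serve the whole motor stroke of a nonlinear camera module.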
In the embodiment of the application, the defocus conversion coefficient is calibrated using a plurality of calibration object distances instead of a single object distance, so that the slope parameters of a plurality of linear relationships can be obtained. Thus, when the nonlinear camera module performs phase detection autofocus, a suitable slope parameter can be selected to convert the phase difference into the motor displacement offset, so that the position reached by the motor is closer to the in-focus position. This improves the precision of phase detection autofocus for the nonlinear camera module, achieves fast and accurate focusing, and improves the user experience. When hybrid focusing combining phase detection autofocus and contrast detection autofocus is adopted, the focusing time of the contrast detection autofocus can be reduced, achieving fast focusing.
It should be understood that the above embodiments describe the relevant contents only taking a camera module with a liquid lens as an example. In some embodiments of the present application, other camera module structures are also suitable for the above method.
For example, when the camera module adopts a variable-focus lens such as a liquid crystal lens, a liquid lens or a tunable lens, the focal length of the lens has a nonlinear relationship with the current, and the above DCC calibration method is also applicable to the camera module.
For another example, when the camera module adopts a deformable mirror or prism, the focal length of the lens has a nonlinear relationship with the current, and the above DCC calibration method is also applicable to the camera module.
For another example, when the camera module adopts a motor system with nonlinear motion, the motor stroke has a nonlinear relationship with the driving current, and the above DCC calibration method is also applicable to the camera module.
It should be noted that the motor stroke in the embodiments of the present application may be understood as the stroke the motor travels from the initial motor position to the motor position corresponding to the driving current. Each driving current corresponds to one motor stroke, and each motor stroke is measured from the same initial motor position.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A defocus conversion coefficient calibration method, applied to a nonlinear camera module, comprising:
determining a plurality of calibration object distances;
dividing the effective focusing stroke of the nonlinear camera module into a plurality of equal parts, each equal part being traversed by one step of a motor of the nonlinear camera module;
acquiring, at each of the plurality of calibration object distances, a phase difference of a picture taken at each step of the motor of the nonlinear camera module and a code value indicating the corresponding motor stroke or the current corresponding to the motor stroke;
performing linear fitting on the two-dimensional data consisting of the phase differences and the code values obtained at each calibration object distance to obtain a slope parameter of a linear relationship corresponding to each calibration object distance, wherein the slope parameters of the linear relationships corresponding to at least two of the plurality of calibration object distances are different;
and determining the slope parameters of the linear relationships corresponding to the plurality of calibration object distances as the defocus conversion coefficient of the nonlinear camera module.
2. The calibration method according to claim 1, wherein said determining a plurality of calibration object distances comprises:
acquiring a relation curve between the motor stroke and the object distance of the nonlinear camera module, wherein the relation curve between the motor stroke and the object distance is nonlinear;
determining a plurality of test object distances according to a relation curve between the motor stroke and the object distances;
driving the motor to move according to a preset step length under each test object distance of the plurality of test object distances, and acquiring the phase difference and the corresponding code value of the picture of the motor at each step;
and determining the plurality of calibrated object distances according to the trend of the relation curve between the phase difference and the code value of the plurality of test object distances.
3. The calibration method according to claim 2, wherein determining the plurality of calibration object distances according to the trend of the relationship curve of the phase differences of the plurality of test object distances and the code values comprises:
and selecting one test object distance as a calibration object distance from the test object distances with consistent linearity of the relation curves of the phase difference and the code value.
4. The calibration method according to claim 2 or 3, further comprising:
and reading a code value corresponding to a far focus and a code value corresponding to a macro distance burned into the nonlinear camera module.
5. The calibration method according to any one of claims 1 to 4, further comprising:
and determining an object distance applicable region or a code value applicable region of the linear relation corresponding to each of the plurality of calibration object distances.
6. The calibration method according to any one of claims 1 to 5, further comprising:
and verifying the defocusing conversion coefficient under a plurality of non-calibrated object distances.
7. The calibration method according to claim 6, wherein the verifying the defocus conversion coefficients at a plurality of non-calibration object distances comprises:
determining a plurality of non-calibrated object distances according to a relation curve between the motor stroke and the object distance;
setting a motor of the non-linear camera module at an initial position at each of the plurality of non-calibrated object distances;
acquiring a phase difference of a picture when the motor is at the initial position, and performing one phase detection autofocus according to the phase difference at the initial position and the defocus conversion coefficient to obtain a first in-focus position of the motor;
setting the motor of the nonlinear camera module at the initial position, and performing contrast detection autofocus to obtain a second in-focus position of the motor;
determining whether a position difference between the first in-focus position and the second in-focus position satisfies a preset error.
8. The calibration method according to claim 7, wherein the plurality of non-calibration object distances cover an object distance interval between a far focus and a macro of the camera module.
9. The calibration method according to claim 7 or 8, wherein the initial position covers a full motor stroke of the camera module.
10. A calibration method according to any one of claims 1 to 9, wherein the non-linear camera module satisfies any one or more of the following conditions:
the motor stroke of the nonlinear camera module is in a nonlinear relation with the driving current;
the focal length of the nonlinear camera module is in a nonlinear relation with the driving current;
the image distance of the nonlinear camera module is in a nonlinear relation with the driving current;
the phase difference of the nonlinear camera module is in nonlinear relation with the driving current.
11. The calibration method according to any one of claims 1 to 10, wherein the non-linear camera module comprises one or more of the following lenses:
liquid lens, liquid crystal lens, adjustable lens, deformable mirror, deformable prism.
12. A phase detection auto-focusing method, applied to a nonlinear camera module, the method comprising:
acquiring a phase difference of a picture when the motor of the nonlinear camera module is located at a current position;
calculating a defocus distance according to the phase difference and a defocus conversion coefficient, wherein the defocus conversion coefficient comprises slope parameters of a plurality of linear relations; and
moving the motor of the nonlinear camera module to an in-focus position according to an instruction corresponding to the defocus distance.
13. The method according to claim 12, wherein each of the slope parameters of the plurality of linear relations has an applicable object-distance interval or an applicable code-value interval, and
the calculating the defocus distance according to the phase difference and the defocus conversion coefficient comprises:
selecting a slope parameter of one linear relation from the slope parameters of the plurality of linear relations according to the applicable object-distance interval or the applicable code-value interval; and
determining the defocus distance according to the phase difference and the selected slope parameter.
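The slope selection of claim 13 can be sketched as an interval lookup. The table layout, function names, and code values below are illustrative assumptions, not taken from the claims:

```python
def select_slope(motor_code, slope_table):
    """slope_table: list of (code_lo, code_hi, slope) entries; each slope
    parameter is valid on its own applicable code-value interval."""
    for lo, hi, slope in slope_table:
        if lo <= motor_code <= hi:
            return slope
    raise ValueError(f"motor code {motor_code} is outside every calibrated interval")

def defocus_distance(phase_diff, motor_code, slope_table):
    """Defocus distance = phase difference x the slope parameter selected
    for the interval containing the current motor code."""
    return phase_diff * select_slope(motor_code, slope_table)
```

The same lookup applies unchanged if the table is keyed by object-distance intervals instead of code-value intervals.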
14. The method according to claim 12 or 13, wherein the nonlinear camera module satisfies any one or more of the following conditions:
the motor stroke of the nonlinear camera module is in a nonlinear relation with the driving current;
the focal length of the nonlinear camera module is in a nonlinear relation with the driving current;
the image distance of the nonlinear camera module is in a nonlinear relation with the driving current;
the phase difference of the nonlinear camera module is in a nonlinear relation with the driving current.
15. The method according to any one of claims 12 to 14, wherein the nonlinear camera module comprises one or more of the following lenses:
liquid lens, liquid crystal lens, adjustable lens, deformable mirror, deformable prism.
16. A camera module, characterized in that the defocus conversion coefficient calibration method according to any one of claims 1 to 11 is applied thereto.
17. A camera module, characterized in that the phase detection auto-focusing method according to any one of claims 12 to 15 is applied thereto.
18. An electronic device, characterized by comprising the camera module according to claim 16 or 17.
CN201911320430.0A 2019-12-19 2019-12-19 Defocus conversion coefficient calibration method, PDAF method and camera module Active CN113014790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911320430.0A CN113014790B (en) 2019-12-19 2019-12-19 Defocus conversion coefficient calibration method, PDAF method and camera module

Publications (2)

Publication Number Publication Date
CN113014790A true CN113014790A (en) 2021-06-22
CN113014790B CN113014790B (en) 2022-06-10

Family

ID=76382266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911320430.0A Active CN113014790B (en) 2019-12-19 2019-12-19 Defocus conversion coefficient calibration method, PDAF method and camera module

Country Status (1)

Country Link
CN (1) CN113014790B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5497209A (en) * 1993-12-28 1996-03-05 Nikon Corporation Automatic focusing device and automatic focusing method
JP2008102537A (en) * 2007-11-08 2008-05-01 Nikon Corp Camera equipped with focus detecting device
CN102089696A (en) * 2008-07-15 2011-06-08 佳能株式会社 Focal point adjusting apparatus, image-taking apparatus, interchangeable lens, conversion coefficient calibrating method, and conversion coefficient calibrating program
CN103676082A (en) * 2012-09-11 2014-03-26 索尼公司 Focus detection device, imaging apparatus, and method of controlling focus detection device
CN105391932A (en) * 2014-08-20 2016-03-09 佳能株式会社 Image processing apparatus, image processing apparatus control method, image pickup apparatus, and image pickup apparatus control method
CN106556960A (en) * 2015-09-29 2017-04-05 宁波舜宇光电信息有限公司 Out of focus conversion coefficient verification method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727093A (en) * 2021-08-23 2021-11-30 重庆市天实精工科技有限公司 Method for burning DAC (digital-to-analog converter) by USB (universal serial bus) camera
WO2023226548A1 (en) * 2022-05-25 2023-11-30 惠州Tcl移动通信有限公司 Lens focusing method and apparatus, and electronic device and computer-readable storage medium
CN116392058A (en) * 2023-04-18 2023-07-07 极限人工智能有限公司 Automatic focusing method and system for electronic endoscope
CN116392058B (en) * 2023-04-18 2024-08-13 极限人工智能有限公司 Automatic focusing method and system for electronic endoscope
CN118042110A (en) * 2024-03-06 2024-05-14 荣耀终端有限公司 Focusing evaluation method and electronic equipment
CN118301474A (en) * 2024-06-03 2024-07-05 浙江大华技术股份有限公司 Image forming method, system, electronic device and computer readable storage medium
CN118301474B (en) * 2024-06-03 2024-08-30 浙江大华技术股份有限公司 Image forming method, system, electronic device and computer readable storage medium

Also Published As

Publication number Publication date
CN113014790B (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN113014790B (en) Defocus conversion coefficient calibration method, PDAF method and camera module
US8400554B2 (en) Lens barrel and imaging device
US8339503B2 (en) Lens barrel and imaging device
US20180007253A1 (en) Focus detection apparatus, focus control apparatus, image capturing apparatus, focus detection method, and storage medium
CN112740650B (en) Image pickup apparatus
US8619161B2 (en) Lens barrel and imaging device
EP2993506B1 (en) Interchangeable lens apparatus, image capturing apparatus and system
US8848096B2 (en) Image-pickup apparatus and control method therefor
US20220187509A1 (en) Enhanced imaging device using liquid lens, embedded digital signal processor, and software
JP2017187693A (en) Image blur correction device and control method thereof, program, and storage medium
KR20210151464A (en) Camera module
CN115561881A (en) Camera module and electronic equipment
CN112394536A (en) Optical anti-shake device and control method
US10088654B2 (en) Lens device and correction method for lens device
JP2019120886A (en) Image blur correction device and method for controlling the same
CN217406638U (en) Camera module and electronic device
KR20020008194A (en) Targetable autofocus system
US12022195B2 (en) Camera device calibration method using samplingpoints and camera module
Galaom Integration of a MEMS-based Autofocus Actuator into a Smartphone Camera
CN220252271U (en) Long-focus lens module and electronic equipment
KR102323136B1 (en) 3D EDOF scanning apparatus adapting flow scan
CN115128795B (en) Lens assembly and electronic equipment
Gutierrez et al. Auto-focus technology
JP2017134322A (en) Lens device
US20200073079A1 (en) Optical apparatus, control method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant