CN118042110A - Focusing evaluation method and electronic equipment - Google Patents

Focusing evaluation method and electronic equipment

Info

Publication number
CN118042110A
CN118042110A (application CN202410252387.3A)
Authority
CN
China
Prior art keywords
image
frame
camera module
value
motor
Prior art date
Legal status
Pending
Application number
CN202410252387.3A
Other languages
Chinese (zh)
Inventor
吴东海
眭新雨
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202410252387.3A
Publication of CN118042110A


Landscapes

  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

The application discloses a focusing evaluation method and an electronic device. The electronic device includes a first camera module. The method includes: the electronic device obtains first PD values of X frames of second images; the electronic device calculates an evaluation chart based on the first PD values and the motor position corresponding to each frame when the X frames of second images were acquired, the evaluation chart being used to evaluate the focusing performance of the first camera module. Each second image is a sub-image obtained by dividing a first image; the X frames of second images are located at the same position in the respective X frames of first images; the X frames of first images are PD raw images acquired in sequence by the first camera module as the motor pushes the focus X times with a first motor step; the motor position of each second image is the motor position after the corresponding push; X is an integer greater than 1. With the embodiment of the application, the focusing performance of a camera module can be accurately evaluated.

Description

Focusing evaluation method and electronic equipment
Technical Field
The present application relates to the field of image capturing technologies, and in particular, to a focusing evaluation method and an electronic device.
Background
During photographing, an electronic device focuses well under sufficient light. With phase detection autofocus (PDAF), the electronic device can easily determine an accurate in-focus position and the focusing result is good. However, when the light in the shooting environment is insufficient, PDAF cannot focus accurately: the captured image is noisy, the quality of the raw image is poor and cannot be recovered later by software processing, so the user's photographs are poor and the shooting experience suffers.
Disclosure of Invention
The embodiment of the application provides a focusing evaluation method and electronic equipment, which can accurately evaluate the focusing performance of a camera module.
In a first aspect, an embodiment of the present application provides a focus evaluation method applied to an electronic device that includes a first camera module. The method includes: the electronic device obtains first PD values of X frames of second images; the electronic device calculates an evaluation chart based on the first PD values and the motor position corresponding to each frame when the X frames of second images were acquired, the evaluation chart being used to evaluate the focusing performance of the first camera module. Each second image is a sub-image obtained by dividing a first image; the X frames of second images are located at the same position in the respective X frames of first images; the X frames of first images are PD raw images acquired in sequence by the first camera module as the motor pushes the focus X times with a first motor step; the motor position of each second image is the motor position after the corresponding push; X is an integer greater than 1.
During acquisition of the first images, the first camera module is placed in a preset shooting environment. The value of X is related to the precision of the motor in the camera module: the more scale steps the motor has, the larger X can be.
In this embodiment of the application, under ideal focusing conditions the in-focus motor position and the PD value vary linearly at any ambient brightness. As the lens position is moved linearly (i.e., the motor position changes linearly) as described above, PD values are obtained and an evaluation chart of motor position versus PD value is calculated. The evaluation chart can accurately evaluate the focusing performance of the camera module.
In a possible implementation, when the evaluation chart includes a scatter plot of the first PD value against the motor position corresponding to each of the X frames of second images, the method further includes: the electronic device performs a linear regression fit on the scatter plot and draws a fitted line; the electronic device calculates a first error value based on the scatter plot and the fitted line, the error value being used to evaluate the focusing performance of the first camera module. Under ideal focusing, the in-focus motor position and the PD value vary linearly at any ambient brightness; by moving the lens position linearly (the motor position changes linearly), obtaining PD values and measuring their linearity error, the focusing performance of the camera module can be measured accurately.
The larger the error value is, the worse the focusing performance of the first camera module is.
In one possible embodiment, the error value comprises a root mean square error (RMSE) and/or a linear regression error. The RMSE is a more accurate measure than the linear regression error.
In one possible implementation, the ambient brightness when the first camera module shoots is 0-10 lux. Under insufficient ambient light the camera module is tested harder and focusing problems are more pronounced, so differences in the collected error data are more obvious, ensuring accurate comparison of focusing performance and reliable judgment.
In a possible implementation, when the first camera module is placed under K different ambient brightnesses, the evaluation chart includes K polylines of the first PD value against the motor position corresponding to each of the X frames of second images, the K polylines being used to evaluate the focusing performance of the first camera module; the K polylines correspond to the K different ambient brightnesses; K is an integer greater than 1. In this way, the linearity differences and the like under different brightness environments can be compared to rank different camera modules, ensuring accurate focusing evaluation and a reliable evaluation basis.
In one possible implementation, the K different ambient brightness scenes include at least a dim-light scene and a bright-light scene; in the dim-light scene, the ambient brightness when the first camera module shoots is 0-3 lux; in the bright-light scene, the ambient brightness when the first camera module shoots is greater than 50 lux. Comparing at least a bright environment and a dark environment exposes a clear difference in linearity, improving the contrast of the comparison and ensuring accurate evaluation.
In one possible embodiment, the evaluation chart includes a first polyline and a second polyline; the first polyline is plotted from the motor positions and first PD values of the un-normalized X frames of second images; the second polyline is plotted from the motor positions and first PD values of the normalized X frames of second images; the first polyline and the second polyline are used to evaluate the focusing performance of the first camera module. For a camera module with good focusing performance the hardware image quality is good and normalization makes little difference, whereas for a camera module with poor focusing performance the difference after normalization is obvious; this ensures the reliability and comprehensiveness of the focusing evaluation and a scientific basis for selecting the camera module.
The more consistent the first polyline and the second polyline are, the better the focusing performance of the first camera module; the larger the difference between them, the worse the focusing performance.
In one possible embodiment, the method further includes: the electronic device obtains the un-normalized X frames of second images; the electronic device splits each un-normalized second image into a left image and a right image; the electronic device normalizes the left image and the right image separately based on a Mean-Std normalization algorithm to obtain the normalized X frames of second images. Normalizing the second images in this way can reduce the error value and improve the quality of the captured image.
The un-normalized X frames of second images are the images obtained by dividing the first images.
In one possible implementation, after one frame of first image is divided, M*N frames of second images are obtained, the position coordinate of a second image in the first image being (i, j), with i an integer from 1 to N and j an integer from 1 to M. The X frames of second images being located at the same position in the respective X frames of first images means: the position coordinates of the X frames of second images in their first images are the same. In this way the first image can be divided quickly and effectively and the position of each second image relative to the first image determined, ensuring the evaluation chart can be obtained conveniently and efficiently.
In one possible embodiment, the larger M*N is, the larger the error value. M*N can therefore be adjusted to increase the contrast between different camera modules, serving as an adjustment basis and ensuring accurate tuning of M*N.
In one possible embodiment, the method further includes: the electronic device adjusts M and/or N to obtain M1 and N1; the electronic device divides the X frames of first images based on M1 and N1 to obtain X*M1*N1 frames of second images; the X*M1*N1 frames of second images are used to continue calculating the evaluation chart. The finer the M*N division, the higher the demands on focusing detail and the better the comparison effect. When two camera modules both perform well and their focusing quality cannot be distinguished, the values of M and N can be increased to improve the contrast, ensuring accurate comparison and reliable evaluation.
In one possible implementation, the object distance when the camera module captures the first image is 0.8-1 m.
In a second aspect, embodiments of the present application provide an electronic device including one or more processors and one or more memories; the one or more processors are coupled with the one or more memories, the one or more memories being configured to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the focus evaluation method of the first aspect or any of the possible implementations of the first aspect.
In a third aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform a focus evaluation method as described in the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform a focus evaluation method as described in the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a chip system, which is applied to an electronic device, the chip system including one or more processors configured to invoke computer instructions to cause the electronic device to perform a focus evaluation method as described in the first aspect or any of the possible implementations of the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of a camera module according to an embodiment of the present application;
Fig. 2A to fig. 2D are schematic diagrams of a set of dark light defocus scenes according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a video focus-hunting scene according to an embodiment of the present application;
FIG. 4A is a schematic diagram illustrating PD linearity of an image sensor under different ambient brightness according to an embodiment of the present application;
fig. 4B is a schematic diagram of a pattern of a photographic subject according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a method for focus evaluation according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a first image of an X frame obtained by pushing focus according to a motor step size according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a first image division according to an embodiment of the present application;
fig. 8 is a schematic diagram of a PD raw normalization processing method of a first image according to an embodiment of the present application;
Fig. 9 is a flowchart of a method for calculating a first PD value according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a moving SAD window calculation of a first PD value according to an embodiment of the present application;
FIG. 11 is a schematic diagram showing a linear fitting of first PD values of a second image at different lens positions according to an embodiment of the present application;
FIG. 12 is a partial schematic view of a first image of a PD polyline linear fit and RMSE values in accordance with an embodiment of the present application;
FIGS. 13A and 13B are graphs illustrating motor position versus PD for a set of different ambient brightness levels according to an embodiment of the present application;
FIGS. 14A and 14B are line graphs of motor position and PD for the X frames of second images with and without normalization according to embodiments of the present application;
fig. 15A to 15F are a set of line graphs and RMSE results for two models at m×n of 10×10, 15×15, and 20×20, respectively, according to an embodiment of the present application;
FIG. 16 is a schematic diagram illustrating a simulation method flow of focus evaluation according to an embodiment of the present application;
FIGS. 17A and 17B are diagrams illustrating an evaluation of a simulated set of lens positions and PDs according to an embodiment of the present application;
Fig. 18A to 18E are evaluation diagrams of lens positions and PDs at different PD densities according to an embodiment of the present application;
FIG. 19 is a plot of PD density versus linearity error for an embodiment of the present application;
fig. 20 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this disclosure refers to and encompasses any and all possible combinations of one or more of the listed items.
The embodiment of the application provides a focusing evaluation method and electronic equipment, which can accurately evaluate the focusing performance of a camera module.
Introduction of related technologies of the embodiment of the application:
1: focusing
Focusing: the camera's focusing mechanism changes the distance between the lens and the imaging surface (the image sensor) so that the photographed object is imaged sharply; this process is focusing. Common focus types fall into three categories: phase detection autofocus (PDAF), contrast detection autofocus (CDAF), and laser detection autofocus (LDAF).
PDAF, i.e., phase detection autofocus, obtains a phase difference (PD value) and then derives the motor's in-focus position from a mapping between the phase difference and the motor's movement direction and distance. Pairs of shielded pixels (shield pixels) are inserted at regular positions in the image sensor and can sense the phase difference of the photographed scene when the motor is at its current position. The phase difference and the camera's in-focus position map to each other, so the electronic device can determine the corresponding in-focus motor position from the phase difference and push the motor accordingly to complete focusing. The in-focus position may include the motor movement direction and movement distance.
The electronic device may use the drive code value of the motor in the target camera as the movement-adjustment precision of the focusing distance when determining the target focusing distance; that is, the motor movement direction and distance can be represented by a code value. The code value of the motor represents a quantized current magnitude, i.e., the magnitude of the drive current. This current translates into a corresponding motor thrust, which pushes the camera motor. The camera motor is generally fixed to the lens assembly, so it can push the lens to move and thereby change the image distance. During focusing, after the electronic device determines the code value based on the PD, the motor is pushed by the corresponding code value, also referred to as the motor stroke. The motor stroke, the phase difference, and the drive current are all in linear relation; in a sense, the motor stroke can be understood as the magnitude of the drive current.
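As a minimal sketch of these linear relationships (the coefficient value and function name below are illustrative assumptions, not taken from any specific camera module):

```python
# Sketch of the linear PD -> motor-code mapping described above.
# All coefficients are illustrative assumptions, not real module data.

DCC = 0.5  # hypothetical defocus conversion coefficient: codes per PD unit

def pd_to_target_code(current_code: int, pd: float) -> int:
    """Convert a measured phase difference into a target motor code.

    The defocus (lens travel, in code units) is taken to be linear in
    the PD value, and the motor stroke linear in the drive-current code,
    matching the linear relations stated in the text."""
    defocus_codes = DCC * pd  # PD -> defocus, linear via the DCC
    return round(current_code + defocus_codes)

# Example: at code 300 with PD = +40, the motor would be pushed to code 320.
print(pd_to_target_code(300, 40.0))
```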
2: Structure of camera module
Fig. 1 is a schematic view of a camera module according to an embodiment of the present application. As shown in fig. 1, the camera module is used for capturing images, and the camera module can be disposed in an electronic device. In the embodiment of the application, the camera module is an evaluation object of focusing performance.
As shown in fig. 1, the camera module may include a lens, an image sensor (sensor), a motor, and an ISP module. The camera module may also include other modules, which the present application is not limited to.
The lens may receive the light and focus the light onto the image sensor.
The motor (i.e., voice coil motor, VCM) can push the lens so that the image at the image sensor becomes clear, completing focusing. A VCM relies on the proportional relationship between drive current and motor travel: starting from the start-up current, the rise in current is proportional to the stroke distance that can be driven.
The image sensor, i.e. the photosensitive chip, is the core device of the camera module, and can convert optical signals into electrical signals. The RAW image is the original data of the image sensor converting the captured light source signal into a digital signal.
An image signal processor (ISP) mainly performs digital image processing: the raw data collected by the sensor is converted into a digital signal by AD conversion, and after ISP processing the output image format is YUV. The ISP can also perform automatic exposure control, automatic gain control, automatic white balance, color correction, gamma correction, dead-pixel removal and the like, without limitation.
In PDAF, some of the pixels in the image sensor are phase detection pixels provided in pairs (hereinafter, a pixel pair): one pixel of the pair is shielded on its left side (left shield) and the other on its right side (right shield). Of the imaging beam, only the right-side portion can reach the photosensitive (un-shielded) part of the left-shielded phase detection pixel, and only the left-side portion can reach the photosensitive part of the right-shielded phase detection pixel. The imaging beam is thus divided into left and right parts, and the phase difference PD is obtained by comparing the images formed by the left and right beams. After one exposure, the camera module can obtain an original RAW image and a PD RAW image.
The defocus value is the distance between the imaging plane and the focal plane, i.e., the distance the focusing lens needs to move. The detected phase difference can be converted into a defocus value; the coefficient data applied in this conversion is called the DCC (defocus conversion coefficient). DCC data is preset for modules from different manufacturers.
3: The following description relates to several algorithms
(1) Absolute error sum algorithm (Sum of Absolute Differences, SAD)
SAD is an elementary block-matching algorithm used, for example, in stereo image matching. Its basic idea is to compute the sum of the absolute differences between corresponding pixel values in the left and right pixel blocks.
(2) Mean-Std normalization (also known as zero Mean unit variance normalization)
Mean-Std normalization is a commonly used data normalization method for processing and converting data so that it has a specific mean and standard deviation. It is typically used in machine learning and statistical analysis to ensure the data has certain statistical properties, so that models train or analyses proceed better. Its main purpose is to convert the data to a standard normal distribution (also called the Z distribution) with a mean of 0 and a standard deviation (std) of 1.
The normalization is computed from the mean and standard deviation (std) of the original dataset. The normalization formula is: the normalized datum is $x_i' = (x_i - \text{mean})/\text{std}$.
Here the mean is the sum of all data points divided by the total number of data points. The standard deviation represents the dispersion of the data: square the difference between each data point and the mean, average these squares, then take the square root, i.e.

$$\text{std} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}$$

where $\bar{x}$ is the mean described above and n is the total amount of raw data.
Mean-Std normalized data is easier to compare with other data because the data share a similar scale. Some machine learning algorithms are sensitive to normalization, which can improve their performance. In addition, normalization can reduce the effect of outliers on a model.
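A minimal sketch of Mean-Std normalization as described above (NumPy-based; the function name is chosen here for illustration):

```python
import numpy as np

def mean_std_normalize(data: np.ndarray) -> np.ndarray:
    """Zero-mean unit-variance normalization: x' = (x - mean) / std."""
    mean = data.mean()
    std = data.std()  # population std (divide by n), matching the formula above
    return (data - mean) / std

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(mean_std_normalize(x))  # result has mean ~0 and std ~1
```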
(3) Root mean square error (Root Mean Square Error, RMSE)
RMSE is a common statistical indicator used to measure the error between a model or estimate and observed values, and is widely used to evaluate the accuracy of a model's predictions or estimates. RMSE first computes the error (residual) of each data point and then takes the root of the mean square of these errors. The RMSE is calculated as

$$\text{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2}$$

where $y_i$ is the actual value of the i-th observation, $\hat{y}_i$ is the estimate of the i-th observation, and m is the total number of data points.
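A corresponding sketch of the RMSE calculation (illustrative only):

```python
import numpy as np

def rmse(actual: np.ndarray, estimate: np.ndarray) -> float:
    """Root mean square error between observations and estimates."""
    return float(np.sqrt(np.mean((actual - estimate) ** 2)))

y     = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.1, 1.9, 3.2])
print(rmse(y, y_hat))  # ~0.141
```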
Fig. 2A to fig. 2D are schematic diagrams of an exemplary disclosed set of dark-light defocus scenes according to an embodiment of the present application.
Fig. 2A and 2B are images captured in bright-light and dark-light scenes, respectively. As shown in fig. 2A, when the light is bright, the face is focused sharply and the focusing effect is good. As shown in fig. 2B, when the light is darker, the electronic device cannot focus on the face during shooting: facial details are lost, the image is noisy, defocus occurs, and the shooting result is poor. Fig. 2C and fig. 2D are schematic diagrams of the focusing effects of different camera modules in a dark scene; different modules can produce different results. As shown in fig. 2C, in darker light camera module A can hardly produce a recognizable focusing result in the captured image, the image noise is severe, and the focusing quality is poor. In contrast, as shown in fig. 2D, camera module B produces fewer noise points in the focused image under the same dark conditions, the image registration is more accurate, and the focusing quality is better. Because the accuracy of the calculated PD decreases in the dark, the variation of the defocus value is affected and focus hunting occurs.
Fig. 3 is a schematic diagram of an exemplary video focus-hunting scene according to an embodiment of the present application. In this video scene, "bellows" hunting occurs: the focusing system cannot focus accurately, so the lens racks back and forth repeatedly over the subject trying to focus. In such a case the recorded video shows abrupt frame jumps, and the shooting experience and result are poor. During video recording the shooting focus changes as the camera moves; the subject is in an effectively infinite-distance scene, but the PD value is unstable, so the defocus value fluctuates greatly and focusing becomes unstable.
Given the above problems in shooting and focusing, the original RAW image is of poor quality and the image quality cannot be improved by post-processing in the camera module. That is, when a camera module is selected at an early stage, its focusing performance is not sufficiently evaluated; affected by the sensor's PD density and the ambient brightness, the PD RAW has large noise in dark or normal environments, making it difficult to optimize image quality by improving the focusing algorithm. Therefore, starting from the shooting quality of the camera module, camera modules of different models are compared by the quality of their focusing performance, the module with the better result is selected for use, and the pictures shot by the electronic device are improved. How to evaluate the focusing performance of a camera module is thus the problem this application addresses.
The application provides a focusing evaluation method and related equipment. The electronic device includes a camera module, and the focusing evaluation result is used to evaluate or compare the focusing performance of camera modules. The electronic device obtains first PD values of X frames of second images; the electronic device calculates an evaluation chart based on the first PD values and the motor position corresponding to each frame when the X frames of second images were acquired, the evaluation chart being used to evaluate the focusing performance of the first camera module. Each second image is a sub-image obtained by dividing a first image; the X frames of second images are located at the same position in the respective X frames of first images; the X frames of first images are PD raw images acquired in sequence by the first camera module as the motor pushes the focus X times with a first motor step; the motor position of each second image is the motor position after the corresponding push; X is an integer greater than 1. In this embodiment of the application, focusing performance can be evaluated from the evaluation chart at the model-selection stage, and camera module models with large noise in dark or normal environments can be identified accurately, so modules can be screened accurately; this ensures good module selection, improves the user's shooting experience, and improves the quality of captured images.
Before focusing evaluation is performed on a camera module, the shooting scene needs to be set according to first scene parameters of a preset scene. The first scene parameters are parameters set for testing the focusing performance of the camera.
The first scene parameters may include the ambient brightness, the shooting distance, and the shooting subject. The first scene parameters fix the relevant factors of the corresponding actual shooting scene.
The ambient brightness of the shots may differ. For example, in one shooting scene the ambient brightness range is set to 0 lux to 10 lux.
Fig. 4A is a schematic diagram showing the PD linearity of an image sensor at different ambient brightnesses according to an embodiment of the present application. As shown in fig. 4A, the PD value changes as the lens position (lensPos, which may also be called the motor position) changes linearly, at different ambient brightnesses. The abscissa is the lens position (or motor position), which can be expressed as a code, lensPos, etc.; the ordinate is the PD value at that lens position. The three PD curves correspond to different ambient brightnesses: polyline 41 is at 200 lux, polyline 42 at 10 lux, and polyline 43 at 1 lux. Comparing the three polylines shows that the lower the ambient brightness, the worse the linearity of the PD polyline as the lens position changes. Worse linearity indicates worse focusing capability of the camera module. The embodiment of the application therefore evaluates and compares the focusing capability of camera modules at various ambient brightnesses, which ensures accurate comparison and makes focusing performance more intuitive to compare.
It should be noted that the PD linearity of the image sensor is the measurable range over which it operates linearly, also called the "nonlinearity error". Linearity is an important index describing the static characteristics of a sensor, premised on the measured input being in a steady state. In this application, linearity can be taken as the degree to which the current polyline deviates from the fitted straight line; the greater the deviation, the worse the linearity.
The shooting distance is the distance between the camera lens and the shooting subject, and can be understood as the object distance. The shooting distance can be set at the 1/3 position between the far and near focus of the electronic device's lens; for example, the shooting distance is 0.8-1 m from the lens.
Further, the electronic device can use a rhombus calibration chart (a checkerboard-like calibration board) as the shooting subject. The subject's pattern should be rich: a monotonous pattern is unfavorable for subsequent evaluation, while the PD computed from a detail-rich image is more accurate, making the focusing evaluation more accurate. Fig. 4B is a schematic diagram of a pattern of a photographic subject exemplarily disclosed in an embodiment of the present application; as shown in fig. 4B, the subject may be a lattice chart.
It should be noted that, to control the shooting scenes for camera modules of various models and ensure rigorous comparison after focusing evaluation, the scene parameters for different camera modules are all set to the same values; that is, different camera modules shoot the same scene.
The following steps are the calculation process before focusing evaluation is carried out on one camera module. All the camera modules to be evaluated need to execute the following steps, and the description of the embodiment of the application is not repeated.
In the set shooting scene, the electronic equipment can carry out focusing evaluation on the shooting modules with different models.
Fig. 5 is a schematic diagram of a method flow of focus evaluation according to an exemplary disclosure of an embodiment of the application. As shown in fig. 5, the evaluation method may include, but is not limited to, the following steps:
S501: the electronic device obtains an X-frame first image of X times of focus pushing with a motor step.
Wherein the first image is a PD RAW image. The first image of the X frames is a PD RAW image obtained by pushing the motor for X times. The motor step size can be mapped into the size of the code moved by pushing the motor each time, the corresponding motor position is reached, the motor step size is a code, and a is a positive integer. The motor full stroke b is a stroke range of the motor, and can represent a focusing range of the motor. The electronic equipment can acquire a frame of first image after pushing the motor once, and X frames of first images are obtained after X times of coke pushing.
Wherein X is the number of times set in advance, and X is an integer greater than 1. X may range from greater than or equal to 100. For example, X is 100, 150, 500, or the like, and is not limited. The more the number of motor scales, the larger the X value which can be set by the electronic equipment, and the X value is related to the self parameter of the camera module. It should be noted that, in order to evaluate the focusing result of different camera modules and reduce the calculation amount, the size of X may be adjusted. If the current advantages and disadvantages can be obviously compared, the value of X can be reduced, and the calculated amount is reduced; if the evaluation results are obviously compared, the X is required to be increased, the distinction of the evaluation results is enhanced, and the reliability of the evaluation results is ensured.
In the case where the motor full stroke b is the product of motor step size and X, i.e. b=a (X-1). If the pushing is sequentially performed in sequence, the motor offset of the pushing between the adjacent X pushing times is uniform. Of course, the above equation does not necessarily hold, and the last push offset may not be the motor step (i.e., the last offset is less than the motor step), which is not limiting in this regard.
Fig. 6 is a schematic diagram of a first image of an X frame acquired by pushing a focus in motor steps according to an embodiment of the present application. As shown in fig. 6, the total stroke of the motor is 0 to (X-1) a-1 codes (the range of the motor position, or the range of the lens position), and the electronic device may push the motor for the first time, then the codes are 0, and obtain the PD RAW of the frame 1; after pushing the motor for the second time, the code is a-1, and PD RAW of the frame 2 is obtained; pushing the motor code for the third time to be 2a-1, and obtaining PD RAW of the frame 3; … …; code is (X-1) a-1 after the X-th push motor, and the PD RAW of frame X is obtained. The above frames 1 to X are the acquired X frame first image.
For example, if the motor full stroke ranges from 0 to 1023 codes, the motor step may be set to 8 codes, and the full stroke is traversed in 129 moves. Of course, the full stroke may instead be 0-399 or 0-799 codes, etc., depending on the specific motor parameters; the embodiment of the application is not limited.
Alternatively, the motor full stroke may be expressed as a physical distance. A typical full stroke is 10 um; with 100 (i.e., X) moves in total, each move is 0.1 um, stepping 0.1, 0.2, 0.3, ... um from far to near.
After each push of the motor, the electronic device can acquire one frame of original RAW image and one PD RAW image through the sensor; that frame of PD RAW is the first image. A PD RAW image consists of the left or right pixel values (gray or luminance values).
To obtain the X frames of first images over the X focus pushes, the electronic device may control another device fitted with the camera module to acquire the X frames of PD raw after the X pushes, or it may control its own camera module to do so; the application is not limited.
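A sketch of the capture loop of S501 under stated assumptions: `push_motor` and `capture_pd_raw` are hypothetical driver hooks, not a real camera API, and the code positions are taken as 0, a, 2a, ... for simplicity (the example in the text uses 0, a-1, 2a-1, ...):

```python
# Sketch of S501: push the motor X times with step a and grab one
# PD RAW frame (first image) per position.

def sweep_focus(push_motor, capture_pd_raw, a: int = 8, x: int = 129):
    """push_motor/capture_pd_raw are hypothetical module-specific hooks."""
    frames, positions = [], []
    for k in range(x):
        code = k * a                     # motor position after the (k+1)-th push
        push_motor(code)
        frames.append(capture_pd_raw())  # one PD RAW frame per push
        positions.append(code)
    return frames, positions
```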
S502: the electronic device divides each first image to obtain M*N second images.
The electronic device may preset several M*N sizes and select a size for the division. For example, M is 10, 15, 20, 30, etc., and N is 10, 15, 20, 30, etc.; M and N are both positive integers from 5 to 100.
Fig. 7 is a schematic diagram of the division of a first image exemplarily disclosed in an embodiment of the present application. The first image shown is the k-th of the X frames of first images, k being an integer from 1 to X. As shown in fig. 7, the electronic device may divide the first image into M*N small images (second images). After the division, each second image has a corresponding position; the position of a second image within the first image can be identified by the matrix coordinate (i, j), where i is an integer from 1 to N and j an integer from 1 to M. In fig. 7, the second image at position (i, j) is the black area of the first image.
The pixel sizes of the M*N second images may be equal or unequal; M and N may be equal or different, and the application is not limited.
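A sketch of the S502 division (pure NumPy; for simplicity it assumes the frame height and width are exactly divisible by N and M, and that i indexes rows and j columns):

```python
import numpy as np

def divide_first_image(first_image: np.ndarray, m: int, n: int) -> dict:
    """Split one first image (PD RAW frame) into M*N second images.

    Returns a dict mapping the position coordinate (i, j) to its tile,
    with i in 1..N and j in 1..M, as in fig. 7."""
    h, w = first_image.shape
    th, tw = h // n, w // m  # tile size; assumes exact divisibility
    tiles = {}
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            tiles[(i, j)] = first_image[(i - 1) * th:i * th,
                                        (j - 1) * tw:j * tw]
    return tiles
```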
Optionally, the electronic device may adjust the values of M and N, say to M1 and/or N1, after which it performs S502 again, dividing each frame of first image into M1*N1 frames of second images, and then continues with S503 and S504. If the polylines formed by multiple camera modules in S504 fluctuate too little and the error values hardly differ, the values of M and N can be increased; if the polylines fluctuate too much and the error values again hardly differ, the values of M and N can be reduced.
During shooting, the smaller the focusing frame, the more easily focusing detail is lost, causing focusing errors; the magnitude of the error can be used to measure the focusing result. The first image therefore needs to be divided, which ensures the accuracy of the focusing evaluation.
S503: the electronic device obtains a first PD value for the X frame second image.
Optionally, before the electronic device acquires the first PD value of the second image, each second image in the first X-frame image is normalized (based on the Mean-Std normalization algorithm), to obtain a normalized second image.
The electronic device performs normalization processing on the second images LEFT PD RAW and RIGHT PD RAW at the first position, so as to obtain normalized second images, and calculates a first PD value of each second image based on the normalized second images.
Fig. 8 is a schematic diagram of a PD raw normalization processing method of a first image according to an exemplary embodiment of the present application.
As shown in fig. 8, the Left and Right PD values in a second image's PD raw are generally arranged in alternating rows or columns, and the second image size is a*b. In the first step, the electronic device splits the data into two PD raws, LEFT PD RAW (left image) and RIGHT PD RAW (right image), each of size a/2*b. In the second step, the electronic device normalizes LEFT PD RAW and RIGHT PD RAW to obtain the normalized LEFT PD RAW and RIGHT PD RAW, respectively. Here a and b are integers greater than 1, and a is even.
In the second step, taking the normalization of LEFT PD RAW as an example: the pixel mean and standard deviation (std) of LEFT PD RAW are computed, and each pixel is transformed as Xi,j' = (Xi,j - mean)/std, where Xi,j is the pixel value at position (i, j) in LEFT PD RAW and Xi,j' is the pixel value at position (i, j) in the normalized LEFT PD RAW; i runs from 1 to a/2 and j from 1 to b. RIGHT PD RAW is normalized in the same way, which is not repeated.
Normalizing the second image resists optical noise, reduces noise, and improves the reliability of the focusing evaluation.
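A sketch combining the split and per-side normalization of fig. 8 (it assumes the Left/Right PD values are interleaved by rows, one of the layouts mentioned above):

```python
import numpy as np

def split_and_normalize(second_image: np.ndarray):
    """Split an interleaved a*b PD RAW tile into left/right images of size
    (a/2)*b, then Mean-Std normalize each side independently."""
    left = second_image[0::2, :].astype(np.float64)   # even rows: LEFT PD RAW
    right = second_image[1::2, :].astype(np.float64)  # odd rows: RIGHT PD RAW
    normalize = lambda img: (img - img.mean()) / img.std()
    return normalize(left), normalize(right)
```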
The electronic device calculates the first PD value of the second image in each first image either directly from the second image or from the normalized second image. The calculation of the first PD value is described below.
The electronic device may calculate the first PD value of each frame of second image using the SAD algorithm. There are X*M*N frames of second images in total, and a PD value is calculated for each.
The calculation is described below for one frame of second image. The SAD algorithm processes LEFT PD RAW and RIGHT PD RAW of the second image: if the normalized second image is used, the normalized LEFT PD RAW and RIGHT PD RAW of fig. 8 are processed directly; if normalization is not used, the second image is first split to obtain LEFT PD RAW and RIGHT PD RAW.
Fig. 9 is a flowchart of an exemplary disclosed method for calculating a first PD value according to an embodiment of the present application. As shown in fig. 9, the manner of calculating the first PD value based on the second image may include, but is not limited to, the steps of:
S5031: the electronics construct a SAD window, set the SAD window at the starting positions in LEFT PD RAW and RIGHT PD RAW, record all pixels within the SAD window each time it is moved, and calculate the SAD result based on the SAD window pixels at the starting positions.
The SAD window size is (f, b), f being less than or equal to a/2. The size of the second image is a/2*b. The starting position of the SAD window may be the SAD window centered at LEFT PD RAW and RIGHT PD RAW, i.e. the SAD window coincides with the center of LEFT PD RAW and RIGHT PD RAW. Of course, the starting position may be other positions, and the present application is not limited thereto.
In the embodiment of the present application, the number of lines of the SAD window is consistent with the number of lines of the second image, and of course, the number of lines of the SAD window may not be consistent, and the number of lines of the SAD window is smaller than the number of lines of the second image, and at this time, the SAD window may move up and down (left and right in the present application) at RIGHT PD RAW, and the size of the SAD window is not limited, and the moving mode is not limited.
The electronic device takes the pixels inside the SAD windows in LEFT PD RAW and RIGHT PD RAW and computes the sum of the absolute differences between the pixels at corresponding positions in the two windows. The SAD result can be expressed as

$$\text{SAD} = \sum_{s=1}^{f}\sum_{t=1}^{b}\left|S(s,t) - T(s,t)\right|$$

where s runs from 1 to f, t runs from 1 to b, S(s,t) is the pixel value at position (s,t) in the SAD window in LEFT PD RAW, and T(s,t) is the pixel value at position (s,t) in the SAD window in RIGHT PD RAW. The SAD result at the initial position may be denoted SAD0.
S5032: the electronic device sequentially moves the SAD window leftwards from the RIGHT PD RAW initial position, records all pixel points in the SAD window moved each time, and calculates SAD results based on the SAD window pixel points moved each time.
The electronic device may be configured with a preset number of moves, the SAD window moving left at most that many times; alternatively, the SAD window may be moved all the way to the left edge of RIGHT PD RAW without limiting the number of moves.
It should be noted that other algorithms may also be used to calculate the PD value from the second image, e.g., the mean absolute difference (MAD), sum of squared differences (SSD), mean squared differences (MSD), normalized cross-correlation (NCC), sequential similarity detection (SSDA), or sum of absolute transformed differences (SATD) algorithms; the approach of fig. 9 is only one of them, and the application is not limited.
Fig. 10 is a schematic diagram of the moving-SAD-window calculation of a first PD value exemplarily disclosed in an embodiment of the present application. As shown in fig. 10, the SAD window is shifted over RIGHT PD RAW with a step of 1, so there is partial pixel overlap before and after each shift. The SAD window on LEFT PD RAW remains unchanged at all times.
For each move, the electronic device records all pixels in the moved SAD window and calculates the SAD result from them; see the description in S5031, which is not repeated. The SAD results of the leftward moves are SAD-1, SAD-2, ..., SAD-h, h results in total, where h is a positive integer.
S5033: the electronic device sequentially moves the SAD window from the RIGHT PD RAW initial position to the right, records all pixel points in the SAD window moved each time, and calculates SAD results based on the SAD window pixel points moved each time.
The execution of S5033 is analogous to S5032 and is not detailed; the SAD results of the rightward moves are SAD1, SAD2, ..., SADh, h results in total. The SAD window on LEFT PD RAW remains unchanged throughout.
Illustratively, as shown in fig. 10, the SAD window reaches the right edge of RIGHT PD RAW after several moves, ending the movement process.
S5032 and S5033 may be executed in either order: S5032 first and then S5033, or the reverse.
S5034: the electronics determine the minimum of all SAD results as the first PD value.
The electronic device may calculate the first PD value as min[SAD-h, ..., SAD-2, SAD-1, SAD0, SAD1, SAD2, ..., SADh].
In this way, X*M*N first PD values are obtained from the second images at each position in each frame of first image.
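A sketch of S5031-S5034 under stated assumptions: the exact window geometry in the text is ambiguous, so this version uses a full-height window of width `win_w` that slides only horizontally, and, following the convention above, returns the minimum SAD result itself as the first PD value:

```python
import numpy as np

def first_pd_value(left: np.ndarray, right: np.ndarray,
                   win_w: int, max_shift: int) -> float:
    """Slide a SAD window over RIGHT PD RAW while the window on
    LEFT PD RAW stays fixed; return the minimum SAD result."""
    h, w = left.shape
    start = (w - win_w) // 2                   # window centered (S5031)
    ref = left[:, start:start + win_w]         # fixed window on LEFT PD RAW
    sads = []
    for shift in range(-max_shift, max_shift + 1):  # left (S5032), right (S5033)
        s = start + shift
        if s < 0 or s + win_w > w:
            continue                           # skip moves past the edges
        cand = right[:, s:s + win_w]
        sads.append(np.abs(ref - cand).sum())  # SAD result for this position
    return float(min(sads))                    # S5034: minimum of all results
```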
S504: the electronic device calculates an evaluation chart based on the first PD value and the motor position corresponding to each frame when the second image of the X frame is acquired.
The evaluation chart is used for evaluating focusing performance of the camera module.
For each position, the electronic device may plot the first PD values of the second images at the same position in the first images against the motor position, obtaining M*N PD point plots; it performs a linear regression fit on the first PD(i, j) values of the second images at the same position across the X frames of first images and calculates the error at position (i, j), with i an integer from 1 to M and j from 1 to N. Fig. 11 is a schematic diagram of an exemplarily disclosed linear fit of the first PD values of a second image at different lens positions according to an embodiment of the present application. As shown in fig. 11, the X frames of second images are located at the same position in the respective X frames of first images; that is, the position coordinates of the X frames of second images in their first images are the same.
In an embodiment of the present application, the evaluation chart may include one or more of the following three charts. The first is a plot of the scatter points formed by the motor positions and first PD values of the X frames of second images together with a fitted line. The second is a line chart of the K polylines formed by the motor positions and first PD values of the X frames of second images under K different brightness environments. The third is a line chart of the two polylines formed by the motor positions and first PD values of the X frames of second images with and without normalization.
Embodiment 1: the evaluation chart includes a plot of the scatter points formed by the motor positions and first PD values of the X frames of second images, together with a fitted line.
Taking position (1, 1) in the X frames of first images as an example, as shown in fig. 11, the first PD values of the second image at position (1, 1) across the X frames of first images are plotted as a scatter plot (a line plot is also possible, without limitation), a linear regression fit is performed, and the error value is calculated from the fitted values and the true values.
The error value at position (i, j) may be an RMSE and/or a linear regression error, or an error calculated in another way; the application is not limited. The ambient brightness when the camera module shoots is a dark-light environment, e.g., between 0 and 10 lux.
During drawing of the evaluation chart, the lens position (which may be represented by a frame number or a code) and the PD value in the X frames of first images are processed to obtain the fitted values and actual values of the linear regression. The RMSE may then be calculated as

$$\text{RMSE} = \sqrt{\frac{1}{X}\sum_{x=1}^{X}\left(y_x - \hat{y}_x\right)^2}$$

where $y_x$ is the actual value, $\hat{y}_x$ is the fitted value, and x runs from 1 to X over the X frames of first images.
It should be noted that a fitted line and actual points of the linear regression can be drawn for each position of the second images within the first image, and M*N RMSE values calculated correspondingly. Fig. 12 is a partial schematic view of PD polyline linear fits and RMSE values for a first image according to an embodiment of the present application. As shown in fig. 12, after linear regression fitting of the M*N second images in the first image, each divided second image yields a fitted line and scatter points, from which its RMSE value is calculated. The more scattered the points, or the larger the fluctuation of the line plot, the larger the error value and the worse the focusing performance of the camera module.
Through the above calculation, focusing performance can be compared and evaluated by the magnitude of the error value.
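A sketch of this per-position evaluation (np.polyfit is used for the linear regression; `pd_values[(i, j)]` is assumed to hold the X first PD values of tile (i, j)):

```python
import numpy as np

def position_rmse(motor_positions, pd_values: dict) -> dict:
    """Fit PD vs. motor position with a line for each position (i, j)
    and return the RMSE of the fit, as in the first evaluation chart."""
    x = np.asarray(motor_positions, dtype=float)  # X motor codes
    out = {}
    for pos, pds in pd_values.items():            # pos = (i, j)
        y = np.asarray(pds, dtype=float)          # X first PD values
        k, c = np.polyfit(x, y, 1)                # linear regression fit
        y_hat = k * x + c                         # fitted line
        out[pos] = float(np.sqrt(np.mean((y - y_hat) ** 2)))  # RMSE
    return out
```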
Table 1 shows the root mean square error (RMSE) evaluated for different camera modules at different ambient brightnesses.
TABLE 1
As shown in table 1, comparing the three models, the RMSE of model 2 is the smallest, and the lower the ambient brightness, the larger the gap between the RMSE of model 2 and that of the other two models. Model 2 therefore has the best focusing performance, and in model selection model 2 is chosen for use.
Table 2 shows the linear regression error evaluated for different camera modules at different ambient brightnesses.
TABLE 2
As shown in table 2, comparing the three models, model 2 has the smallest linear regression error at the same brightness, and the lower the ambient brightness, the larger the gap between model 2 and the other two models. Model 2 therefore has the best focusing performance and is chosen in module selection.
Embodiment 2: the evaluation chart comprises a line graph of K broken lines formed by motor positions of the X-frame second image and the first PD value under K different brightness environments.
And in the shooting scenes with K different environmental brightnesses, S501-S504 are respectively executed on the K different environmental brightnesses, so that K folding lines are obtained. K is an integer greater than 1. One folding line may be a folding line formed by sequentially connecting the respective scattered points in the above embodiment 1 in order.
Wherein the K folding lines correspond to the K different environmental brightnesses; the K different environment brightness scenes at least comprise a dark light scene and a bright light scene; in a dim light scene, the ambient brightness of the shooting module is 0-3lux; and under a bright light scene, the ambient brightness shot by the camera module is more than 50lux.
Fig. 13A and 13B are line graphs of motor position versus PD value for a set of different ambient brightnesses, exemplarily shown in an embodiment of the present application. Fig. 13A shows the PD-versus-lens-position polylines of model 1 at 200, 10 and 1 lux: at 200 lux the overall linearity is good and the polyline fluctuation is small; at 10 lux the overall linearity is poor and the fluctuation is large; at 1 lux the linearity is worst and the fluctuation greatest. As the brightness decreases, the polyline linearity changes greatly and the fluctuation amplitude increases. Fig. 13B shows the PD polylines of model 2 at 200, 10 and 1 lux: the overall linearity is good and substantially consistent across brightnesses, and although the PD curve fluctuates more as brightness decreases, the fluctuation is far smaller than the variation of the model 1 camera module. Comparing the PD plots, model 2 has better linearity and a better focusing effect. In this evaluation, the more consistent and the better the linearity, the better the camera module's focusing.
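A sketch of how such a chart could be drawn (matplotlib; the brightness keys follow the 200/10/1 lux example above and are illustrative):

```python
import matplotlib.pyplot as plt

def plot_brightness_polylines(motor_positions, pd_by_lux: dict) -> None:
    """Second evaluation chart: one PD-vs-motor-position polyline per
    ambient brightness, e.g. pd_by_lux = {200: [...], 10: [...], 1: [...]}."""
    for lux, pds in pd_by_lux.items():
        plt.plot(motor_positions, pds, label=f"{lux} lux")
    plt.xlabel("motor position (code)")
    plt.ylabel("PD value")
    plt.legend()
    plt.show()
```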
Embodiment 3: the evaluation chart includes a line graph of two polylines, formed by the motor positions and the first PD values of the X-frame second images with and without normalization.
Fig. 14A and 14B are line graphs of motor position versus PD for two camera module models, with and without normalization of the X-frame second images. As shown in Fig. 14A, the dark polyline is calculated without mean-std normalization, and the light polyline with it. Comparison shows that PD linearity is better after normalization; the change introduced by normalization is large for the camera module of Fig. 14A and small for that of Fig. 14B. When the un-normalized image already performs well, normalization has less to optimize, so the camera module of the model in Fig. 14B performs better. Therefore, the more consistent the two polylines with and without normalization, the better the focusing performance of the camera module.
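The mean-std normalization referenced here can be sketched as below; this assumes the standard zero-mean, unit-standard-deviation form, which may differ in detail from the processing described with Figs. 9 and 10:

```python
import numpy as np

def mean_std_normalize(image):
    """Normalize an image to zero mean and unit standard deviation."""
    image = np.asarray(image, dtype=float)
    std = image.std()
    return (image - image.mean()) / std if std > 0 else image - image.mean()

# Applied separately to the split halves of each un-normalized second image:
# left_n, right_n = mean_std_normalize(left), mean_std_normalize(right)
```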
The electronic device may perform at least one of the above three embodiments to evaluate the performance of the camera module. Performing several of them adds judgment dimensions, so the focusing performance of camera modules of different models can be measured more objectively and effectively, ensuring the reliability of the evaluation. For the normalization process, refer to the relevant content of Fig. 9 and Fig. 10, which is not repeated here.
Optionally, the electronic device may divide the first image into M×N second images and execute the above three embodiments to obtain the evaluation charts.
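For illustration, the M×N division of a first image could be implemented as in the following sketch, which assumes the image dimensions are evenly divisible by M and N (grid coordinates here are zero-based, unlike the one-based (i, j) used in the claims):

```python
import numpy as np

def divide_image(first_image, m, n):
    """Split a first image into an M*N grid of second images; returns a
    dict mapping grid coordinates (i, j) to sub-images."""
    h, w = first_image.shape[:2]
    bh, bw = h // m, w // n  # block height and width
    return {(i, j): first_image[j * bh:(j + 1) * bh, i * bw:(i + 1) * bw]
            for j in range(m) for i in range(n)}
```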
Fig. 15A to 15F are schematic diagrams of a set of line graphs and RMSE results for two models provided in the embodiment of the present application, where M×N is 10×10, 15×15, and 20×20, respectively. Fig. 15A shows the line graph and RMSE obtained by the camera module of the first model after dividing the first image by 10×10 and computing the coordinates of the different positions. Fig. 15B corresponds to the second model with M×N of 10×10; Fig. 15C to the first model with M×N of 15×15; Fig. 15D to the second model with M×N of 15×15; Fig. 15E to the first model with M×N of 20×20; and Fig. 15F to the second model with M×N of 20×20.
For the same model, among the divisions 10×10, 15×15, and 20×20, the 20×20 division fluctuates the most and the 10×10 division the least, indicating that the finer the first image is divided, the larger the RMSE result. In actual focusing, cases where the focusing target is small and the lighting is poor must also be handled, so a finer M×N division places higher demands on focusing detail and is more likely to expose differences. When two camera modules both yield good results and their focusing quality cannot be distinguished, the values of M and N can be increased to improve the contrast, ensuring comparison accuracy and evaluation reliability.
Comparing the two models, the curve fluctuation of the first model is smaller than that of the second model at 10×10, 15×15, and 20×20, indicating that the first model focuses better.
Under the SAD algorithm, when lens positions are close, the motor position and PD value should also be close, and the relationship they present should be approximately linear. That is, under ideal focusing, the in-focus motor position and the PD value vary linearly at any ambient brightness. By moving the lens linearly (so the motor position changes linearly), PD values are obtained and their linearity error is measured; together with the linearity differences under different brightness environments, the merits of different camera modules can be compared, ensuring the accuracy of the focusing evaluation and the reliability of its basis. In addition, dim environments lack detail and carry heavy noise, which can introduce errors into the calculated PD values. Therefore, before the PD value is calculated, the image is divided into smaller parts, which preserves focusing accuracy over small regions during actual focusing.
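As a hedged sketch of SAD-based phase detection (the application's exact algorithm may differ), the PD value of a block can be estimated as the horizontal shift that minimizes the sum of absolute differences between the left and right phase images:

```python
import numpy as np

def sad_phase_difference(left, right, max_shift=16):
    """Estimate the phase difference (PD) between left and right phase
    images as the shift minimizing the sum of absolute differences (SAD).
    Assumes max_shift >= 1 and image width > 2 * max_shift."""
    best_shift, best_sad = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, s, axis=1)
        valid = slice(max_shift, -max_shift)  # drop wrapped border columns
        sad = np.abs(left[:, valid] - shifted[:, valid]).sum()
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift
```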
In combination with the experimental mode shown in Fig. 5, the application further provides a simulation method. Fig. 16 is a schematic flowchart of a focus-evaluation simulation method exemplarily disclosed in an embodiment of the present application. As shown in Fig. 16, the simulation method of focus evaluation may include, but is not limited to, the following steps:
The electronic device can set scene parameters by means of simulation software and initialize based on the lens and image sensor data of the camera system corresponding to the camera module. The evaluation object in Fig. 5 is an actual camera module, whereas the object of the simulation evaluation in Fig. 16 is a simulated camera system. The system parameters of the camera system may be derived from the corresponding parameters of the camera module.
S1601: the electronic device sets second scene parameters.
The second scene parameters include light source parameters and shooting object parameters. The light source parameters may include the illumination equipment, the ambient brightness, and the like; the shooting object parameters may include the pattern, material, and other properties of the shooting object. The second scene parameters above must remain consistent with the preset scene of the actual test in Fig. 5.
S1602: the electronic equipment acquires the information of the camera module and sets the parameters of the lens and the parameters of the image sensor.
The electronic equipment reads the camera module file to acquire camera module information, and acquires lens parameters from the camera module file. The camera module file is generally from a manufacturer, and the manufacturer can obtain the camera module file in the process of producing the camera module.
The lens parameters may include the chief ray angle (CRA), the object distance, light field distribution information, and the like. The chief ray angle is the maximum angle at which a ray can still be focused onto a pixel. The light field distribution information refers to the light field distribution at the focal plane output for each defocus value.
The image sensor parameters may include the pixel count, the PD density, the pixel size, and so on. The pixel count is the size of the matrix formed by the basic units (pixels) of the image sensor.
The electronic device may determine the microlens parameters based on the chief ray angle and the pixel size, and generate the microlens. The microlens parameters may include the radius of curvature R0, the displacement s0 of the microlens under each field, and the like. The electronic device can set the microlens parameters into the camera system through the simulation software, so that the microlens is generated.
S1603: the electronic device initializes the lens and the sensor noise model.
The noise actually present in the sensor may include readout noise, fixed noise, and Poisson noise. The electronic device may set the corresponding noise models into the sensor parameters to initialize the noise model.
During shooting with an actual camera module, the number of photons the image sensor can capture is random and follows a Poisson distribution; the fluctuation in the number of photons each pixel receives per unit time appears as visible noise in the image, called Poisson noise.
In addition, during imaging the readout from the sensor, the gain, and similar steps are affected by voltage fluctuations, so the raw value deviates from the ideal value proportional to the photon count; the fluctuations in the raw value introduced by the signal-processing electronics constitute the readout noise of the sensor.
Ambient temperature, exposure, etc., also introduce noise, which is referred to as fixed noise.
The image sensor noise model may be composed of the readout noise, fixed noise, and Poisson noise described above. The electronic device may acquire the noise model of the camera module in advance and initialize the sensor noise model, that is, set the noise model into the simulated camera system.
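A simplified noise model combining the three components could be sketched as follows; the noise magnitudes are hypothetical illustration parameters, not values from this application:

```python
import numpy as np

rng = np.random.default_rng()

def apply_sensor_noise(signal_electrons, read_noise_std=2.0, fixed_pattern=None):
    """Apply Poisson (shot) noise, readout noise, and fixed noise to an
    ideal electron-count image (non-negative array)."""
    noisy = rng.poisson(signal_electrons).astype(float)               # Poisson noise
    noisy += rng.normal(0.0, read_noise_std, signal_electrons.shape)  # readout noise
    if fixed_pattern is not None:
        noisy += fixed_pattern                                        # fixed noise
    return noisy
```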
In addition, the electronic device can simulate the gain loading of the camera system and set the data related to gain loading.
S1604: the electronic device obtains X frames of first images from X focus pushes with the first motor step.
S1605: the electronic device divides each frame of the first image to obtain M×N second images.
S1606: the electronic device obtains a first PD value for the X frame second image.
S1607: the electronic device calculates an evaluation chart based on the first PD value and the motor position corresponding to each frame when the second image of the X frame is acquired.
For the relevant content of S1604 to S1607, refer to S501 to S504, which is not repeated here.
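Putting the pieces together, the simulation-side loop of S1604-S1607 might be orchestrated as in the following sketch, reusing the `divide_image` and `sad_phase_difference` sketches above; `render_frame` is a hypothetical hook into the simulated camera system that returns a left/right phase image pair at a given motor position:

```python
def evaluate_simulated_system(render_frame, motor_start, motor_step, x, m, n):
    """Run X focus pushes, divide each frame into M*N blocks, and collect
    per-block (motor position, PD value) pairs for the evaluation chart."""
    samples = {}  # (i, j) -> list of (motor_position, pd_value)
    for k in range(x):
        position = motor_start + k * motor_step            # S1604: push the motor
        left, right = render_frame(position)               # simulated PD raw pair
        blocks_l = divide_image(left, m, n)                # S1605: divide
        blocks_r = divide_image(right, m, n)
        for key in blocks_l:
            pd = sad_phase_difference(blocks_l[key], blocks_r[key])  # S1606: PD value
            samples.setdefault(key, []).append((position, pd))
    return samples  # S1607: feed into fit_and_rmse / the plotting sketches
```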
From the calculations performed by the camera system during the simulation, a simulation result is obtained, and the merits of camera systems of different models can be compared based on it. Fig. 17A and 17B are evaluation diagrams of a set of simulated lens positions and PD values. Fig. 17A is a line graph of the simulated lens position and PD of model 4; Fig. 17B is that of model 3. Consistent with the results calculated on the actual measurement side, the camera module of the model in Fig. 17B is better than that of Fig. 17A.
Fig. 18A to 18E are evaluation diagrams of lens position and PD at different PD densities according to an embodiment of the present application. In Fig. 18A to 18E, the PD densities are 12%, 6.2%, 3.1%, 1.6% and 1.78% in turn. The comparison shows that the larger the PD density, the smaller the dispersion of the scatter points and the better the focusing effect; focusing performance is thus correlated with the sensor. The motor position (lens position) may be expressed as a lens code.
Fig. 19 is a plot of PD density versus linearity error exemplarily provided in an embodiment of the present application. As shown in Fig. 19, as the PD density increases, the linearity error decreases and the focusing performance improves.
Fig. 20 is a schematic diagram of a hardware structure of an electronic device according to an exemplary embodiment of the present application.
The structure of the electronic device is described below. As shown in Fig. 20, it may include a processor 2001, a memory 2002, and a bus 2003. The memory 2002 may be provided independently and connected to the processor 2001 via the bus 2003, or may be integrated with the processor 2001. The bus 2003 connects these components. Of course, the electronic device may also include other modules, such as a power module; the present application is not limited in this respect.
The processor 2001 is configured to control execution of the operations of the above embodiments when the computer program instructions stored in the memory 2002 are executed. The electronic device, or hardware devices within it, may also execute the methods performed by the electronic device in the method embodiments of Fig. 5 and Fig. 16, which are not repeated here.
Optionally, if the X-frame first images are acquired by the electronic device itself, the electronic device may further include a camera module connected to the processor 2001; while executing the computer program instructions, the processor 2001 may control the camera module to acquire the X-frame first images.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.

Claims (13)

1. A focus evaluation method, the method being applied to an electronic device, the method comprising:
The electronic equipment acquires a first PD value of an X-frame second image;
the electronic equipment calculates an evaluation chart based on the first PD value and the motor position corresponding to each frame when the second X-frame image is acquired, wherein the evaluation chart is used for evaluating the focusing performance of the first camera module;
wherein the second image is a partial image after the first image is divided; the X frame second images are respectively positioned at the same positions of the X frame first images; the first image of the X frame is a PD raw image which is sequentially acquired by the first camera module when the motor pushes X times of focusing with the step length of the first motor; the motor position of the X frame second image is the motor position after the X times of pushing the motor; and X is an integer greater than 1.
2. The method according to claim 1, wherein in the case where the evaluation map includes a scatter plot of the first PD value and the motor position corresponding to each frame of the X-frame second image, the method further includes:
the electronic equipment performs linear regression fitting on the scatter diagram and draws a fitting line;
and the electronic equipment calculates a first error value based on the scatter diagram and the fitting line, wherein the first error value is used for evaluating the focusing performance of the first camera module.
3. Method according to claim 2, characterized in that the error value comprises a root mean square error RMSE and/or a linear regression error.
4. The method of claim 2, wherein the first camera module is configured to capture an image at an ambient brightness of 0-10lux.
5. The method according to any one of claims 1 to 4, wherein when the first camera module is placed in K different environmental brightnesses, the evaluation chart includes K folding lines of the first PD value and a motor position corresponding to each frame of the X-frame second image, the K folding lines being used to evaluate focusing performance of the first camera module; the K folding lines correspond to the K different environmental brightnesses; and K is an integer greater than 1.
6. The method of claim 5, wherein the K different ambient brightness scenes comprise at least a dim light scene and a bright light scene;
under the dim light scene, the ambient brightness of the first camera module when shooting is 0-3lux; and under the bright scene, the ambient brightness of the first camera module when shooting is greater than 50lux.
7. The method of any one of claims 1-4, wherein the assessment graph comprises a first polyline and a second polyline; the first fold line is plotted by the motor position and the first PD value of the second image of the non-normalized X-frame; the second broken line is drawn by the motor position and the first PD value of the normalized X-frame second image; the first folding line and the second folding line are used for evaluating focusing performance of the first camera module.
8. The method of claim 7, wherein the method further comprises:
the electronic equipment acquires the second image of the non-normalized X frame;
the electronic equipment splits the second non-normalized image of each frame into a left image and a right image;
and the electronic equipment normalizes the left image and the right image respectively based on a Mean-Std normalization algorithm to obtain a normalized X-frame second image.
9. The method according to any one of claims 1-4, wherein after a frame of the first image is divided, obtaining M x N frames of second images, and the position coordinates of the second images in the first image are (i, j); the i is an integer from 1 to N, and the j is an integer from 1 to M;
the X frame second images are respectively positioned at the same positions of the X frame first images, and the method comprises the following steps: the position coordinates of the X frame second image in the first image are the same.
10. The method of claim 9, wherein the larger the M x N, the larger the error value.
11. The method according to claim 9, wherein the method further comprises:
The electronic equipment adjusts the M and/or the N to obtain M1 and N1;
the electronic equipment divides the first image of the X frame based on the M1 and the N1 to obtain a second image of the X1N 1 frame; the X M1X N1 frame second image is used to continue calculating the evaluation map.
12. An electronic device, comprising: one or more processors and one or more memories; the one or more processors being coupled with the one or more memories, the one or more memories being configured to store computer program code, the computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-11.
13. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1-11.
CN202410252387.3A 2024-03-06 2024-03-06 Focusing evaluation method and electronic equipment Pending CN118042110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410252387.3A CN118042110A (en) 2024-03-06 2024-03-06 Focusing evaluation method and electronic equipment

Publications (1)

Publication Number Publication Date
CN118042110A true CN118042110A (en) 2024-05-14

Family

ID=90989215

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410252387.3A Pending CN118042110A (en) 2024-03-06 2024-03-06 Focusing evaluation method and electronic equipment

Country Status (1)

Country Link
CN (1) CN118042110A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106556960A (en) * 2015-09-29 2017-04-05 宁波舜宇光电信息有限公司 Out of focus conversion coefficient verification method
US20180176452A1 (en) * 2016-12-19 2018-06-21 Intel Corporation Method and system of self-calibration for phase detection autofocus
CN109451304A (en) * 2018-12-31 2019-03-08 深圳市辰卓科技有限公司 A kind of camera module batch focusing test method and system
CN110430426A (en) * 2019-09-06 2019-11-08 昆山丘钛微电子科技有限公司 The test method and device of phase-detection auto-focusing performance
CN113014790A (en) * 2019-12-19 2021-06-22 华为技术有限公司 Defocus conversion coefficient calibration method, PDAF method and camera module

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination