CN116363174A - Parameter calibration method, storage medium, co-processing chip and electronic equipment - Google Patents


Info

Publication number
CN116363174A
CN116363174A (application number CN202111615151.4A)
Authority
CN
China
Prior art keywords
camera module
depth map
accuracy
image
calibration parameters
Prior art date
Legal status
Pending
Application number
CN202111615151.4A
Other languages
Chinese (zh)
Inventor
曾玉宝
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111615151.4A priority Critical patent/CN116363174A/en
Publication of CN116363174A publication Critical patent/CN116363174A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The embodiments of the application disclose a parameter calibration method for a camera module, a storage medium, a co-processing chip, and an electronic device. The method acquires a first image output by the camera module of the electronic device and acquires the current calibration parameters of the camera module; obtains a depth map from the first image and the current calibration parameters; detects whether the accuracy of the depth map is lower than a preset threshold; and, when the accuracy of the depth map is lower than the preset threshold, calibrates the camera module to obtain new calibration parameters and updates the current calibration parameters based on the new calibration parameters. With this scheme, an abnormality of the camera module can be detected in a timely manner, and when an abnormality is detected, the calibration parameters of the camera module can be recalibrated, thereby improving the accuracy of image processing on the electronic device.

Description

Parameter calibration method, storage medium, co-processing chip and electronic equipment
Technical Field
The application relates to the technical field of electronic equipment, in particular to a parameter calibration method of a camera module, a storage medium, a co-processing chip and electronic equipment.
Background
With the continuous development of intelligent terminal technology, electronic devices (such as smartphones and tablet computers) are becoming more and more popular, and most are provided with camera modules. In general, before an electronic device leaves the factory, its camera module is calibrated, and the resulting calibration parameters are stored in the device. However, during use, the device may fall or suffer a collision, or the camera may age over a long service life, which can cause the calibration parameters to become inaccurate. When the calibration parameters are inaccurate, subsequent image processing algorithms that use them for image synthesis, blurring, and other processing will produce degraded results.
Disclosure of Invention
The embodiment of the application provides a parameter calibration method of a camera module, a storage medium, a co-processing chip and electronic equipment, which can detect and calibrate the accuracy of calibration parameters and improve the image processing effect.
In a first aspect, an embodiment of the present application provides a method for calibrating parameters of a camera module, including:
acquiring a first image output by a camera module of the electronic device, and acquiring current calibration parameters of the camera module;
obtaining a depth map according to the first image and the current calibration parameters;
detecting whether the accuracy of the depth map is lower than a preset threshold; and
when the accuracy of the depth map is lower than the preset threshold, calibrating the camera module to obtain new calibration parameters, and updating the current calibration parameters based on the new calibration parameters.
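The four claimed steps above form a simple control loop. The sketch below is a hypothetical illustration only; `capture`, `compute_depth`, `assess_accuracy`, and `run_calibration` are stand-in callables, not functions defined in the application:

```python
def calibrate_if_needed(capture, current_params, threshold,
                        compute_depth, assess_accuracy, run_calibration):
    """Sketch of the claimed flow: acquire an image, build a depth map,
    check its accuracy, and recalibrate only when accuracy is too low."""
    first_image = capture()                                 # step 1: first image
    depth_map = compute_depth(first_image, current_params)  # step 2: depth map
    if assess_accuracy(depth_map) < threshold:              # step 3: accuracy check
        current_params = run_calibration()                  # step 4: recalibrate, update
    return current_params
```

The later detailed description fills in each of these stand-ins.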
In a second aspect, embodiments of the present application further provide a computer readable storage medium having a computer program stored thereon, which when run on a computer causes the computer to perform a method for calibrating parameters of a camera module as provided in any embodiment of the present application.
In a third aspect, an embodiment of the present application further provides a co-processing chip, including a central processing unit, where the central processing unit is configured to execute a parameter calibration method of a camera module provided in any embodiment of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory stores a computer program, where the processor is configured to execute a parameter calibration method of a camera module according to any one embodiment of the present application by calling the computer program.
According to the technical scheme of the embodiments, a first image output by the camera module of the electronic device is acquired, a depth map is obtained from the first image and the current calibration parameters of the camera module, and the accuracy of the depth map is checked against a preset threshold. When the accuracy of the depth map is lower than the preset threshold, it can be determined that an abnormality such as aging may have occurred in the camera module; the camera module is then calibrated to obtain new calibration parameters, and the current calibration parameters are updated based on the new ones. With this scheme, an abnormality of the camera module can be detected in a timely manner, and the detection can run in the background without being manually triggered by the user, which offers real convenience; in addition, when an abnormality is detected, the calibration parameters of the camera module can be recalibrated, further improving the device's image processing results.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described here cover only some embodiments of the application; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a first flowchart of a method for calibrating parameters of a camera module according to an embodiment of the present application.
Fig. 2 is a second flowchart of a method for calibrating parameters of a camera module according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a process of extracting edge pixels in the method for calibrating parameters of a camera module according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a co-processing chip according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a second structure of an electronic device according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a third structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the application; all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of the application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor do they refer to separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will appreciate, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The embodiment of the application provides a parameter calibration method of a camera module, and an execution main body of the parameter calibration method of the camera module may be a parameter calibration device of the camera module provided in the embodiment of the application, or an electronic device integrated with the parameter calibration device of the camera module, where the parameter calibration device of the camera module may be implemented in a hardware or software manner. The electronic device may be a smart phone, a tablet computer, a palm computer, a notebook computer, or a desktop computer.
Referring to fig. 1, fig. 1 is a first flowchart of a method for calibrating parameters of a camera module according to an embodiment of the present application. The specific flow of the parameter calibration method of the camera module provided by the embodiment of the application may be as follows:
101. Acquire a first image output by the camera module, and acquire the current calibration parameters of the camera module.
In general, when an electronic device leaves the factory, its various parameters, such as the calibration parameters of the camera module, have already been calibrated. However, as the device is used, the camera module or one of its cameras may be replaced, or a camera may age after long use, which may render the calibration parameters inaccurate.
In image measurement and machine vision applications, determining the relationship between the three-dimensional position of a point on the surface of an object in space and its corresponding point in the image requires a geometric model of camera imaging; the parameters of this model are the calibration parameters of the camera, obtained by calibration computation. The calibration parameters comprise the internal parameters, external parameters, and distortion parameters of the camera module, and the process of solving for them is called camera calibration. Once determined through calibration, these parameters can be used during image processing to correct lens distortion and generate corrected images, or to reconstruct three-dimensional scenes from the captured images.
The calibration parameters differ according to the type of camera module.
For a monocular camera, the calibration parameters include internal parameters, external parameters, and distortion parameters. The internal parameters are characteristics of the camera itself, such as the focal length and the principal point coordinates, and depend only on the camera. The external parameters describe the position of the camera in space, typically a rotation vector R and a translation vector T in some reference coordinate system. The distortion parameters describe the deviation, during shooting, between the actual pixel position of an object point in the image and the theoretical projection point computed from the imaging model; this deviation is generally described by radial distortion parameters k1, k2, and k3 and tangential distortion parameters p1 and p2.
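The coefficients k1, k2, k3, p1, p2 named above match the standard radial/tangential (Brown-Conrady) distortion model that is common camera-calibration background; the model itself is not spelled out in the application, so the following is a general sketch for a normalized image point:

```python
def distort_point(x, y, k1, k2, k3, p1, p2):
    """Map an ideal normalized image point (x, y) to its distorted
    position using radial (k1, k2, k3) and tangential (p1, p2) terms."""
    r2 = x * x + y * y                       # squared distance from the principal point
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all five coefficients zero, the mapping is the identity, which is why a freshly calibrated module with negligible distortion leaves images unchanged.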
A multi-camera module likewise has internal parameters, external parameters, and distortion parameters; the internal and distortion parameters are the same as for a monocular camera. The external parameters, however, include not only each camera's position in space but also the external parameters between cameras, which describe the relative position and posture of one camera with respect to another and are generally represented by a translation vector and a rotation vector, e.g. Tx, Ty, Tz and Rx, Ry, Rz.
With the development of cameras, most camera modules of current electronic devices are multi-view. The electronic device starts several cameras of the module in various shooting modes and synthesizes the output frames into the final image for that mode. For example, in portrait mode, when a specific subject is photographed, everything except the subject is blurred; this requires computing a depth map, and the depth map computation uses the calibration parameters of the camera module. When the calibration parameters are inaccurate, the blurring is poor, for example blurring the wrong regions. The accuracy of the calibration parameters therefore affects the image quality of the photograph finally output by the device, and once a camera has aged, the calibration parameters need to be recalibrated.
Based on this, the scheme of the embodiments detects the accuracy of the camera output to determine whether an abnormality such as aging has occurred in the camera module, and hence whether its calibration parameters need to be recalibrated.
When the camera module becomes abnormal due to aging or collision, the depth map computed by the electronic device becomes inaccurate. The calibration algorithm of the embodiments therefore checks the accuracy of the depth map and decides, based on the result, whether the calibration parameters need to be recalibrated.
First, an original image output by the camera module, i.e., a first image, is acquired. The first image may be acquired in various ways. For example, a calibration function may be provided for the camera module: after the user enables it, the electronic device opens the camera application, enters a shooting mode, and acquires the first image output by the camera module once the user triggers a shooting instruction. In another embodiment, after entering the shooting mode, the device instructs the user to shoot a scene with distinct outlines under adequate lighting, to improve the accuracy of the depth map detection.
Alternatively, in another embodiment, at preset time intervals, whenever the user is detected taking a picture with the camera module, the original image output by the module is stored in the background and recorded as the first image.
In some embodiments, the camera module comprises a plurality of cameras. In that case, acquiring the first image output by the camera module and the current calibration parameters comprises: acquiring a first image output by each camera of the camera module, obtaining a plurality of first images.
Since the image shot by each camera needs to be used in the accuracy detection of the depth map, when the camera module includes a plurality of cameras, the first image output by each camera in the camera module needs to be acquired.
In addition, the electronic equipment also obtains the current calibration parameters of the camera module to be used for calculating the depth map.
102. Obtain a depth map according to the first image and the current calibration parameters.
A depth map is computed from the first image and the acquired current calibration parameters. The depth map contains depth information, i.e., the distance between each photographed object and the camera.
The manner of computing the depth map depends on the type of camera module. For a monocular camera, the same scene is shot from at least two angles to obtain at least two first image frames; feature point pairs between the two frames are found with a feature point matching algorithm, and the depth map is computed from the matched pairs and the current calibration parameters.
For a multi-view camera, shooting is performed with the several cameras of the module, obtaining corresponding first image frames; feature point pairs between each pair of frames are found with a feature point matching algorithm, and the depth map is computed from the matched pairs and the calibration parameters.
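For a rectified stereo pair, the link between matched feature points and depth is the standard triangulation relation Z = f * B / d, where the focal length f and baseline B come from the calibration parameters. A minimal sketch under that assumption; the pixel coordinates below are made-up illustration data, not values from the application:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth for a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Depth for each matched feature-point pair (hypothetical left/right x-coords).
matches = [(420.0, 385.0), (310.5, 298.5)]   # (x_left, x_right) per pair
depths = [depth_from_disparity(700.0, 0.05, xl - xr) for xl, xr in matches]
```

If the calibration parameters (f, B, or the rectification they imply) drift, the disparities are measured against the wrong geometry and every depth value is off, which is exactly the failure the accuracy check in the next step is meant to catch.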
103. Detect whether the accuracy of the depth map is lower than a preset threshold.
After the depth map is calculated, the accuracy of the depth map is evaluated. If the depth map accuracy is low, the edge information therein will have a larger error. Based on this principle, edge pixels of the depth map can be extracted for detection.
For example, in an embodiment, the step of detecting whether the accuracy of the depth map is below the preset threshold may include: synthesizing the plurality of first images to obtain a synthesized image; processing the synthesized image to obtain its grayscale image; performing edge detection on the grayscale image to obtain first edge pixels, and performing edge detection on the depth map to obtain second edge pixels; calculating the gray value difference between the second edge pixels and the first edge pixels, and determining the accuracy of the depth map from this difference, the accuracy being inversely related to the absolute value of the difference; and detecting whether the accuracy is lower than the preset threshold.
In this embodiment, the edge pixels of the grayscale image serve as the reference for detecting whether the accuracy of the depth map is below the preset threshold. Because inaccurate calibration parameters caused by camera aging do not affect the gray information of an image, the acquired first image can be processed into a corresponding grayscale image. For example, the first image is format-converted into a YUV image. YUV is an image format that represents luminance and chrominance separately: Y is the luminance signal, and U and V are the chrominance signals. Since a YUV image carries the gray information in its Y channel, a grayscale image can be obtained by setting the U and V channels to neutral values. When the camera module comprises several cameras, in one embodiment any one first image frame can be converted to a YUV image and then to a grayscale image; alternatively, in another embodiment, the multiple first image frames can first be synthesized into a composite image, which is then converted to a YUV image and from it to a grayscale image.
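The Y channel described above corresponds to the standard BT.601 luma; a sketch of the RGB-to-gray conversion it implies (the coefficients are the conventional BT.601 values, assumed here rather than stated in the application):

```python
def rgb_to_gray(r, g, b):
    """BT.601 luma: the Y channel of YUV. Dropping the U and V
    chrominance channels leaves exactly this grayscale value."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

The three coefficients sum to 1, so pure white (255, 255, 255) maps to 255 and pure black to 0, preserving the full gray range.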
After the grayscale image is obtained, edge detection is applied to it to obtain the first edge pixels, and to the depth map to obtain the second edge pixels. The purpose of applying an edge detection algorithm to an image is to find the points where the brightness changes sharply, i.e., the edge pixels; this greatly reduces the amount of image data, discards irrelevant information, and preserves the important structural attributes of the image. The gray value difference between the second edge pixels and the first edge pixels is then calculated. The depth map and the grayscale image correspond to the same scene, so if the depth map is accurate, its edge information will differ little from that of the grayscale image; the accuracy of the depth map can therefore be determined from the difference between the second and first edge pixels. When there are many first and second edge pixels, the difference between each first edge pixel and the second edge pixel at the corresponding position is computed, yielding multiple difference values, and the maximum, mean, or median of these values is taken as the difference between the second and first edge pixels.
The gray value difference between the second and first edge pixels is calculated, and the accuracy of the depth map is determined from it. For example, in one embodiment, the gray value difference itself is taken as the accuracy metric of the depth map. In another embodiment, a preset accuracy corresponding to the calculated difference is looked up from a preset mapping between gray value differences and accuracies. The accuracy is inversely related to the absolute value of the gray value difference: the greater the absolute value of the difference, the greater the discrepancy between the edge information in the depth map and in the grayscale image, and the lower the accuracy of the depth map.
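The aggregation just described (per-position differences collapsed by maximum, mean, or median, with a larger aggregate meaning lower accuracy) can be sketched as follows. This is an illustrative reading of the text, not code from the application; the edge-pixel lists are assumed to be aligned by position:

```python
from statistics import mean, median

def depth_map_accuracy_score(gray_edges, depth_edges, mode="mean"):
    """Aggregate the differences between corresponding edge pixels of the
    grayscale image and the depth map; a larger score means lower accuracy."""
    diffs = [abs(d - g) for g, d in zip(gray_edges, depth_edges)]
    if mode == "max":
        return max(diffs)
    if mode == "median":
        return median(diffs)
    return mean(diffs)

def depth_map_is_accurate(gray_edges, depth_edges, threshold):
    """Accuracy check: the depth map passes when the aggregate edge
    discrepancy stays at or below the preset threshold."""
    return depth_map_accuracy_score(gray_edges, depth_edges) <= threshold
```

Because the score grows with the edge discrepancy, comparing it against a threshold is equivalent to the patent's check of whether accuracy has fallen below a preset threshold.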
104. When the accuracy of the depth map is lower than the preset threshold, calibrate the camera module to obtain new calibration parameters, and update the current calibration parameters based on the new calibration parameters.
After the accuracy of the depth map is calculated, it is compared with the preset threshold; if it is lower, the calibration parameters of the camera module have a large error and must be recalibrated.
The calibration parameters are then recalibrated. In an embodiment, when the accuracy of the depth map is lower than the preset threshold, the step of calibrating the camera module may include: acquiring multiple frames of calibration images shot by the several cameras; performing corner detection and corner matching on the multi-frame calibration images to obtain a plurality of corner pairs; when the number of corner pairs is greater than a preset number, calibrating the camera module based on the multi-frame calibration images to obtain new calibration parameters; and updating the current calibration parameters based on the new calibration parameters.
In this embodiment, when the accuracy of the depth map is detected to be below the preset threshold, a prompt is displayed on the graphical user interface asking the user to shoot a calibration image. Before a phone leaves the factory, the camera module is generally calibrated with a dedicated calibration plate, such as a checkerboard. Since the user may be unable to photograph such a plate, the user can instead be prompted to shoot a scene with distinct object outlines, sufficient light, or strong color contrast. Alternatively, in another embodiment, an image of a checkerboard calibration plate is provided, and the user is prompted to print it at a specific size to obtain a calibration target.
After the user performs the shooting operation, the multiple frames shot by the several cameras are obtained as calibration images. Corner detection and corner matching are then applied to these frames, yielding a plurality of corner pairs. If the number of corner pairs exceeds the preset number, the camera module is calibrated based on the calibration images to obtain new calibration parameters, for example computed with Zhang's calibration method. Otherwise, if the detected corners do not meet the requirement, the user is prompted to reshoot the calibration image.
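The gate described above (enough matched corner pairs to calibrate, otherwise ask the user to reshoot) can be sketched as a small decision function. The threshold of 20 pairs is a made-up placeholder, since the application only says "a preset number", and both callables are hypothetical stand-ins:

```python
MIN_CORNER_PAIRS = 20  # hypothetical value; the application only says "preset number"

def try_recalibrate(corner_pairs, calibrate_fn, reprompt_fn):
    """Run calibration (e.g. Zhang's method) only when enough matched
    corner pairs were found; otherwise prompt the user to reshoot."""
    if len(corner_pairs) > MIN_CORNER_PAIRS:
        return calibrate_fn(corner_pairs)
    reprompt_fn()
    return None
```

Requiring a minimum number of corner pairs before solving protects the calibration from degenerate inputs, such as a blurry or texture-poor shot where corner detection finds too few matches.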
In some embodiments, after the new calibration parameters are obtained, the current calibration parameters may be updated based on the new calibration parameters, and the new calibration parameters may be written to the memory at the corresponding locations.
In some embodiments, when the accuracy of the depth map is lower than the preset threshold, calibrating the camera module and updating the current calibration parameters includes: calculating the difference between each new calibration parameter and the corresponding current one, and updating the current parameter based on the new one only when the absolute value of the difference exceeds a preset difference. When the absolute difference is smaller than the preset difference, updating would barely affect the accuracy of the module's output images; moreover, calibrating from user-shot images carries some error compared with laboratory calibration, so a small difference may lie within that error range, and the current calibration parameter need not be updated. Conversely, when the absolute difference between the new and current calibration parameters exceeds the preset difference, it lies outside the error range, so the current calibration parameter is updated based on the new one, and the new parameter is written to the corresponding location in memory.
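The update rule in the paragraph above reduces to a per-parameter threshold test. A minimal sketch; the dictionary representation and the parameter name `fx` are illustrative assumptions, not structures defined in the application:

```python
def should_update(new_param, current_param, preset_diff):
    """Update only when the change exceeds the expected error band of an
    in-the-field calibration relative to the factory calibration."""
    return abs(new_param - current_param) > preset_diff

def update_calibration(current, new, preset_diff):
    """Per-parameter update: keep the old value whenever the change is
    within the error band, adopt the new value otherwise."""
    return {k: (new[k] if should_update(new[k], current[k], preset_diff) else current[k])
            for k in current}
```

In practice, only the parameters that survive this filter would then be written back to their corresponding locations in memory.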
In an embodiment, the solution of the embodiments may be applied to an application processing chip of the electronic device. The application processing chip judges the accuracy of the depth map, computes new calibration parameters when the accuracy is low, and updates the current calibration parameters based on them.
In another embodiment, to improve the image processing capability of the electronic device, a co-processing chip is added between the application processing chip and the camera module to perform some preprocessing on images. The scheme of the embodiments can be applied to this co-processing chip: the co-processing chip judges the accuracy of the depth map, computes new calibration parameters when the accuracy is low, and updates the current calibration parameters based on them. Alternatively, the co-processing chip judges the accuracy of the depth map and computes the new calibration parameters when the accuracy is low, then sends them to the application processing chip, which calculates the difference between the new and current calibration parameters and updates the current parameters based on the new ones when the absolute value of the difference exceeds the preset difference.
In particular, the present application is not limited by the order of execution of the steps described, and certain steps may be performed in other orders or concurrently without conflict.
As can be seen from the foregoing, the parameter calibration method for a camera module provided by the embodiments acquires a first image output by the camera module of the electronic device, obtains a depth map from the first image and the current calibration parameters of the camera module, and checks whether the accuracy of the depth map is lower than a preset threshold. When it is, the camera module can be judged to have a possible abnormality such as aging; the module is then calibrated to obtain new calibration parameters, and the current calibration parameters are updated based on them. With this scheme, an abnormality of the camera module can be detected in a timely manner, and the detection can run in the background without being manually triggered by the user, which offers real convenience; in addition, when an abnormality is detected, the calibration parameters of the camera module can be recalibrated, further improving the accuracy of image processing on the electronic device.
The method described in the foregoing embodiments is described in further detail below by way of example.
Referring to fig. 2, fig. 2 is a second flowchart of a method for calibrating parameters of a camera module according to an embodiment of the present application. The method is applied to an electronic device. The electronic device comprises an application processing chip, a co-processing chip connected with the application processing chip, and a camera module connected with the co-processing chip. The camera module comprises a main camera and one or more auxiliary cameras. The method comprises the following steps:
201. Acquire the first images output by the main camera and the auxiliary camera to obtain a plurality of first images.
The first image may be acquired in various ways in the embodiment of the present application. For example, a calibration function is provided for the camera module; after the user enables the calibration function, the electronic device opens the camera application to enter a shooting mode, and after the user triggers a shooting instruction, the first image output by the camera module is acquired. Alternatively, in another embodiment, after entering the shooting mode, the electronic device instructs the user to shoot a scene with distinct outlines under adequate lighting, so as to improve the accuracy of the depth map detection. Alternatively, in another embodiment, at preset time intervals, when it is detected that the user takes a picture with the camera module, the original images output by the cameras of the camera module are stored in the background and recorded as the first images.
202. Acquire the current calibration parameters of the camera module, and calculate a depth map from the plurality of first images and the current calibration parameters.
A depth map is calculated from the first images. The depth map contains depth information, i.e. the distance between each photographed object in the image and the camera. Feature point pairs between the first image of the main camera and the first image of the auxiliary camera are computed with a feature point matching algorithm, and the depth map is calculated from the matched feature point pairs and the current calibration parameters.
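As an illustration of how matched feature points and calibration parameters yield depth, the following is a minimal numpy sketch of the standard rectified-stereo relation depth = f·B/d; the focal length, baseline, and disparity values are hypothetical and not taken from the patent.

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Convert a disparity map (in pixels) to a depth map (in metres).

    For a rectified stereo pair, depth = f * B / d, where f is the focal
    length in pixels and B is the baseline between the main and auxiliary
    cameras -- both of which come from the calibration parameters.
    Zero disparities are left at depth 0 to avoid division by zero.
    """
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Hypothetical calibration parameters and disparities
f_px, baseline = 1000.0, 0.05      # focal length in pixels, baseline in metres
disp = np.array([[10.0, 25.0],
                 [0.0, 50.0]])
print(depth_from_disparity(disp, f_px, baseline))
```

When the calibration parameters drift (for example through aging), f and B no longer match the hardware, so the computed depth map becomes inaccurate, which is what the following steps detect.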
203. Synthesize the plurality of first images to obtain a synthesized image, and perform format conversion on the synthesized image to obtain the corresponding gray-scale map.
The multiple frames of first images are synthesized to obtain a synthesized image. The synthesized image is then converted to a YUV image, and a gray-scale map is derived from the YUV image.
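As a sketch of the format-conversion step: keeping the Y (luma) plane of a YUV conversion is equivalent to a BT.601 weighted sum of the RGB channels. This is a minimal numpy illustration; the actual conversion pipeline in the chip is not specified in the text.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Return the Y (luma) plane of an RGB image using BT.601 weights.

    Y = 0.299*R + 0.587*G + 0.114*B, i.e. the gray-scale map that would
    be obtained by converting the synthesized image to YUV and keeping Y.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb.astype(np.float64) @ weights

pixels = np.array([[[255, 255, 255], [0, 0, 0]]])  # one white, one black pixel
print(rgb_to_gray(pixels))
```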
204. Perform edge detection on the gray-scale map to obtain first edge pixel points, and perform edge detection on the depth map to obtain second edge pixel points.
After the gray-scale map is obtained, edge detection is performed on it to obtain the first edge pixel points, and edge detection is performed on the depth map to obtain the second edge pixel points. The purpose of applying an edge detection algorithm to an image is to find the points where the brightness changes significantly, i.e. the edge pixel points.
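The text does not name a specific edge detector; as one common choice, a Sobel-gradient sketch in numpy is shown below, marking as edge pixel points those pixels whose gradient magnitude exceeds a (hypothetical) threshold.

```python
import numpy as np

def sobel_edges(img, threshold=100.0):
    """Mark edge pixel points: pixels whose Sobel gradient magnitude
    exceeds the threshold, i.e. points of significant brightness change.
    Border pixels are left unmarked for simplicity."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T                              # Sobel kernel for the y direction
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy) > threshold

# A vertical brightness step produces edge pixels along the step
img = np.zeros((5, 5))
img[:, 3:] = 255.0
print(sobel_edges(img)[2].astype(int))   # → [0 0 1 1 0]
```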
In some embodiments, the step of performing edge detection on the gray-scale map to obtain first edge pixel points and performing edge detection on the depth map to obtain second edge pixel points may include: dividing the gray-scale map into M×N first regions and the depth map into M×N second regions according to a preset division mode; determining, from the M×N second regions, the target second regions whose gray values are larger than a preset threshold, and determining, from the M×N first regions, the target first regions corresponding to the target second regions; and performing edge detection on the target first regions to obtain the first edge pixel points and on the target second regions to obtain the second edge pixel points. Referring to fig. 3, fig. 3 is a schematic diagram illustrating the extraction of edge pixel points in the method for calibrating parameters of a camera module according to an embodiment of the present application.
In this embodiment, to increase the speed of edge detection, the acquired depth map and gray-scale map are divided into regions. As shown in fig. 3, both images are divided into M×N regions in the same manner; M=N=4 in fig. 3 is merely an example, and in other embodiments the values of M and N may be set as needed. After division, the gray value of each region is computed, which may be the average gray value of all pixel points in the region. The target second regions with gray values larger than the preset threshold are determined from the 16 second regions, the corresponding target first regions are selected, and edge detection is performed on the target first regions to obtain the first edge pixel points and on the target second regions to obtain the second edge pixel points.
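The region-selection step above can be sketched as follows. This minimal numpy version assumes the region gray value is the mean over the region's pixels, as the text suggests; the M, N, and threshold values are illustrative.

```python
import numpy as np

def select_target_regions(depth, m, n, threshold):
    """Split the depth map into m x n regions and return the (row, col)
    indices of the target regions whose mean gray value exceeds the
    threshold; edge detection is then run only on these regions (and on
    the corresponding regions of the gray-scale map)."""
    h, w = depth.shape
    rh, rw = h // m, w // n
    targets = []
    for i in range(m):
        for j in range(n):
            region = depth[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            if region.mean() > threshold:
                targets.append((i, j))
    return targets

depth = np.zeros((4, 4))
depth[:2, :2] = 200.0      # only the top-left region is bright
print(select_target_regions(depth, 2, 2, 100.0))   # → [(0, 0)]
```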
205. Calculate the gray value difference between the second edge pixel points and the first edge pixel points, and determine the accuracy of the depth map from the gray value difference, where the accuracy value is inversely proportional to the absolute value of the gray value difference.
The gray value difference between the second edge pixel points and the first edge pixel points is then calculated, and the accuracy of the depth map is determined from that difference. For example, in one embodiment, the accuracy of the depth map is derived directly from the gray value difference. For another example, in another embodiment, the preset accuracy corresponding to the calculated gray value difference is looked up from a preset relationship between gray value differences and accuracies and used as the accuracy of the depth map. The accuracy value is inversely proportional to the absolute value of the gray value difference: the greater the absolute value of the gray value difference, the greater the discrepancy between the edge information in the depth map and that in the gray-scale map, and the lower the accuracy of the depth map.
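As one concrete way to turn the edge-pixel gray value difference into an accuracy score consistent with the inverse relationship just described, consider the numpy sketch below; the scaling to a [0, 1] score is an assumption, not specified by the text.

```python
import numpy as np

def depth_map_accuracy(gray_edges, depth_edges):
    """Score depth-map accuracy from the mean absolute gray value
    difference between corresponding edge pixels (8-bit values):
    identical edge maps score 1.0, maximally different ones score 0.0,
    so a larger difference yields a lower accuracy."""
    diff = np.abs(depth_edges.astype(np.float64) - gray_edges.astype(np.float64))
    return 1.0 - diff.mean() / 255.0

same = np.full((2, 2), 128.0)
print(depth_map_accuracy(same, same))                                # → 1.0
print(depth_map_accuracy(np.zeros((2, 2)), np.full((2, 2), 255.0)))  # → 0.0
```

The resulting score can then be compared against the preset threshold to decide whether recalibration is needed.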
206. When the accuracy of the depth map is lower than the preset threshold, acquire multiple frames of calibration images shot by the cameras.
After the accuracy of the depth map is calculated, it is compared with the preset threshold; if it is lower, the calibration parameters of the camera module need to be recalibrated.
207. Calibrate the camera module based on the multiple frames of calibration images to obtain new calibration parameters.
The camera module is then calibrated. When the detected accuracy of the depth map is lower than the preset threshold, prompt information is displayed on the graphical user interface to prompt the user to shoot calibration images. Before a mobile phone leaves the factory, the camera module is generally calibrated with a dedicated calibration board, such as a checkerboard. Since the user may find it difficult to photograph such a board, the user may instead be reminded to shoot a scene with clear object outlines, sufficient light, or strong color contrast. Alternatively, in another embodiment, an image of a checkerboard calibration board is provided, and the user is prompted to print it at a specific size to obtain a calibration target.
After the user performs the shooting operation, the multiple frames of images shot by the cameras are acquired as calibration images. Corner detection and corner matching are then performed on the calibration images to obtain a plurality of corner pairs. If the number of corner pairs is larger than a preset number, the camera module is calibrated to obtain the new calibration parameters, for example with Zhang's calibration method.
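The corner-pair gate in this step can be sketched as below. The greedy nearest-neighbour matcher and the MIN_PAIRS threshold are illustrative assumptions; the text only requires that calibration (e.g. Zhang's method) run when enough corner pairs are found.

```python
import numpy as np

def match_corners(corners_a, corners_b, max_dist=5.0):
    """Greedily pair each corner detected in the main-camera image with
    the nearest unused corner from the auxiliary image, keeping only
    pairs closer than max_dist pixels."""
    pairs, used = [], set()
    for pa in corners_a:
        dists = [np.hypot(pa[0] - pb[0], pa[1] - pb[1]) if i not in used else np.inf
                 for i, pb in enumerate(corners_b)]
        best = int(np.argmin(dists))
        if dists[best] <= max_dist:
            pairs.append((pa, corners_b[best]))
            used.add(best)
    return pairs

MIN_PAIRS = 2   # hypothetical preset number of corner pairs
corners_main = [(10.0, 10.0), (50.0, 12.0), (90.0, 11.0)]
corners_aux  = [(12.0, 10.5), (52.0, 12.5), (91.0, 11.2)]
pairs = match_corners(corners_main, corners_aux)
if len(pairs) > MIN_PAIRS:
    pass    # enough corner pairs: run Zhang-style calibration here
print(len(pairs))   # → 3
```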
208. When the absolute value of the difference between the new calibration parameters and the current calibration parameters is larger than a preset difference, update the current calibration parameters based on the new calibration parameters.
The execution body of steps 201 to 207 may be the co-processing chip: the first images output by the camera module are sent directly to the co-processing chip, and the co-processing chip calculates the new calibration parameters.
After the co-processing chip calculates the new calibration parameters, it sends them to the application processing chip. The application processing chip executes step 208: it calculates the difference between the new calibration parameters and the current calibration parameters, and updates the current calibration parameters based on the new calibration parameters when the absolute value of the difference is greater than the preset difference.
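Step 208 amounts to a simple per-parameter update rule on the application processing chip; the following sketch uses a single scalar parameter and hypothetical values for clarity.

```python
def maybe_update(current, new, preset_diff):
    """Return the parameter to keep: adopt the new calibration parameter
    only when it differs from the current one by more than preset_diff,
    otherwise keep the current value."""
    return new if abs(new - current) > preset_diff else current

# Hypothetical focal-length parameter (in pixels) and preset difference
print(maybe_update(1000.0, 1000.2, 0.5))   # → 1000.0 (small drift: keep current)
print(maybe_update(1000.0, 1003.0, 0.5))   # → 1003.0 (large drift: update)
```

Gating the update on a preset difference avoids rewriting stored calibration data for insignificant fluctuations.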
As can be seen from the above, the parameter calibration method of the camera module provided by the embodiment of the present application can detect an abnormality of the camera module in a timely manner, and the detection can be executed in the background without manual triggering by the user, which is convenient. In addition, after camera aging is detected, the calibration parameters of the camera module can be recalibrated, improving the accuracy of image processing on the electronic device.
In an embodiment, a parameter calibration device for a camera module is also provided. The parameter calibration device is applied to an electronic device and comprises:
The acquisition module is used for acquiring a first image output by the camera module and acquiring the current calibration parameters of the camera module;
the processing module is used for obtaining a depth map according to the first image and the current calibration parameters;
the detection module is used for detecting whether the accuracy of the depth map is lower than a preset threshold;
and the calibration module is used for calibrating the camera module to obtain new calibration parameters when the accuracy of the depth map is lower than a preset threshold value, and updating the current calibration parameters based on the new calibration parameters.
It should be noted that the parameter calibration device for a camera module provided in the embodiment of the present application and the parameter calibration method of the above embodiments belong to the same concept: any method provided in the method embodiments can be implemented by the device, and its detailed implementation process is described in the method embodiments and is not repeated here.
As can be seen from the above, the parameter calibration device for a camera module provided by the embodiment of the present application obtains a first image output by the camera module of an electronic device, obtains a depth map from the first image and the current calibration parameters of the camera module, detects the accuracy of the depth map, and determines whether that accuracy is lower than a preset threshold. When the accuracy of the depth map is lower than the preset threshold, it can be determined that the camera module may have suffered an abnormality such as aging; the camera module is then calibrated to obtain new calibration parameters, and the current calibration parameters are updated based on the new calibration parameters. With the scheme of the embodiment of the present application, an abnormality of the camera module can be detected in a timely manner, and the detection can be executed in the background without manual triggering by the user, which is convenient; moreover, when an abnormality is detected, the calibration parameters of the camera module can be recalibrated, improving the accuracy of image processing on the electronic device.
The embodiment of the application also provides a co-processing chip. Referring to fig. 4, fig. 4 is a schematic structural diagram of a co-processing chip according to an embodiment of the present application. The co-processing chip 301 includes a central processor 3011, the central processor 3011 being configured to:
acquiring a first image output by the camera module and acquiring the current calibration parameters of the camera module;
obtaining a depth map according to the first image and the current calibration parameters;
detecting whether the accuracy of the depth map is lower than a preset threshold value;
and when the accuracy of the depth map is lower than a preset threshold, calibrating the camera module to obtain new calibration parameters, and updating the current calibration parameters based on the new calibration parameters.
In some embodiments, the central processor 3011 is to: calculating the difference value between the new calibration parameter and the current calibration parameter;
and updating the current calibration parameter based on the new calibration parameter when the absolute value of the difference value is larger than a preset difference value.
In some embodiments, the central processor 3011 is to: and sending the new calibration parameters to an application processing chip of the electronic equipment, so that the application processing chip calculates the difference value between the new calibration parameters and the current calibration parameters, and updating the current calibration parameters based on the new calibration parameters when the absolute value of the difference value is larger than a preset difference value.
In some embodiments, the camera module comprises a plurality of cameras, and the central processor 3011 is configured to: acquire a first image output by each camera in the camera module to obtain a plurality of first images.
In some embodiments, the central processor 3011 is to: synthesizing the plurality of first images to obtain a synthesized image;
processing the synthesized image to obtain a gray level image of the synthesized image;
performing edge detection processing on the gray level image to obtain a first edge pixel point, and performing edge detection processing on the depth image to obtain a second edge pixel point;
calculating a gray value difference value between the second edge pixel point and the first edge pixel point, and determining the accuracy of a depth map according to the gray value difference value, wherein the value of the accuracy is in direct proportion to the absolute value of the gray value difference value;
and detecting whether the accuracy is lower than a preset threshold value.
In some embodiments, the central processor 3011 is to: dividing the gray level map into M multiplied by N first areas according to a preset dividing mode, and dividing the depth map into M multiplied by N second areas;
determining a target second region with a gray value larger than a preset threshold value from the M multiplied by N second regions, and determining a target first region corresponding to the target second region from the M multiplied by N first regions;
And performing edge detection processing on the first target area to obtain a first edge pixel point, and performing edge detection processing on the second target area to obtain a second edge pixel point.
In some embodiments, the central processor 3011 is configured to:
acquire multiple frames of calibration images shot by the cameras when the accuracy of the depth map is lower than a preset threshold;
perform corner detection and corner matching on the multiple frames of calibration images to obtain a plurality of corner pairs;
calibrate the camera module based on the multiple frames of calibration images to obtain new calibration parameters when the number of corner pairs is larger than a preset number;
and update the current calibration parameters based on the new calibration parameters.
The embodiment of the application also provides an electronic device. The electronic device can be a smart phone, a tablet computer, or similar equipment. Referring to fig. 5, fig. 5 is a schematic diagram of a first structure of the electronic device according to an embodiment of the present application. The electronic device 300 includes a co-processing chip 301, an application processing chip 302 connected to the co-processing chip, and a camera module 303 connected to the co-processing chip. The co-processing chip 301 is configured to: acquire a first image output by the camera module of the electronic device; obtain a depth map from the first image and the current calibration parameters; detect whether the accuracy of the depth map is lower than a preset threshold; and, when the accuracy of the depth map is lower than the preset threshold, calibrate the camera module to obtain new calibration parameters and update the current calibration parameters based on the new calibration parameters.
The embodiment of the application also provides electronic equipment. The electronic equipment can be a smart phone, a tablet personal computer and other equipment. Referring to fig. 6, fig. 6 is a schematic diagram of a second structure of the electronic device according to the embodiment of the present application. The electronic device 400 comprises a processor 401 and a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is the control center of the electronic device 400. It connects the various parts of the electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or calling computer programs stored in the memory 402 and by calling data stored in the memory 402, thereby monitoring the electronic device as a whole.
The memory 402 may be used to store computer programs and data. The memory 402 stores a computer program containing instructions executable by the processor. The computer program may constitute various functional modules. The processor 401 performs various functional applications and data processing by calling the computer program stored in the memory 402.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to the processes of one or more computer programs into the memory 402, and executes the computer programs stored in the memory 402, so as to implement the following functions:
Acquiring a first image output by the camera module and acquiring the current calibration parameters of the camera module;
obtaining a depth map according to the first image and the current calibration parameters;
detecting whether the accuracy of the depth map is lower than a preset threshold value;
and when the accuracy of the depth map is lower than a preset threshold, calibrating the camera module to obtain new calibration parameters, and updating the current calibration parameters based on the new calibration parameters.
In some embodiments, referring to fig. 7, fig. 7 is a schematic diagram of a third structure of an electronic device according to an embodiment of the present application. The electronic device 400 further comprises: radio frequency circuit 403, display 404, control circuit 405, input unit 406, audio circuit 407, sensor 408, and power supply 409. The processor 401 is electrically connected to the radio frequency circuit 403, the display 404, the control circuit 405, the input unit 406, the audio circuit 407, the sensor 408, and the power supply 409, respectively.
The radio frequency circuit 403 is used to transmit and receive radio frequency signals to communicate with a network device or other electronic device through wireless communication.
The display 404 may be used to display information entered by a user or provided to a user as well as various graphical user interfaces of the electronic device, which may be composed of images, text, icons, video, and any combination thereof.
The control circuit 405 is electrically connected to the display screen 404, and is used for controlling the display screen 404 to display information.
The input unit 406 may be used to receive entered numbers, character information, or user characteristic information (e.g., fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 406 may include a fingerprint recognition module.
The audio circuit 407 may provide an audio interface between the user and the electronic device through a speaker and a microphone. The audio circuit 407 includes a microphone, which is electrically connected to the processor 401 and is used for receiving voice information input by the user.
The sensor 408 is used to collect external environmental information. The sensor 408 may include one or more of an ambient brightness sensor, an acceleration sensor, a gyroscope, and the like.
The power supply 409 is used to power the various components of the electronic device 400. In some embodiments, power supply 409 may be logically connected to processor 401 through a power management system, thereby performing functions such as managing charging, discharging, and power consumption through the power management system.
Although not shown in the drawings, the electronic device 400 may further include a camera, a Bluetooth module, and the like, which are not described here.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to the processes of one or more computer programs into the memory 402, and executes the computer programs stored in the memory 402, so as to implement the following functions:
acquiring a first image output by the camera module and acquiring the current calibration parameters of the camera module;
obtaining a depth map according to the first image and the current calibration parameters;
detecting whether the accuracy of the depth map is lower than a preset threshold value;
and when the accuracy of the depth map is lower than a preset threshold, calibrating the camera module to obtain new calibration parameters, and updating the current calibration parameters based on the new calibration parameters.
As can be seen from the foregoing, the embodiment of the present application provides an electronic device that obtains a first image output by the camera module, obtains a depth map from the first image and the current calibration parameters of the camera module, detects the accuracy of the depth map, and determines whether that accuracy is lower than a preset threshold. When the accuracy of the depth map is lower than the preset threshold, it can be determined that the camera module may have suffered an abnormality such as aging; the camera module is then calibrated to obtain new calibration parameters, and the current calibration parameters are updated based on the new calibration parameters. With the scheme of the embodiment of the present application, an abnormality of the camera module can be detected in a timely manner, and the detection can be executed in the background without manual triggering by the user, which is convenient; moreover, when an abnormality is detected, the calibration parameters of the camera module can be recalibrated, improving the accuracy of image processing on the electronic device.
The embodiment of the application also provides a computer-readable storage medium in which a computer program is stored; when the computer program runs on a computer, the computer executes the parameter calibration method of the camera module according to any of the above embodiments.
It should be noted that, as those skilled in the art will understand, all or part of the steps in the various methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium. The computer-readable storage medium may include, but is not limited to: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, and the like.
Furthermore, the terms "first," "second," and "third," and the like, are used herein to distinguish between different objects rather than to describe a particular sequential order. In addition, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the listed steps or modules, and certain embodiments may include additional steps or modules not listed or inherent to such a process, method, article, or apparatus.
The parameter calibration method, storage medium, co-processing chip, and electronic device for a camera module provided in the embodiments of the present application are described in detail above. The principles and embodiments of the present application are explained herein with specific examples, which are provided only to help understand the methods of the present application and their core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in light of the ideas of the present application; in view of the above, this description should not be construed as limiting the present application.

Claims (10)

1. The parameter calibration method of the camera module is characterized by comprising the following steps of:
acquiring a first image output by the camera module and acquiring the current calibration parameters of the camera module;
obtaining a depth map according to the first image and the current calibration parameters;
detecting whether the accuracy of the depth map is lower than a preset threshold value;
and when the accuracy of the depth map is lower than a preset threshold, calibrating the camera module to obtain new calibration parameters, and updating the current calibration parameters based on the new calibration parameters.
2. The method of claim 1, wherein calibrating the camera module to obtain a new calibration parameter when the accuracy of the depth map is below a preset threshold and updating the current calibration parameter based on the new calibration parameter comprises:
calculating the difference value between the new calibration parameter and the current calibration parameter;
and updating the current calibration parameter based on the new calibration parameter when the absolute value of the difference value is larger than a preset difference value.
3. The method of claim 1, wherein calibrating the camera module to obtain a new calibration parameter when the accuracy of the depth map is below a preset threshold and updating the current calibration parameter based on the new calibration parameter comprises:
and sending the new calibration parameters to an application processing chip of the electronic equipment, so that the application processing chip calculates the difference value between the new calibration parameters and the current calibration parameters, and updating the current calibration parameters based on the new calibration parameters when the absolute value of the difference value is larger than a preset difference value.
4. The method of claim 1, wherein the camera module comprises a plurality of cameras; the obtaining the first image output by the camera module includes:
And acquiring a first image output by each camera in the camera module to obtain a plurality of first images.
5. The method of claim 4, wherein detecting whether the accuracy of the depth map is below a preset threshold comprises:
synthesizing the plurality of first images to obtain a synthesized image;
processing the synthesized image to obtain a gray level image of the synthesized image;
performing edge detection processing on the gray level image to obtain a first edge pixel point, and performing edge detection processing on the depth image to obtain a second edge pixel point;
calculating a gray value difference value between the second edge pixel point and the first edge pixel point, and determining the accuracy of a depth map according to the gray value difference value, wherein the value of the accuracy is inversely proportional to the absolute value of the gray value difference value;
and detecting whether the accuracy is lower than a preset threshold value.
6. The method of claim 5, wherein performing edge detection processing on the gray scale map to obtain a first edge pixel point, and performing edge detection processing on the depth map to obtain a second edge pixel point, comprises:
Dividing the gray level map into M multiplied by N first areas according to a preset dividing mode, and dividing the depth map into M multiplied by N second areas;
determining a target second region with a gray value larger than a preset threshold value from the M multiplied by N second regions, and determining a target first region corresponding to the target second region from the M multiplied by N first regions;
and performing edge detection processing on the first target area to obtain a first edge pixel point, and performing edge detection processing on the second target area to obtain a second edge pixel point.
7. The method according to any one of claims 4 to 6, wherein calibrating the camera module to obtain a new calibration parameter when the accuracy of the depth map is lower than a preset threshold value, and updating the current calibration parameter based on the new calibration parameter, comprises:
when the accuracy of the depth map is lower than a preset threshold value, acquiring multi-frame calibration images obtained by shooting by the cameras;
performing corner detection and corner matching processing on the multi-frame calibration image to obtain a plurality of corner pairs;
when the number of the corner pairs is larger than the preset number, calibrating the camera module based on the multi-frame calibration images to obtain new calibration parameters;
Updating the current calibration parameters based on the new calibration parameters.
8. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when run on a computer, causes the computer to perform the method of calibrating parameters of a camera module according to any of claims 1 to 7.
9. A co-processing chip comprising a central processor, wherein the central processor is configured to perform the method for calibrating parameters of the camera module according to any one of claims 1 to 7.
10. An electronic device comprising a processor and a memory, the memory storing a computer program, characterized in that the processor is adapted to perform the method of calibrating parameters of a camera module according to any of claims 1 to 7 by invoking the computer program.
CN202111615151.4A 2021-12-27 2021-12-27 Parameter calibration method, storage medium, co-processing chip and electronic equipment Pending CN116363174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111615151.4A CN116363174A (en) 2021-12-27 2021-12-27 Parameter calibration method, storage medium, co-processing chip and electronic equipment

Publications (1)

Publication Number Publication Date
CN116363174A 2023-06-30

Family

ID=86935603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111615151.4A Pending CN116363174A (en) 2021-12-27 2021-12-27 Parameter calibration method, storage medium, co-processing chip and electronic equipment

Country Status (1)

Country Link
CN (1) CN116363174A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117119113A (en) * 2023-10-20 2023-11-24 安徽淘云科技股份有限公司 Camera self-calibration method and device of electronic equipment and electronic equipment
CN117119113B (en) * 2023-10-20 2024-01-23 安徽淘云科技股份有限公司 Camera self-calibration method and device of electronic equipment and electronic equipment

Similar Documents

Publication Publication Date Title
US11170708B2 (en) Gamma correction method and device, display device, and computer storage medium
CN110602473B (en) White balance calibration method and device
CN110784651B (en) Anti-shake method and electronic equipment
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN109151442B (en) Image shooting method and terminal
CN107623818B (en) Image exposure method and mobile terminal
CN109462745B (en) White balance processing method and mobile terminal
CN111145192A (en) Image processing method and electronic device
CN110442521B (en) Control unit detection method and device
US20230364510A1 (en) Image prediction method, electronic device, and storage medium
CN108718388B (en) Photographing method and mobile terminal
CN109819166B (en) Image processing method and electronic equipment
CN110930329A (en) Starry sky image processing method and device
CN111083386B (en) Image processing method and electronic device
CN109754439B (en) Calibration method, calibration device, electronic equipment and medium
CN111145151A (en) Motion area determination method and electronic equipment
CN116363174A (en) Parameter calibration method, storage medium, co-processing chip and electronic equipment
CN110086987B (en) Camera visual angle cutting method and device and storage medium
CN110148167B (en) Distance measuring method and terminal equipment
CN111860064B (en) Video-based target detection method, device, equipment and storage medium
CN109934168B (en) Face image mapping method and device
CN109348212B (en) Image noise determination method and terminal equipment
US20220360707A1 (en) Photographing method, photographing device, storage medium and electronic device
CN108391050B (en) Image processing method and mobile terminal
CN111031265A (en) FSR (frequency selective response) determining method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination