CN115661258A - Calibration method and device, distortion correction method and device, storage medium and terminal - Google Patents


Info

Publication number: CN115661258A
Authority: CN (China)
Application number: CN202211130943.7A
Other languages: Chinese (zh)
Inventor: 董晓霞
Assignee: Spreadtrum Communications Shanghai Co Ltd
Legal status: Pending
Prior art keywords: depth information, pixel, image, distance, distortion correction
Application filed by Spreadtrum Communications Shanghai Co Ltd; priority to CN202211130943.7A

Abstract

A calibration method and device, a distortion correction method and device, a storage medium and a terminal are provided. The calibration method comprises the following steps: acquiring a planar image, wherein the planar image is obtained by shooting a planar object with a multi-camera module to be calibrated; performing stereo correction and stereo matching on the planar image using reference calibration parameters corresponding to the multi-camera module to obtain a first depth information image, wherein the first depth information image comprises a depth information value for each pixel, and the depth information values represent the distance between the planar object and the multi-camera module; and establishing a relationship between distance and a distortion correction coefficient according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel, wherein the distortion correction coefficient is used for correcting the depth information value of a pixel, and the relationship between distance and the distortion correction coefficient is used for performing distortion correction on a second depth information image subsequently obtained by the multi-camera module. The above scheme can improve the calibration effect of the multi-camera module.

Description

Calibration method and device, distortion correction method and device, storage medium and terminal
Technical Field
The embodiment of the invention relates to the technical field of multi-camera module calibration, in particular to a calibration method and device, a distortion correction method and device, a storage medium and a terminal.
Background
With the development of computer technology and multimedia technology, the demand for perceiving the three-dimensional world continues to grow, and more and more applications, such as three-dimensional reconstruction, human-computer interaction and pattern recognition, need to acquire the distance (i.e. depth) of a three-dimensional scene relative to a camera; a depth map is used to represent the third dimension of an object. However, conventional imaging technology can only record a three-dimensional space in a two-dimensional manner, so how to acquire high-quality depth information has become a crucial problem in computer vision.
Common depth map generation methods include stereo matching algorithms based on conventional methods, such as local, global and semi-global stereo matching algorithms, as well as deep learning algorithms based on artificial intelligence. Regardless of the depth map generation algorithm, mainstream algorithms rely on binocular camera calibration to obtain the intrinsic parameters of each camera and the relative extrinsic parameters between the cameras. Lens distortion can be corrected through calibration. During camera manufacturing, a certain error exists between the actual curved surface of the lens and its ideal curved surface; this error changes the refraction direction of light, so that the position of an imaging point deviates, producing lens distortion. Lens distortion causes distortion of the image. Typically, camera lens distortion includes radial distortion and tangential distortion.
Stereo correction depends on the calibration quality of the binocular cameras. Binocular calibration obtains the intrinsic parameters of each camera and the relative extrinsic parameters between the two cameras. Calibration is generally completed on the module factory production line by shooting a specific calibration board in a specific environment and applying a specific calibration algorithm to achieve a good calibration result. However, while a dual-camera device (a device with a dual-camera module) is in use, if the camera module (module for short) is damaged, disassembled for repair, or frequently dropped, the original dual-camera calibration data in the device becomes invalid, so that three-dimensional application functions that use the depth map (such as dual-camera bokeh) behave abnormally. To restore these functions, the module must be replaced, or the intact module must be recalibrated. But after-sales service points often lack the conditions for a production-line calibration scheme. The common after-sales approach is to directly replace the camera module and enable the golden calibration parameters. The golden calibration parameters fit most modules in the batch to a certain extent, but the effect is often not as good as individual calibration of a single device; one of the mismatched parameters is the distortion parameter. If distortion correction is incorrect or incomplete, it introduces position errors into the image pixels after stereo correction.
Disclosure of Invention
The embodiment of the invention solves the technical problem of how to improve the calibration effect of the multi-camera module.
In order to solve the above technical problem, an embodiment of the present invention provides a calibration method for a multi-camera module, including: acquiring a planar image, wherein the planar image is obtained by shooting a planar object with the multi-camera module to be calibrated; performing stereo correction and stereo matching on the planar image using reference calibration parameters corresponding to the multi-camera module to obtain a first depth information image, wherein the first depth information image comprises a depth information value for each pixel, and the depth information values represent the distance between the planar object and the multi-camera module; and establishing a relationship between distance and a distortion correction coefficient according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel, wherein the distortion correction coefficient is used for correcting the depth information value of a pixel, and the relationship between distance and the distortion correction coefficient is used for performing distortion correction on a second depth information image subsequently obtained by the multi-camera module.
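The plane-based idea behind this claim can be sketched in a few lines of Python. This is a minimal illustration with invented data, not the patented implementation: for a planar target the ideal depth is constant, so the ratio of each pixel's measured depth to a reference depth gives a candidate distortion correction coefficient, paired with that pixel's distance from the distortion center.

```python
import math

def radial_distance(px, py, cx, cy):
    """Euclidean distance from pixel (px, py) to the distortion center (cx, cy)."""
    return math.hypot(px - cx, py - cy)

def build_distance_coeff_pairs(depth_image, center, reference_depth):
    """Pair each pixel's distance to the distortion center with the ratio
    measured_depth / reference_depth (a per-pixel correction coefficient).
    depth_image is a list of rows; center is (row, col)."""
    cy, cx = center
    pairs = []
    for y, row in enumerate(depth_image):
        for x, depth in enumerate(row):
            r = radial_distance(x, y, cx, cy)
            pairs.append((r, depth / reference_depth))
    return pairs

# Toy 3x3 depth image of a flat wall at 100 units, with one distorted corner.
depth = [[100, 100, 100],
         [100, 100, 100],
         [100, 100, 110]]
pairs = build_distance_coeff_pairs(depth, center=(1, 1), reference_depth=100.0)
```

For an undistorted module every coefficient would be 1.0; deviations from 1.0 as a function of distance are what the calibration fits.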
Optionally, the establishing a relationship between the distance and the distortion correction coefficient according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel includes: performing region division on the first depth information image to obtain a plurality of regions; and aiming at each region, establishing the relation between the distance corresponding to each region and the distortion correction coefficient according to the distance between each pixel in each region and the distortion center pixel and the depth information value of each pixel.
Optionally, the establishing, for each region, a relationship between a distance corresponding to each region and a distortion correction coefficient according to a distance between each pixel in each region and the distortion center pixel and a depth information value of each pixel includes: calculating the maximum distance between the pixels in each region and the distortion center pixel for each region, and segmenting the maximum distance to obtain a plurality of distance intervals; and fitting the distance between the pixel and the distortion center pixel and the depth information value of the pixel to each pixel of which the distance from the distortion center pixel belongs to each distance interval to obtain the relation between the distance corresponding to each area and the distortion correction coefficient.
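The interval step above can be sketched as follows. This is a simplified illustration (equal-width intervals; the names and data are invented), whereas the text allows variation-driven split points:

```python
def split_into_intervals(max_distance, n_intervals):
    """Split [0, max_distance] into n equal-width distance intervals."""
    step = max_distance / n_intervals
    return [(i * step, (i + 1) * step) for i in range(n_intervals)]

def group_pixels_by_interval(pairs, intervals):
    """pairs: (distance_to_center, depth_value) tuples. Returns one list of
    pairs per interval; the last interval includes its upper bound."""
    groups = [[] for _ in intervals]
    for r, d in pairs:
        for i, (lo, hi) in enumerate(intervals):
            if lo <= r < hi or (i == len(intervals) - 1 and r == hi):
                groups[i].append((r, d))
                break
    return groups

intervals = split_into_intervals(10.0, 4)
groups = group_pixels_by_interval([(1.0, 100), (6.0, 104), (10.0, 112)], intervals)
```

Each non-empty group would then be fitted separately, as the claim describes.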
Optionally, the segmenting the maximum distance includes: determining a distance division point according to a variation of the depth information value of the pixels within each region in a direction from the distortion center pixel to a pixel corresponding to the maximum distance, and dividing the maximum distance into a plurality of distance sections based on the distance division point.
Optionally, the fitting the distance between a pixel and the distortion center pixel and the depth information value of the pixel for each pixel whose distance from the distortion center pixel belongs to each distance interval includes: for a first distance interval with the change quantity of the depth information value not larger than a set threshold, performing linear fitting on the distance between each pixel and the distortion center pixel in the first distance interval and the depth information value of each pixel; and for a second distance interval with the change quantity of the depth information value larger than a set threshold, performing polynomial fitting or spline fitting on the distance between each pixel and the distortion center pixel in the second distance interval and the depth information value of each pixel.
Optionally, for a second distance interval in which the variation of the depth information value is greater than a set threshold, performing polynomial fitting or spline fitting on the distance between each pixel and the distortion center pixel in the second distance interval and the depth information value of each pixel, includes: calculating the mean value of the depth information values of part or all of the pixels in the first distance interval; calculating a quotient of the depth information value of each pixel belonging to the second distance interval and the mean value, and taking the calculated quotient as the error rate of the depth information value of each pixel; and performing polynomial fitting or spline fitting on the distance between each pixel in the second distance interval and the distortion center pixel and the error rate of the depth information value of each pixel.
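The error-rate fit described above can be illustrated with a self-contained quadratic least-squares fit (no external libraries; the data and the quadratic model are invented for the example, and the patent equally allows spline fitting):

```python
def polyfit_quadratic(points):
    """Least-squares fit of y = a + b*r + c*r*r via the 3x3 normal equations,
    solved by Gauss-Jordan elimination with partial pivoting."""
    S = [sum(r ** k for r, _ in points) for k in range(5)]   # sums of r^k
    T = [sum((r ** k) * y for r, y in points) for k in range(3)]
    A = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for col in range(3):
        piv = max(range(col, 3), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        for i in range(3):
            if i != col:
                f = A[i][col] / A[col][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

# First interval: flat depth near the center -> mean is the reference.
first_interval = [(2.0, 100.0), (4.0, 100.0), (6.0, 100.0)]
mean_first = sum(d for _, d in first_interval) / len(first_interval)

# Second interval: depth inflated quadratically with distance in this toy case.
second_interval = [(10.0, 110.0), (15.0, 122.5), (20.0, 140.0), (30.0, 190.0)]
rates = [(r, d / mean_first) for r, d in second_interval]   # the "error rate"
a, b, c = polyfit_quadratic(rates)
```

Here the fit should recover roughly a = 1, b = 0, c = 0.001, i.e. an error rate growing with the square of the distance from the distortion center.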
Optionally, the area division is performed on the first depth information image to obtain a plurality of areas, and the area division includes at least one of the following area division modes: taking the distortion central pixel as a center, and adopting one or more preset radiuses to perform region division on the first depth information image; and dividing the first depth information image into four quadrants by taking the distortion central pixel as an origin and adopting a cross-shaped region division mode.
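Both division modes named above are easy to sketch; the helper names below are invented for illustration:

```python
import math

def region_of_pixel(x, y, cx, cy, radii):
    """Concentric division: index of the first preset radius containing the
    pixel, or len(radii) for pixels beyond the largest radius."""
    r = math.hypot(x - cx, y - cy)
    for i, radius in enumerate(sorted(radii)):
        if r <= radius:
            return i
    return len(radii)

def quadrant_of_pixel(x, y, cx, cy):
    """Cross-shaped division: four quadrants around the distortion center
    (image coordinates, y growing downward)."""
    if x >= cx and y < cy:
        return 1   # top-right
    if x < cx and y < cy:
        return 2   # top-left
    if x < cx and y >= cy:
        return 3   # bottom-left
    return 4       # bottom-right
```

Boundary handling (whether a pixel exactly on a radius or axis belongs to the inner or outer region) is an assumption here; the text does not pin it down.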
Optionally, the establishing a relationship between the distance and the distortion correction coefficient according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel includes: fitting to obtain a function of the distortion correction coefficient and the distance according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel; equally dividing the maximum distance between the pixel in the first depth information image and the distortion center pixel into M parts to obtain M length distances, wherein M is a positive integer greater than or equal to 2; and substituting the distances with the lengths of M into the function, respectively calculating distortion correction coefficients corresponding to the distances with the lengths of M, and establishing a relation between the distances and the distortion correction coefficients based on the distortion correction coefficients corresponding to the distances with the lengths of M.
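The lookup-table construction above can be sketched as follows (the sample fitted function is invented; whether the M sample distances start at 0 or at one step is an assumption):

```python
def build_coeff_table(coeff_fn, max_distance, m):
    """Evaluate the fitted distance->coefficient function at M equally spaced
    distances, producing a (distance, coefficient) lookup table."""
    step = max_distance / m
    return [(i * step, coeff_fn(i * step)) for i in range(1, m + 1)]

# Assumed fitted function k(r) = 1 + 0.0001 * r^2 for illustration.
table = build_coeff_table(lambda r: 1 + 0.0001 * r * r, max_distance=100.0, m=4)
# table -> [(25.0, 1.0625), (50.0, 1.25), (75.0, 1.5625), (100.0, 2.0)]
```

At correction time this table replaces repeated evaluation of the fitted function.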
Optionally, the establishing a relationship between the distance and the distortion correction coefficient according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel includes: obtaining a distortion correction coefficient of each pixel according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel; and obtaining a mask image based on the distortion correction coefficient of each pixel, wherein the mask image comprises the distortion correction coefficient of each pixel in the first depth information image, and the mask image is used for representing the relation between the distance and the distortion correction coefficient.
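A mask image of per-pixel coefficients, and its later fusion with a depth image by element-wise division, can be sketched as follows (minimal illustration with invented data; the reference depth of the planar shot stands in for the ideal distortion-free value):

```python
def build_mask_image(depth_image, reference_depth):
    """Per-pixel distortion correction coefficient: each entry is the pixel's
    measured depth divided by the reference (ideal planar) depth."""
    return [[d / reference_depth for d in row] for row in depth_image]

def apply_mask(depth_image, mask):
    """Fuse a later depth image with the mask by element-wise division."""
    return [[d / k for d, k in zip(drow, krow)]
            for drow, krow in zip(depth_image, mask)]

mask = build_mask_image([[100, 110], [100, 120]], reference_depth=100.0)
corrected = apply_mask([[50, 55], [50, 60]], mask)   # a scene at half the depth
```

If the later scene exhibits the same radial error pattern as the calibration shot, the quotient restores a consistent depth everywhere.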
Optionally, the calibration method for the multi-camera module further includes: after the first depth information image is obtained, preprocessing the first depth information image, wherein the preprocessing includes at least one of the following: image denoising; filtering; and hole filling.
Optionally, the calibration method for the multiple camera modules further includes: before establishing a relation between a distance and a distortion correction coefficient according to the distance between each pixel and a distortion center pixel in the first depth information image and the depth information value of each pixel, judging whether the depth information value in the first depth information image meets a correction condition, wherein the correction condition comprises at least one of the following conditions: the deviation between the maximum depth information value and the minimum depth information value in the first depth information image is greater than a set deviation, and the number of pixels with depth information values greater than a set threshold in the first depth information image is greater than a set number; and if the correction condition is met, establishing a relation between the distance and the distortion correction coefficient according to the distance between each pixel and the distortion center pixel in the first depth information image and the depth information value of each pixel.
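The correction-condition check above can be sketched as a simple gate (the thresholds are invented, and combining the two conditions with "or" is an assumption consistent with "at least one of"):

```python
def meets_correction_condition(depth_image, min_deviation, value_threshold, min_count):
    """(a) spread between max and min depth exceeds min_deviation, or
    (b) more than min_count pixels exceed value_threshold."""
    values = [d for row in depth_image for d in row]
    cond_a = (max(values) - min(values)) > min_deviation
    cond_b = sum(1 for v in values if v > value_threshold) > min_count
    return cond_a or cond_b
```

A nearly flat first depth information image would fail the check and skip the fitting step.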
Optionally, the first depth information image is obtained as follows: acquiring a first planar image and a second planar image shot by the multi-camera module, wherein the first planar image is shot by a first camera and the second planar image is shot by a second camera; performing stereo correction on the first planar image and the second planar image using the reference calibration parameters, so that the epipolar lines of the first planar image are horizontally aligned with the corresponding epipolar lines of the second planar image, to obtain a first corrected image and a second corrected image after stereo correction; performing stereo matching on the first corrected image and the second corrected image, and calculating a depth information value for each pixel according to the position deviation between each pixel in the first corrected image and the pixel at the corresponding position in the second corrected image; and deriving the first depth information image based on the depth information value of each pixel.
The embodiment of the invention also provides a distortion correction method for a depth information image, including: acquiring an actual scene image, wherein the actual scene image is obtained by shooting with a calibrated multi-camera module; performing stereo correction and stereo matching on the actual scene image using the reference calibration parameters corresponding to the multi-camera module to obtain a second depth information image; and acquiring the relationship between distance and the distortion correction coefficient, and performing distortion correction on the second depth information image according to that relationship to obtain a distortion-corrected depth information image; wherein the relationship between distance and the distortion correction coefficient is obtained by calibrating the multi-camera module using any one of the above calibration methods for the multi-camera module.
Optionally, the performing distortion correction on the second depth information image according to the relationship between the distance and the distortion correction coefficient to obtain a depth information image after distortion correction includes: and determining the distortion correction coefficient of each pixel in the second depth information image according to the distance between each pixel in the second depth information image and the distortion center pixel and the relation between the distance and the distortion correction coefficient, and performing distortion correction on the second depth information image by adopting the determined distortion correction coefficient of each pixel to obtain the depth information image after distortion correction.
Optionally, the determining, according to the distance between each pixel in the second depth information image and the distortion center pixel, and by combining the relationship between the distance and the distortion correction coefficient, the distortion correction coefficient of each pixel in the second depth information image, and performing distortion correction on the second depth information image by using the determined distortion correction coefficient of each pixel to obtain a depth information image after distortion correction, includes: traversing all pixels in the second depth information image, and determining the distortion correction coefficient of each pixel according to the region where each pixel is located and the relationship between the distance corresponding to the located region and the distortion correction coefficient; calculating a quotient of the depth information value of each pixel and the corresponding distortion correction coefficient, and taking the calculated quotient as the depth information value after distortion correction of each pixel; and obtaining the depth information image after distortion correction based on the depth information value after each pixel correction.
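The traversal-and-quotient procedure above can be sketched end to end (minimal illustration: the ring-shaped regions of width 1 and the per-region functions are invented stand-ins for the fitted relations):

```python
import math

def correct_depth_image(depth_image, center, region_fns):
    """Traverse all pixels, look up the distance->coefficient function of the
    region the pixel falls in, and take the quotient depth / coefficient as
    the distortion-corrected depth value."""
    cy, cx = center
    corrected = []
    for y, row in enumerate(depth_image):
        out_row = []
        for x, d in enumerate(row):
            r = math.hypot(x - cx, y - cy)
            region = min(int(r), len(region_fns) - 1)   # toy ring regions
            out_row.append(d / region_fns[region](r))
        corrected.append(out_row)
    return corrected

# A flat scene at depth 100 whose measurement is inflated by a known radial
# factor; correction should restore ~100 everywhere.
region_fns = [lambda r: 1.0, lambda r: 1.0 + 0.1 * r]
measured = [[100 * region_fns[min(int(math.hypot(x - 1, y - 1)), 1)](math.hypot(x - 1, y - 1))
             for x in range(3)] for y in range(3)]
flat = correct_depth_image(measured, center=(1, 1), region_fns=region_fns)
```

The division inverts the multiplicative radial error that the calibration step measured.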
Optionally, the determining, according to the region where each pixel is located and the relationship between the distance corresponding to that region and the distortion correction coefficient, the distortion correction coefficient of each pixel includes any one of the following: if the distance between a pixel and the distortion center pixel is equal to one of the M distances, taking the distortion correction coefficient corresponding to that distance as the distortion correction coefficient of the pixel; if the distance between a pixel and the distortion center pixel lies between a first distance and a second distance, obtaining the distortion correction coefficient of the pixel by linear interpolation between the distortion correction coefficient corresponding to the first distance and the distortion correction coefficient corresponding to the second distance, wherein the M distances include the first distance and the second distance, the M distances are obtained by equally dividing the maximum distance between a pixel in the first depth information image and the distortion center pixel into M parts, and M is a positive integer greater than or equal to 2.
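The two lookup cases above map directly onto a small table-lookup helper (names and table values invented; clamping outside the table is an assumption the text does not cover):

```python
def coeff_from_table(table, r):
    """table: (distance, coefficient) pairs sorted by distance (the M
    distances). An exact match returns the stored coefficient; otherwise the
    coefficient is linearly interpolated between the bracketing distances."""
    for d, k in table:
        if d == r:
            return k
    for (d0, k0), (d1, k1) in zip(table, table[1:]):
        if d0 < r < d1:
            t = (r - d0) / (d1 - d0)
            return k0 + t * (k1 - k0)
    # Outside the table: clamp to the nearest endpoint (assumption).
    return table[0][1] if r < table[0][0] else table[-1][1]

table = [(25.0, 1.0), (50.0, 1.2), (75.0, 1.6), (100.0, 2.0)]
```

For example, a pixel at distance 62.5 lands halfway between the 50.0 and 75.0 entries and receives the midpoint coefficient 1.4.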
Optionally, when the mask image is used to represent the relationship between the distance and the distortion correction coefficient, the distortion correction is performed on the second depth information image according to the distance and the distortion correction coefficient to obtain a depth information image after distortion correction, including: and fusing the second depth information image and the mask image to obtain a fused image, wherein the fused image is the corrected depth information image.
Optionally, the second depth information image is obtained as follows: acquiring a first actual scene image and a second actual scene image subsequently shot by the multi-camera module, wherein the first actual scene image is shot by a first camera and the second actual scene image is shot by a second camera; performing stereo correction on the first actual scene image and the second actual scene image using the reference calibration parameters, so that the corresponding epipolar lines of the first actual scene image and the second actual scene image are horizontally aligned, to obtain a first actual corrected image and a second actual corrected image after stereo correction; and performing stereo matching on the first actual corrected image and the second actual corrected image, calculating a depth information value for each pixel according to the position deviation between each pixel in the first actual corrected image and the pixel at the corresponding position in the second actual corrected image, to obtain the second depth information image.
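The step from the matched position deviation (disparity) to a depth value follows the classic rectified-stereo relation depth = f·B / d, which can be sketched as:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Rectified stereo: depth = focal_length * baseline / disparity, where
    disparity is the horizontal position deviation between matched pixels."""
    if disparity_px <= 0:
        return float("inf")   # no valid match
    return focal_px * baseline_mm / disparity_px

# E.g. focal length 1000 px and baseline 50 mm: a 10 px disparity maps to
# a depth of 5000 mm.
```

The units of the result follow the baseline's units; the example numbers are illustrative only.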
The embodiment of the invention also provides a calibration device for the multi-camera module, which comprises: a planar image acquisition unit, configured to acquire a planar image, wherein the planar image is obtained by shooting a planar object with the multi-camera module to be calibrated; a first depth information image determining unit, configured to perform stereo correction and stereo matching on the planar image using reference calibration parameters corresponding to the multi-camera module to obtain a first depth information image, wherein the first depth information image comprises a depth information value for each pixel, and the depth information values represent the distance between the planar object and the multi-camera module; and a calibration unit, configured to establish a relationship between distance and a distortion correction coefficient according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel, wherein the distortion correction coefficient is used for correcting the depth information value of a pixel, and the relationship between distance and the distortion correction coefficient is used for performing distortion correction on a second depth information image subsequently obtained by the multi-camera module.
An embodiment of the present invention further provides a distortion correction apparatus for a depth information image, including: an actual scene image acquisition unit, configured to acquire an actual scene image, wherein the actual scene image is obtained by shooting with a calibrated multi-camera module; a second depth information image determining unit, configured to perform stereo correction and stereo matching on the actual scene image using the reference calibration parameters corresponding to the multi-camera module to obtain a second depth information image; and a distortion correction unit, configured to acquire the relationship between distance and the distortion correction coefficient, and perform distortion correction on the second depth information image according to that relationship to obtain a distortion-corrected depth information image; wherein the relationship between distance and the distortion correction coefficient is obtained by calibrating the multi-camera module using the calibration method for the multi-camera module provided by any one of the embodiments.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, performs the steps of any of the above calibration methods for a multi-camera module or any of the above distortion correction methods for a depth information image.
An embodiment of the present invention further provides a terminal comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when running the computer program, performs the steps of any of the above calibration methods for a multi-camera module or any of the above distortion correction methods for a depth information image.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, a planar image is acquired, and stereo correction and stereo matching are performed on the planar image using reference calibration parameters corresponding to the multi-camera module to be calibrated to obtain a first depth information image. A relationship between distance and a distortion correction coefficient is established according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel. The relationship between distance and the distortion correction coefficient is used to perform distortion correction on a second depth information image subsequently obtained by the multi-camera module. With the calibration method for the multi-camera module provided by the embodiment of the invention, no specific calibration environment or specific calibration board is needed; it suffices to shoot a planar image of any textured planar object, which is simple and easy to implement. If the multi-camera module had no distortion, all depth information values in the first depth information image of the planar object would be the same; therefore, the relationship between distance and the distortion correction coefficient established from the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel can better reflect the distortion of the multi-camera module after calibration with the reference calibration parameters, improving the calibration effect of the multi-camera module.
In addition, when distortion correction is performed on the second depth information image subsequently obtained by the multi-camera module based on the established relation between the distance and the distortion correction coefficient, the distortion correction effect on the depth information image obtained by the multi-camera module can be improved.
Drawings
Fig. 1 is a flowchart of a calibration method for a multi-camera module according to an embodiment of the present invention;
FIG. 2 is a flow diagram of one embodiment of step 103 of FIG. 1;
FIG. 3 is a flow diagram of one embodiment of step 22 of FIG. 2;
FIG. 4 is a flow diagram of another embodiment of step 103 of FIG. 1;
FIG. 5 is a flow diagram of yet another embodiment of step 103 of FIG. 1;
fig. 6 is a flowchart of a distortion correction method for a depth information image according to an embodiment of the present invention;
FIG. 7 is a flow diagram of an exemplary application scenario in an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a calibration apparatus for a multi-camera module in an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a distortion correction apparatus for a depth information image in an embodiment of the present invention.
Detailed Description
As described above, common depth map generation methods include stereo matching algorithms based on conventional methods, such as local, global and semi-global stereo matching algorithms, as well as deep learning algorithms based on artificial intelligence. Regardless of the depth map generation algorithm, mainstream algorithms rely on binocular camera calibration to obtain the intrinsic parameters of each camera and the relative extrinsic parameters between the cameras.
Stereo correction depends on the calibration quality of the binocular cameras. Binocular calibration obtains the intrinsic parameters of each camera and the relative extrinsic parameters between the two cameras. Calibration is generally completed on the module factory production line by shooting a specific calibration board in a specific environment and applying a specific calibration algorithm to achieve a good calibration result. However, while a user is using a dual-camera device, if the module is damaged, disassembled for repair, or frequently dropped, the original dual-camera calibration data in the device becomes invalid, so that three-dimensional application functions that use the depth map (such as dual-camera bokeh) behave abnormally. To restore these functions, the module must be replaced, or the intact module must be recalibrated.
For after-sales service points, the following calibration schemes are common. The first scheme is to set up a production-line calibration board at the after-sales point and recalibrate the camera module. The second scheme is to set up a simple calibration board at the after-sales point to calibrate the camera module, but with low calibration precision. The third scheme is to directly replace the module at the after-sales point and enable the golden calibration parameters.
With the first scheme, the calibration effect is comparable to production-line calibration, but setting up a production-line calibration environment at an after-sales point makes the calibration cost high. With the second scheme, only a simple calibration board is needed, but the calibration precision is low, the board still has to be specially made, the cost is considerable, and the calibration effect is poor. Because both the first and second schemes require a specific calibration board at the after-sales point, they bring great inconvenience, and after-sales points do not always have the conditions for a production-line calibration scheme. Therefore, after-sales points often adopt the third scheme: directly replacing the camera module and enabling the golden calibration parameters, which requires no specific calibration board and is simple. However, since the golden calibration parameters are derived from a sample of camera modules in the batch, they cannot match the specifics of an individual module; if the distortion parameters do not match, the calibration precision is low. When distortion correction is incorrect or incomplete, disparity errors of the pixels at the four corners appear in the disparity map, greatly affecting subsequent three-dimensional applications such as image background bokeh or three-dimensional reconstruction.
In the prior art, a defective disparity map is post-processed to perform distortion correction; common post-processing algorithms, such as filtering algorithms (e.g., Gaussian filtering, bilateral filtering, guided filtering), morphological methods, and semantic segmentation, improve the accuracy of the disparity map to a certain extent. However, since correct and erroneous points of the depth map cannot be distinguished, especially against a complex background, the distortion at the four corners cannot be improved, and the erroneous depth values are diffused. Post-processing based on deep learning, meanwhile, not only makes data sets difficult to obtain but also has high model computation complexity. In summary, existing after-sale calibration methods for camera modules cannot balance cost and calibration effect.
In order to solve the above problem, an embodiment of the present invention provides a calibration method for a multi-camera module, which specifically includes: acquiring a planar image, and performing stereo correction and stereo matching on the planar image by adopting reference calibration parameters corresponding to the multi-camera module to be calibrated to obtain a first depth information image; and establishing a relationship between distance and distortion correction coefficient according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel. The relationship between the distance and the distortion correction coefficient is used for performing distortion correction on a second depth information image subsequently obtained by the multi-camera module.
With the calibration method of the multi-camera module provided by the embodiment of the present invention, no specific calibration environment or calibration board is required: it is sufficient to shoot a planar image of a planar object with any texture, as long as the texture fills the whole picture, so the operation is simple and easy to implement. If the multi-camera module had no distortion, all depth information values in the first depth information image of the planar object would be the same; therefore, the relationship between distance and distortion correction coefficient established from the distance between each pixel and the distortion center pixel and the depth information value of each pixel can well reflect the distortion remaining after the multi-camera module is calibrated with the reference calibration parameters, improving the calibration effect of the multi-camera module while reducing the calibration cost.
In addition, when distortion correction is performed on the second depth information image subsequently obtained by the multi-camera module based on the established relation between the distance and the distortion correction coefficient, the distortion correction effect on the depth information image obtained by the multi-camera module can be improved.
In order to make the aforementioned objects, features and advantages of the embodiments of the present invention more comprehensible, specific embodiments accompanied with figures are described in detail below.
The embodiment of the invention provides a calibration method of a multi-camera module, which can be used for calibrating the multi-camera module so as to correct the distortion of the multi-camera module. The calibration method can be executed by the terminal, and can also be executed by a chip, a chip module and the like in the terminal.
Specifically, referring to fig. 1, a flowchart of a calibration method for a multi-camera module in the embodiment of the present invention is given, and the calibration method for a multi-camera module specifically may include the following steps:
101, acquiring a planar image, wherein the planar image is obtained by shooting a planar object by a multi-camera module to be calibrated;
102, performing stereo correction and stereo matching on the planar image by using reference calibration parameters corresponding to the multiple camera modules to obtain a first depth information image, wherein the first depth information image comprises depth information values of all pixels, and the depth information values are used for representing distances between the planar object and the multiple camera modules;
103, establishing a relationship between the distance and a distortion correction coefficient according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel, wherein the distortion correction coefficient is used for correcting the depth information value of the pixel, and the relationship between the distance and the distortion correction coefficient is used for carrying out distortion correction on a second depth information image obtained subsequently by the multi-camera module.
In a specific implementation, the image center of the first depth information image may be selected as the distortion center; in this case, the image center pixel of the first depth information image serves as the distortion center pixel. It is understood that any pixel of the first depth information image near the image center and within a preset range area may also be selected as the distortion center pixel. The distortion center pixel may also be determined from the depth information values of the pixels of the first depth information image: for example, the pixel at the center of a region in which the variation of the depth information values is smaller than a set threshold (i.e., the depth values are nearly identical) may be taken as the distortion center pixel.
Further, after the relationship between the distance and the distortion correction coefficient is obtained, it may be stored, for example, in the memory of the multi-camera module, or in the memory of the terminal containing the multi-camera module.
In a specific implementation, the planar image obtained in step 101 may include a first planar image and a second planar image. The first plane image is obtained by shooting through a first camera, and the second plane image is obtained by shooting through a second camera. The first planar image and the second planar image may be photographed in response to a photographing instruction. The multi-camera module at least comprises the first camera and the second camera, wherein the first camera can also be called as a main camera, and the second camera can be called as an auxiliary camera. For example, the terminal including the multi-camera module is provided with camera application software, when a photographing key is operated, a photographing instruction can be generated, the first camera responds to the photographing instruction to photograph to obtain a first plane image, and the second camera responds to the photographing instruction to photograph to obtain a second plane image.
It can be understood that the multi-camera module may include two cameras, or three, four, or more cameras. When the multi-camera module includes a plurality of cameras that can be combined in multiple ways to obtain depth information images, the cameras may be divided into groups of two, the two cameras in each group being denoted as the first camera and the second camera. Each group is calibrated with the calibration method of the camera module provided by the embodiment of the present invention to obtain the relationship between distance and distortion correction coefficient corresponding to that group, and an association between each group and its corresponding relationship is established to facilitate lookup in subsequent use.
When the plane object is shot by the multiple camera modules, the multiple camera modules are parallel to the plane object. In other words, the optical axis direction of the multi-camera module is perpendicular to the plane corresponding to the planar object. The distance between the multi-camera module and the plane object can be 50 cm to 80 cm, the plane object is filled with the whole picture, and the plane image of the plane object is shot after focusing is finished to obtain a first plane image and a second plane image. It can be understood that the distance between the multi-camera module and the planar object can be adjusted according to the size of the planar object.
The planar object may be a planar picture such as a planar poster. A planar poster may be pasted vertically on a wall or a flat support, or laid flat on horizontal ground. As long as the multi-camera module shoots the planar object from a distance of 50 cm to 80 cm while staying parallel to it, so that the texture of the planar object fills the whole picture, the embodiment of the present invention places no limitation on the pattern on the planar object.
In an implementation of step 102, after obtaining the first planar image and the second planar image, stereo-correcting the first planar image and the second planar image by using the reference calibration parameter, that is, re-projecting the first planar image and the second planar image according to the reference calibration parameter, so that the first planar image and the second planar image are located on the same plane, and at the same time, the epipolar line of the first planar image is horizontally aligned with the corresponding epipolar line of the second planar image, so as to obtain a first corrected image and a second corrected image after stereo-correction.
The optical centers O and O' of the two cameras (which may also be referred to as camera units) are the camera centers. The line connecting the optical centers is the baseline, and the points where the baseline intersects the two image planes (the images captured by the cameras) are called the epipoles. The plane formed by an object point p and the two optical centers is called the epipolar plane, and the intersections of an epipolar plane with the two image planes are called epipolar lines. An image point in one image always corresponds to an epipolar line in the other image, and an epipolar line in one image always maps to an image point in the other; every epipolar line passes through the epipole, and different object points correspond to different epipolar lines, but all epipolar lines in one image intersect at the epipole. Based on this epipolar constraint, the reference calibration parameters are used to perform stereo correction on the first planar image and the second planar image.
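As an illustrative aside (not part of the patent), the epipolar constraint for an ideally rectified pair can be checked numerically: after rectification, the epipolar line of a point in one image is simply the same row in the other image, which corresponds to the particular fundamental matrix below. The names `F_RECT` and `epipolar_residual` are assumptions for illustration.

```python
import numpy as np

# Fundamental matrix of an ideally rectified (row-aligned) stereo pair.
# The constraint x2^T F x1 = 0 then reduces to v1 == v2 ("same row").
F_RECT = np.array([[0., 0., 0.],
                   [0., 0., -1.],
                   [0., 1., 0.]])

def epipolar_residual(p1, p2, F=F_RECT):
    """Evaluate x2^T F x1 for points p = (u, v) in homogeneous form.
    The residual is zero when p2 lies on the epipolar line of p1."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    return float(x2 @ F @ x1)
```

For a rectified pair, a correspondence such as (10, 5) in the left image and (7, 5) in the right image gives a zero residual, while points on different rows do not.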
The reference calibration parameter may be a golden calibration parameter. The golden calibration parameters come from a certain number of multi-camera modules of the batch and may be pre-stored in a memory in the multi-camera module. For example, the golden calibration parameters may be programmed as One Time Programmable (OTP) data in a register of the multi-camera module, or programmed into an Electrically Erasable Programmable Read-Only Memory (EEPROM). The golden calibration parameters may include the intrinsic and extrinsic parameters of the cameras. The intrinsic parameters include the optical center, focal length, distortion, and the like. The extrinsic parameters refer to the relative geometric parameters of the cameras, including the rotation matrix, translation matrix, and the like.
Stereo matching is performed on the first corrected image and the second corrected image, and the depth information value of each pixel is calculated from the positional deviation between each pixel in the first corrected image and the corresponding pixel in the second corrected image; the first depth information image is derived based on the depth information value of each pixel.
Stereo matching is a key part of stereo vision research, and aims to match corresponding pixel points in two or more viewpoints and calculate parallax. Taking binocular stereo matching as an example, assuming that the input first plane image and second plane image satisfy a row alignment constraint, that is, imaging points of the same scene point on the first plane image and the second plane image are on the same row, then taking a point of a left image (such as the first plane image) as a reference point, and searching a corresponding feature point in a right image (such as the second plane image) based on the feature expression thereof, thereby calculating a parallax value of the scene point, that is, obtaining a first depth information image.
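To make the row-aligned matching concrete, the following is a minimal numpy sketch of naive sum-of-absolute-differences (SAD) block matching, a toy stand-in for the production stereo matcher the patent leaves unspecified; all function and parameter names are illustrative assumptions.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=3):
    """Naive block matching: for each pixel of the left image, slide a
    window along the SAME row of the right image (row alignment is
    assumed after rectification) and keep the shift d with the smallest
    sum of absolute differences. d is the disparity x_left - x_right."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y-pad:y+pad+1, x-pad:x+pad+1].astype(np.float64)
            costs = [np.abs(patch - right[y-pad:y+pad+1,
                                          x-d-pad:x-d+pad+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

On a synthetic pair where the right image is the left image shifted by a known amount, the recovered disparity equals that shift in the valid interior region.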
The size of the first depth information image is the same as the size of the first planar image and the second planar image.
The first depth information image may be a depth map or a disparity map.
The depth map is an image representing the actual distance from scene points to the camera. In this case, the depth information value is a depth value. Each position of the depth map stores the depth value of the pixel at that position, i.e., the Z coordinate value in the camera coordinate system.
The disparity map is an image formed by calculating the positional deviation between corresponding image points based on a stereo matching algorithm. In this case, the depth information value is a disparity value. The disparity value is inversely related to the depth of the object: the larger the disparity value, the closer the object is to the multi-camera module and the smaller the depth; conversely, the smaller the disparity value, the farther the object and the greater the depth. The disparity map stores, in pixel units, the disparity value of the pixel at each position. The disparity value d equals the column coordinate (i.e., the horizontal coordinate) of a same-name image-point pair in the left view minus its column coordinate in the right view, in pixel units. Same-name image points, also called corresponding image points, are the imaging points of one target point on different photos (also called images). Specifically, taking the first planar image as the left view and the second planar image as the right view, in the disparity map of the first planar image the disparity value at pixel position p equals the column coordinate of the pixel on the first planar image minus the column coordinate of its matching point on the second planar image.
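The inverse relation between disparity and depth described above can be sketched as follows, assuming the standard rectified-stereo formula Z = f·B/d (focal length f in pixels, baseline B in metres); the helper name and the handling of zero disparity are illustrative assumptions.

```python
import numpy as np

def disparity_to_depth(disp, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map to a depth map via Z = f * B / d.
    Larger disparity -> smaller depth (object closer to the module).
    Pixels with (near-)zero disparity are mapped to infinite depth."""
    disp = np.asarray(disp, dtype=np.float64)
    depth = np.full_like(disp, np.inf)
    valid = disp > eps
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth
```

For example, with a 1000 px focal length and a 5 cm baseline, a disparity of 100 px corresponds to a depth of 0.5 m.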
Further, after the first depth information image is obtained, the first depth information image may be preprocessed. Wherein the pretreatment comprises at least one of the following: denoising processing, filtering processing and hole filling processing.
Denoising the first depth information image removes scattered noise and improves the smoothness of the resulting image, which helps ensure the accuracy of the relationship between distance and distortion correction coefficient obtained subsequently. The denoising may use a low-pass filtering method; for example, the first depth information image may be denoised by Gaussian filtering.
The filtering process of the first depth information image may include performing a low-pass filtering process, and the filtering process may remove high-frequency information in the first depth information image, so that the filtered first depth information image has better smoothness, for example, so that an edge smoothing effect of the first depth information image is better. The filtering process may include bilateral filtering, guided filtering, gaussian filtering, and the like.
By performing hole-filling processing on the first depth information image, pixels whose depth information values failed to be matched can be handled, improving the accuracy and smoothness of the obtained first depth information image. When determining holes, which pixels are holes may be decided from the average depth information value of all pixels in the first depth information image: a pixel whose depth information value is less than or equal to a preset percentage of the average may be determined to be a hole. When filling a hole, its depth information value may be determined from the depth information values of the pixels in its neighborhood. For example, the average of the neighboring pixels' depth information values may be used as the hole's depth information value; alternatively, the neighboring depth information values may be weighted and the weighted result used as the hole's depth information value.
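A minimal sketch of the hole detection and neighborhood-mean filling described above might look like the following; the percentage threshold and window size are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def fill_holes(depth, pct=0.2, win=1):
    """Treat pixels whose value is <= pct * global mean as holes
    (unmatched pixels), then replace each hole with the mean of the
    valid pixels in its (2*win+1) x (2*win+1) neighborhood."""
    depth = np.asarray(depth, dtype=np.float64)
    holes = depth <= pct * depth.mean()
    out = depth.copy()
    h, w = depth.shape
    for y, x in zip(*np.nonzero(holes)):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        block = depth[y0:y1, x0:x1]
        valid = ~holes[y0:y1, x0:x1]
        if valid.any():
            out[y, x] = block[valid].mean()   # neighborhood mean fill
    return out
```

A weighted variant, as the text also mentions, would replace the plain mean with a weighted average of the neighborhood values.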
Step 103 may be implemented in a variety of ways, and in one non-limiting embodiment, referring to fig. 2, a flow chart of a specific implementation of step 103 in fig. 1 is given, and step 103 may include the following steps:
Step 21, performing region division on the first depth information image to obtain a plurality of regions.
In a specific implementation, the first depth information image may be divided into regions according to depth information values of respective pixels in the first depth information image.
In a specific implementation, the first depth information image may be subjected to region division by using at least one of the following region division methods to obtain a plurality of regions.
In some embodiments, the first depth information image is divided into regions using one or more preset radii with the distortion center pixel as a center.
Further, the preset radius may be determined according to the depth information value of each pixel in the first depth information image. When there are a plurality of preset radii, the variation of the depth information values of the pixels in the region corresponding to a smaller preset radius is smaller, and the variation in the region corresponding to a larger preset radius is larger. For example, radii R1 and R2, where 0 < R1 < R2, may be used to divide the first depth information image into a first region and a second region. The first region corresponds to radius R1, and the depth information values of the pixels within it are almost the same, or their variation is smaller than a first threshold. The second region corresponds to radius R2, and the absolute variation of its pixels' depth information values relative to the average depth value of the pixels in the first region is greater than or equal to the first threshold; that is, more distorted pixels exist in the second region. Because inaccurate or incomplete distortion correction produces abnormally bright or dark values in the four-corner regions of the first depth information image, and these erroneous pixels are distributed radially, dividing the first depth information image with one or more preset radii centered on the distortion center pixel matches the distribution pattern of the distorted pixels well and improves the accuracy of the resulting relationship between distance and distortion correction coefficient.
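The radial region division can be sketched with numpy as follows; the function and parameter names are assumptions. `searchsorted` assigns each pixel the index of the first preset radius ring that contains it, so pixels beyond the last radius get the next label.

```python
import numpy as np

def radial_regions(shape, center, radii):
    """Label each pixel of an image of the given shape by the radial
    ring it falls in, measured from the distortion-center pixel.
    radii must be sorted ascending; label 0 is the innermost region."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])
    labels = np.searchsorted(np.asarray(radii, dtype=float),
                             dist, side='right')
    return labels, dist
```

With a single radius R1, this yields the two regions of the example above: label 0 inside R1 and label 1 outside, where the radially distributed corner errors concentrate.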
In other embodiments, with the distortion center pixel as the origin, the first depth information image is divided into four quadrants in a cross-shaped region division manner, denoted respectively as the first quadrant, second quadrant, third quadrant, and fourth quadrant.
Step 22, for each region, establishing a relationship between the distance corresponding to that region and the distortion correction coefficient according to the distance between each pixel in the region and the distortion center pixel and the depth information value of each pixel.
Referring to fig. 3, which shows a flowchart of an embodiment of step 22 in fig. 2, step 22 may specifically include the following steps:
Step 31, for each region, calculating the maximum distance between the pixels in the region and the distortion center pixel, and segmenting the maximum distance to obtain a plurality of distance intervals.
In a specific implementation, a distance division point may be determined according to a variation amount of the depth information value of the pixels within each region in a direction from the distortion center pixel to a pixel corresponding to the maximum distance, and the maximum distance may be divided into a plurality of distance sections based on the distance division point.
On the line from the distortion center pixel to the pixel corresponding to the maximum distance, the pixels within the N-degree sectors on either side of that line are taken, and the depth information value of each pixel in the N-degree sectors and the distance from each pixel to the distortion center pixel are recorded. Sampling only the pixels in the N-degree sectors preserves accuracy while reducing the amount of computation and maintaining good robustness.
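A sketch of this N-degree sector sampling is given below; the function name and the default half-angle are illustrative assumptions.

```python
import numpy as np

def sector_samples(depth, center, target, half_angle_deg=5.0):
    """Collect (distance, value) pairs for pixels lying within
    +/- half_angle_deg of the line from the distortion center to
    `target` (the pixel at maximum distance). Coordinates are
    (row, col); the center pixel itself is excluded."""
    h, w = depth.shape
    yy, xx = np.mgrid[0:h, 0:w]
    vy, vx = yy - center[0], xx - center[1]
    ty, tx = target[0] - center[0], target[1] - center[1]
    ang = np.degrees(np.abs(np.arctan2(vy, vx) - np.arctan2(ty, tx)))
    ang = np.minimum(ang, 360.0 - ang)          # wrap angles past 180
    dist = np.hypot(vy, vx)
    keep = (ang <= half_angle_deg) & (dist > 0)
    return dist[keep], depth[keep]
```

The returned pairs are exactly the per-pixel (distance, depth value) samples that the fitting in step 32 consumes.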
Step 32, for each pixel whose distance to the distortion center pixel falls within a given distance interval, fitting the distance between the pixel and the distortion center pixel against the depth information value of the pixel, to obtain the relationship between the distance corresponding to each region and the distortion correction coefficient.
In some non-limiting embodiments, for a first distance interval in which the change amount of the depth information value is not greater than a set threshold, a linear fit is performed on the distance from each pixel to the distortion center pixel and the depth information value of each pixel within the first distance interval.
For a first distance interval in which the variation of the depth information value is not greater than the set threshold, the pixels can be considered substantially free of distortion, since their positions are close to the distortion center pixel. Linearly fitting the distances from the pixels in the first distance interval to the distortion center pixel against the depth information values of those pixels yields a fitting result that is almost a horizontal straight line. As a non-limiting example, the distortion correction coefficient of the pixels corresponding to the first distance interval may be taken as 1.
In other non-limiting embodiments, for a second distance interval in which the variation of the depth information value is greater than a set threshold, a polynomial fitting or a spline fitting is performed on the distance between each pixel and the distortion center pixel and the depth information value of each pixel in the second distance interval.
Specifically, calculating the mean value of the depth information values of part or all of the pixels in the first distance interval; calculating a quotient of the depth information value of each pixel belonging to the second distance interval and the mean value by taking the mean value as a reference value, and taking the calculated quotient as a depth information value error rate of each pixel; and performing polynomial fitting or spline fitting on the distance between each pixel in the second distance interval and the distortion center pixel and the error rate of the depth information value of each pixel, and establishing the relation between the distance corresponding to the second distance interval and the distortion correction coefficient. The depth information value error rate is used to characterize the error rate of the depth information value of each pixel due to distortion.
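The error-rate fitting just described might be sketched as follows, under the assumption (the patent leaves the exact form open) that the correction coefficient applied to a pixel is the reciprocal of the fitted error-rate polynomial at its distance; all names are illustrative.

```python
import numpy as np

def fit_correction(dists, values, ref_mean, degree=2):
    """Fit a polynomial mapping distance -> depth error rate, where the
    error rate of a pixel is its depth value divided by the reference
    mean taken over the (undistorted) first distance interval."""
    err_rate = np.asarray(values, dtype=float) / ref_mean
    coeffs = np.polyfit(np.asarray(dists, dtype=float), err_rate, degree)
    return np.poly1d(coeffs)

def correction_coeff(poly, r):
    """Assumed correction rule: dividing out the fitted error rate,
    i.e. multiplying the depth value by 1 / poly(r)."""
    return 1.0 / poly(r)
```

On synthetic data whose error rate grows quadratically with distance, the degree-2 fit recovers the curve and the coefficient inverts it.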
The first distance interval may be [0, DistMax/3], and the second distance interval may be (DistMax/3, DistMax], where DistMax is the maximum distance of each region and DistMax/3 is one third of the maximum distance.
When a cross-shaped area division manner is adopted to divide the first depth information image into four quadrants, namely a first quadrant, a second quadrant, a third quadrant and a fourth quadrant, the above steps 31 and 32 may be respectively adopted for each quadrant to obtain a relationship between a distance corresponding to each quadrant and a distortion correction coefficient. At this time, the maximum distance of each quadrant is the distance between the vertex of the first depth information image located in the quadrant and the distortion center pixel.
The relationship between the distance and the distortion correction coefficient may be expressed in a functional manner, a mapping table manner, a mask image manner, or the like.
In another non-limiting example, referring to fig. 4, which shows a flow chart of another specific implementation of step 103 in fig. 1, step 103 may include the following steps 41 to 43:
and step 41, fitting to obtain a function of the distortion correction coefficient and the distance according to the distance between each pixel and the distortion center pixel in the first depth information image and the depth information value of each pixel.
Step 42, equally dividing the maximum distance between the pixels in the first depth information image and the distortion center pixel into M parts to obtain M distance values.
Wherein M is a positive integer greater than or equal to 2.
Step 43, substituting the M distance values into the function to calculate the distortion correction coefficient corresponding to each of them, and establishing the relationship between distance and distortion correction coefficient based on these coefficients. In this case, the relationship may be represented in the form of a mapping table: when the relationship is used, the distortion correction coefficient corresponding to a distance is looked up from the mapping table according to the distance between a pixel in the second depth information image and the distortion center pixel.
It is understood that the above steps 41 to 43 may also be used to implement step 32.
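Steps 42 and 43 can be sketched as a small lookup-table builder; the coefficient function passed in stands for whatever relation step 41 fitted, and the nearest-entry lookup is one plausible (assumed) retrieval rule.

```python
import numpy as np

def build_lut(max_dist, coeff_fn, m=16):
    """Split [0, max_dist] into M equally spaced distance values and
    tabulate the distortion correction coefficient at each (steps 42-43)."""
    dists = np.linspace(0.0, max_dist, m)
    coeffs = np.array([coeff_fn(r) for r in dists])
    return dists, coeffs

def lookup(dists, coeffs, r):
    """Nearest-entry lookup used at correction time: return the
    coefficient of the tabulated distance closest to r."""
    return coeffs[np.argmin(np.abs(dists - r))]
```

An interpolating lookup (e.g., `np.interp`) would be a natural refinement of the nearest-entry rule for distances between table entries.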
In another non-limiting example, referring to fig. 5, which shows a flow chart of another specific implementation of step 103 in fig. 1, step 103 may include the following steps 51 to 52:
step 51, obtaining a distortion correction coefficient of each pixel according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel;
Step 52, obtaining a mask image based on the distortion correction coefficient of each pixel, wherein the mask image contains the distortion correction coefficient of each pixel in the first depth information image and is used for representing the relationship between distance and distortion correction coefficient.
Further, the calibration method of the multi-camera module may also include: before establishing the relationship between distance and distortion correction coefficient according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel, judging whether the depth information values in the first depth information image satisfy a correction condition, the correction condition including at least one of the following: the deviation between the maximum and minimum depth information values in the first depth information image is greater than a set deviation; the number of pixels in the first depth information image whose depth information values are greater than a set threshold is greater than a set number. If the correction condition is satisfied, the relationship between distance and distortion correction coefficient is established as described above. Conversely, if the correction condition is not satisfied, this indicates that the multi-camera module needs no correction: the reference calibration parameters fit the module well, and the captured depth information images have little or no distortion. In this way, the calibration efficiency of the multi-camera module can be improved.
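The correction-condition check can be sketched as follows; the threshold values and the exact combination of the two conditions are illustrative assumptions.

```python
import numpy as np

def needs_correction(depth, max_dev=2.0, value_thresh=None, count_thresh=0):
    """Return True if the first depth information image satisfies a
    correction condition: either (max - min) exceeds the set deviation,
    or more than count_thresh pixels exceed value_thresh. If neither
    holds, the reference parameters already fit the module well."""
    dev_bad = (depth.max() - depth.min()) > max_dev
    cnt_bad = False
    if value_thresh is not None:
        cnt_bad = int((depth > value_thresh).sum()) > count_thresh
    return bool(dev_bad or cnt_bad)
```

Skipping the fitting stage entirely when this returns False is what yields the efficiency gain mentioned above.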
According to the above scheme, a planar image is acquired, and stereo correction and stereo matching are performed on it using the reference calibration parameters corresponding to the multi-camera module to be calibrated, obtaining the first depth information image. A relationship between distance and distortion correction coefficient is then established according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel; this relationship is used to perform distortion correction on a second depth information image subsequently obtained by the multi-camera module. With the calibration method of the multi-camera module provided by the embodiment of the present invention, no specific calibration environment or calibration board is required: it is sufficient to shoot a planar image of a planar object with any texture, as long as the texture fills the whole picture, so the operation is simple and easy to implement. If the multi-camera module had no distortion, all depth information values in the first depth information image of the planar object would be the same; therefore, the relationship established from the per-pixel distances and depth information values can well reflect the distortion remaining after the multi-camera module is calibrated with the reference calibration parameters, improving the calibration effect of the multi-camera module.
In addition, when distortion correction is performed on the second depth information image subsequently obtained by the multi-camera module based on the established relation between the distance and the distortion correction coefficient, the distortion correction effect on the depth information image obtained by the multi-camera module can be improved.
The embodiment of the invention further provides a distortion correction method for a depth information image, which can be used to perform distortion correction on the depth information image. The distortion correction method may be executed by a terminal, or by a chip, a chip module, or the like in the terminal.
Referring to fig. 6, a flowchart of a distortion correction method for a depth information image in an embodiment of the present invention is shown, where the distortion correction method for a depth information image may specifically include the following steps:
step 61, acquiring an actual scene image, wherein the actual scene image is obtained by shooting through a calibrated multi-camera module;
step 62, performing stereo correction and stereo matching on the actual scene image by using reference calibration parameters corresponding to the multiple camera modules to obtain a second depth information image;
step 63, acquiring the relation between the distance and the distortion correction coefficient, and performing distortion correction on the second depth information image according to this relation to obtain a distortion-corrected depth information image.
In a specific implementation, the relationship between the distance and the distortion correction coefficient may be obtained by calibrating the multi-camera module by using the calibration method for a multi-camera module provided in any of the above embodiments. The specific scheme for calibrating the multiple camera modules to obtain the relationship between the distance and the distortion correction coefficient may be combined with the descriptions in fig. 1 to 5 and the calibration method for the multiple camera modules provided in any of the above embodiments, and details are not repeated here.
Generally, after a multi-camera module is calibrated with reference calibration parameters, if the reference calibration parameters match the multi-camera module well, the depth information values of the pixels in a first depth information image obtained by shooting a planar object with the module should be essentially the same; if some depth information values differ significantly from the others, distortion remains after the module is calibrated with the reference calibration parameters. Since the relation between distance and distortion correction coefficient is derived from the first depth information image, it can better reflect the distortion condition of the multi-camera module and the pixel positions of the depth information image where distortion occurs. Performing distortion correction on a subsequently obtained second depth information image according to this relation can therefore effectively eliminate or reduce the distortion of the second depth information image, improving the effectiveness and accuracy of the corrected depth information image.
In step 61, acquiring the actual scene image may include acquiring a first actual scene image, shot by the first camera, and a second actual scene image, shot by the second camera. When the multi-camera module includes three, four, or more cameras, a depth information image may be obtained by combining any two of them; that is, when multiple camera combinations can produce depth information images, the module may be calibrated in groups of two cameras each, yielding a relation between distance and distortion correction coefficient for each group. When distortion correction is subsequently required for a second depth information image obtained by the module, the relation corresponding to the cameras used to shoot the actual scene image is selected.
Specifically, when the relation between distance and distortion correction coefficient is obtained for each group of cameras, an association relationship (or mapping relationship) between each group and its relation may be established, so that the relation corresponding to a group can subsequently be looked up by group. The relation corresponding to a group is then used to perform distortion correction on the second depth information image obtained by that group of cameras.
In step 62, the second depth information image may be obtained as follows. Specifically, after a first actual scene image and a second actual scene image are acquired, the reference calibration parameters are adopted to perform stereo correction on the first actual scene image and the second actual scene image, so that polar lines corresponding to the first actual scene image and the second actual scene image are horizontally aligned, and a first actual corrected image and a second actual corrected image after stereo correction are obtained; and performing stereo matching on the first actual correction image and the second actual correction image, calculating to obtain a depth information value of each pixel according to the deviation of the position of each pixel in the first actual correction image and the position of a pixel at a corresponding position in the second actual correction image, and obtaining the second depth information image. The reference calibration parameter may be a gold calibration parameter.
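As a concrete illustration of the stereo matching referred to above, the following sketch implements a minimal one-dimensional block matcher on a single rectified scanline (an illustrative toy, not the matching algorithm of the embodiment; production pipelines typically use methods such as semi-global matching on fully rectified images). For each left-row pixel it searches shifted right-row blocks for the smallest sum of absolute differences, and the winning shift is the disparity from which the depth information value is derived.

```python
import numpy as np

def block_match_row(left_row, right_row, block=5, max_disp=16):
    """Brute-force 1-D block matching along one rectified scanline.

    For each pixel of the left row, search shifts d = 0..max_disp of the
    right row for the block with the smallest sum of absolute differences
    (SAD); the winning shift is the disparity of that pixel.
    """
    half = block // 2
    w = len(left_row)
    disp = np.zeros(w, dtype=np.int32)
    for x in range(half, w - half):
        patch = np.asarray(left_row[x - half:x + half + 1], dtype=np.int64)
        best_cost, best_d = None, 0
        for d in range(min(max_disp, x - half) + 1):
            cand = np.asarray(right_row[x - d - half:x - d + half + 1],
                              dtype=np.int64)
            cost = np.abs(patch - cand).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

# A right scanline equal to the left one shifted by 3 pixels should yield
# a disparity of 3 for the interior pixels.
left = np.arange(64)
right = left + 3          # right_row[x] == left_row[x + 3]
disparity = block_match_row(left, right)
```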
In one non-limiting embodiment, step 63 may be implemented as follows: and determining the distortion correction coefficient of each pixel in the second depth information image according to the distance between each pixel in the second depth information image and the distortion center pixel and the relation between the distance and the distortion correction coefficient, and performing distortion correction on the second depth information image by adopting the determined distortion correction coefficient of each pixel to obtain the depth information image after distortion correction.
Specifically, all pixels in the second depth information image may be traversed, and the distortion correction coefficient of each pixel is determined according to the region where each pixel is located and by combining the relationship between the distance corresponding to the located region and the distortion correction coefficient; calculating the quotient of the depth information value of each pixel and the corresponding distortion correction coefficient, and taking the calculated quotient as the depth information value after distortion correction of each pixel; and obtaining the depth information image after distortion correction based on the depth information value after each pixel correction.
Further, determining the distortion correction coefficient of each pixel according to the region where it is located, in combination with the relation between the distance corresponding to that region and the distortion correction coefficient, may use either of the following methods.
For example, if the distance between a pixel and the distortion center pixel equals one of the M distances, the distortion correction coefficient corresponding to that distance is taken as the distortion correction coefficient of the pixel.
For another example, if the distance between a pixel and the distortion center pixel lies between a first distance and a second distance, linear interpolation between the distortion correction coefficients corresponding to the first distance and the second distance yields the distortion correction coefficient of the pixel. Here the M distances include the first distance and the second distance, and are obtained by evenly dividing the maximum distance between a pixel in the first depth information image and the distortion center pixel into M parts, where M is a positive integer greater than or equal to 2.
Specifically, the distances of M lengths may be represented by M indices, the distance between the pixel and the distortion center pixel is divided by the maximum distance to obtain a first result, and the first result is multiplied by M to obtain a second result. And determining a first index and a second index on the left side and the right side of the second result. And obtaining the distortion correction coefficient corresponding to the second result, namely obtaining the distortion correction coefficient of the pixel, in a linear interpolation mode according to the distortion correction coefficients corresponding to the first index and the second index respectively.
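The index computation and linear interpolation described above can be sketched as follows. Note one boundary detail the text leaves implicit: the index is scaled by M - 1 rather than M in this sketch, so that the maximum distance maps exactly onto the last table entry.

```python
import numpy as np

def coeff_from_lut(lut, dist, dist_max):
    """Linearly interpolate a distortion correction coefficient from a
    lookup table of M coefficients sampled at M evenly spaced distances
    over [0, dist_max]."""
    m = len(lut)
    idx = dist / dist_max * (m - 1)   # floating-point index into the table
    lo = int(np.floor(idx))           # first index (left neighbor)
    hi = min(lo + 1, m - 1)           # second index (right neighbor)
    frac = idx - lo
    return (1.0 - frac) * lut[lo] + frac * lut[hi]
```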
In one case, the distortion correction coefficient is obtained as follows: the mean value of the depth information values of part or all of the pixels in the first distance interval is calculated; taking the mean value as a reference value, the quotient of the depth information value of each pixel in the second distance interval and the mean value is calculated, and this quotient is taken as the error rate of the depth information value of that pixel; polynomial fitting or spline fitting is then performed on the distance between each pixel in the second distance interval and the distortion center pixel and the error rate of its depth information value, establishing the relation between the distance corresponding to the second distance interval and the distortion correction coefficient. The depth information value error rate characterizes the error introduced into the depth information value of each pixel by distortion. The depth information value of a pixel is then divided by its distortion correction coefficient to obtain the distortion-corrected depth information value.
Alternatively, the distortion correction coefficient may be obtained as follows: the mean value of the depth information values of part or all of the pixels in the first distance interval is calculated; taking the mean value as a reference value, the quotient of the mean value and the depth information value of each pixel in the second distance interval is calculated, and this quotient is taken as the error rate of the depth information value of that pixel; polynomial fitting or spline fitting is then performed on the distance between each pixel in the second distance interval and the distortion center pixel and the error rate of its depth information value, establishing the relation between the distance corresponding to the second distance interval and the distortion correction coefficient. In this case, the depth information value of a pixel is multiplied by its distortion correction coefficient to obtain the distortion-corrected depth information value.
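The two variants just described are reciprocal, as this sketch illustrates (the function and parameter names are illustrative, not from the embodiment): the coefficient is either value/mean, corrected by division, or mean/value, corrected by multiplication, and both recover the reference value.

```python
import numpy as np

def error_rates(depth_vals, mean_ref, mode="value_over_mean"):
    """Compute per-pixel error-rate coefficients relative to the reference
    mean, and the corrected values, for both variants."""
    vals = np.asarray(depth_vals, dtype=float)
    if mode == "value_over_mean":
        coeff = vals / mean_ref       # variant 1: correct by dividing
        corrected = vals / coeff
    else:                             # "mean_over_value"
        coeff = mean_ref / vals       # variant 2: correct by multiplying
        corrected = vals * coeff
    return coeff, corrected
```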
In other non-limiting embodiments, if the mask image is used to characterize the relationship between the distance and the distortion correction coefficient, the step 63 can be implemented as follows: and fusing the second depth information image and the mask image to obtain a fused image, wherein the fused image is the corrected depth information image. The size of the mask image is the same as the size of the second depth information image. The mask image is recorded with distortion correction coefficients for each pixel.
And fusing the mask image and the second depth information image, namely performing distortion correction on the depth information value of the pixel at the corresponding position in the second depth information image by using the distortion correction coefficient of each pixel in the mask image, for example, dividing or multiplying the depth information value of each pixel in the second depth information image by the corresponding distortion correction coefficient in the mask image to obtain the depth information value after distortion correction of each pixel, and further obtaining the corrected depth information image according to the depth information value after distortion correction of each pixel.
For example, when calculating the distortion correction coefficient, if the distortion correction coefficient is obtained based on the quotient of the depth information value of each pixel and the reference value, the depth information value of each pixel in the second depth information image is divided by the corresponding distortion correction coefficient in the mask image, and the depth information value after distortion correction of each pixel is obtained.
For example, when the distortion correction coefficient is calculated and the distortion correction coefficient is obtained based on the quotient of the reference value and the depth information value of each pixel, the depth information value of each pixel in the second depth information image is multiplied by the corresponding distortion correction coefficient in the mask image to obtain the depth information value after distortion correction of each pixel.
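The fusion of the mask image with the second depth information image is an elementwise operation, as in this minimal sketch; the divide flag selects between the two coefficient conventions described above.

```python
import numpy as np

def fuse_with_mask(depth_img, mask_img, divide=True):
    """Elementwise fusion of a depth information image with a same-size
    mask image of per-pixel distortion correction coefficients."""
    assert depth_img.shape == mask_img.shape
    return depth_img / mask_img if divide else depth_img * mask_img
```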
Further, after obtaining the second depth information image or after obtaining the distortion-corrected depth information image, at least one of the following processes may be performed on the second depth information image or the distortion-corrected depth information image: image denoising processing, filtering processing and hole filling processing.
The specific implementation manner of the image denoising process, the filtering process, or the hole filling process is similar to the process of the first depth information image, and reference may be made to the related description of the image denoising process, the filtering process, or the hole filling process in the foregoing embodiments, and details are not repeated here.
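As one possible illustration of the hole filling processing mentioned above (the hole marker value 0 and the median strategy are assumptions of this example, not mandated by the embodiment), hole pixels can be filled with the median of the valid values in a small neighborhood:

```python
import numpy as np

def fill_holes(depth, hole_val=0.0, win=3):
    """Fill hole pixels (marked with hole_val) with the median of the
    valid values in a win x win neighborhood; single-pass illustration."""
    out = depth.astype(float).copy()
    r = win // 2
    for y, x in zip(*np.where(depth == hole_val)):
        patch = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        valid = patch[patch != hole_val]
        if valid.size:
            out[y, x] = np.median(valid)
    return out
```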
For a specific process of obtaining the mask image, reference may be made to the description of the mask image portion in the calibration method for a multi-camera module provided in the foregoing embodiment, and details are not repeated here.
In order to facilitate better understanding and implementation of the embodiments of the present invention, a specific implementation of the calibration method of the multi-camera module and the distortion correction method of the depth information image is described below with reference to a flowchart of a typical application scenario in the embodiments of the present invention shown in fig. 7. Taking a disparity map as an example of a depth information image, the method specifically comprises the following steps:
step 701, a first plane image and a second plane image are obtained.
Step 702, performing stereo correction on the first plane image and the second plane image by using the golden calibration parameter to obtain a first corrected image and a second corrected image.
Step 703, performing stereo matching on the first corrected image and the second corrected image to obtain a first disparity map.
Step 704, with the distortion center pixel of the first disparity map as the center, dividing the map into four quadrants in a cross-shaped region division manner.
Step 705, for each quadrant, establishing a lookup table of distance and distortion correction coefficient according to the distance between each pixel of the quadrant and the distortion center pixel and the disparity value of each pixel.
Taking the first quadrant as an example, assume the distortion center of the first disparity map is the image center. Take the pixels in the sector area within N degrees on each side of the diagonal, and record the disparity value of each pixel and its distance to the distortion center pixel, obtaining a disparity value vector VALUE and a distance vector DIST, where the maximum distance is the distance from the image center to the image vertex, denoted DistMax. The distance vector DIST and the disparity value vector VALUE are divided into two segments for fitting, with DIST as the independent variable and VALUE as the dependent variable, establishing a function of VALUE with respect to DIST; 1/3 of DistMax is taken as the segmentation threshold. For the first segment, i.e. the first 1/3 of DistMax, [0, DistMax/3], the pixels are close to the distortion center and can be considered essentially distortion-free. The distance vector of this segment is denoted DIST0 and the disparity value vector VALUE0; linearly fitting DIST0 and VALUE0 yields an almost horizontal straight line, denoted Line0, and the mean value Mean0 of Line0 is taken as the reference disparity value.
For the second segment, the last 2/3 of the distance, (DistMax/3, DistMax], the pixels are far from the distortion center and the distortion gradually increases. The distance vector of this segment is denoted DIST1 and the disparity value vector VALUE1. Dividing VALUE1 by the reference disparity value Mean0 gives the depth information value error rate caused by distortion at each pixel; fitting DIST1 against this error rate (by polynomial fitting, spline fitting, or the like) establishes the fitting relation R0 between distance and distortion correction coefficient.
The interval from 0 to DistMax is evenly divided into M parts, so the lookup table has length M. The distortion correction coefficients for the first 1/3 of the distance, [0, DistMax/3], are all 1; the coefficients for the last 2/3, (DistMax/3, DistMax], are calculated by substituting the distance into the fitting relation R0. This completes the length-M lookup table of distance and distortion correction coefficient.
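The two-segment procedure described above, i.e. taking the mean of the near segment as the reference disparity value Mean0, fitting the far-segment error rate against distance to obtain R0, and sampling the result into a length-M table, can be sketched as follows (the synthetic test data and the polynomial degree are assumptions of this example):

```python
import numpy as np

def build_lut(dists, disps, dist_max, M=32, deg=3):
    """Build a length-M distance-to-coefficient lookup table for one
    quadrant. Segment 1 ([0, dist_max/3]) supplies the reference mean
    Mean0; segment 2 ((dist_max/3, dist_max]) supplies a polynomial fit
    R0 of the error rate value/Mean0 against distance."""
    dists = np.asarray(dists, dtype=float)
    disps = np.asarray(disps, dtype=float)
    near = dists <= dist_max / 3.0
    mean0 = disps[near].mean()                  # reference disparity value
    err = disps[~near] / mean0                  # per-pixel error rate
    r0 = np.polyfit(dists[~near], err, deg)     # fitting relation R0
    grid = np.linspace(0.0, dist_max, M)        # M evenly spaced distances
    lut = np.where(grid <= dist_max / 3.0, 1.0, np.polyval(r0, grid))
    return lut, mean0
```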
Step 706, a first actual scene image and a second actual scene image are obtained.
And 707, performing stereo correction on the first actual scene image and the second actual scene image by using the golden calibration parameter to obtain a first actual corrected image and a second actual corrected image.
Step 708, stereo matching is performed on the first actual corrected image and the second actual corrected image to obtain a second disparity map.
Step 709, determining the quadrant where each pixel is located according to the position of each pixel in the second disparity map, acquiring the lookup table of distance and distortion correction coefficient of that quadrant, determining the distortion correction coefficient of the pixel according to the distance between the pixel and the distortion center pixel, and correcting the disparity value of the pixel by using the determined distortion correction coefficient.
For example, all pixels in the second disparity map are traversed, and the lookup table of distance and distortion correction coefficient of the corresponding quadrant is selected according to the quadrant in which each pixel lies. Taking a pixel to be corrected in the first quadrant as an example: calculate the distance d0 between the pixel and the image center, and denote the distance from the upper-right vertex to the image center as Maxd. Then d0/Maxd × M is a floating-point index into LUT0; taking the two integer indices (between 0 and M) on either side of this floating-point value and linearly interpolating between their coefficients yields the distortion correction coefficient eff0 of the pixel. Dividing the disparity value of the pixel by eff0 gives the distortion-corrected disparity value.
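Putting the correction of step 709 together, the following sketch corrects a synthetically distorted disparity map with a single lookup table (the per-quadrant case differs only in selecting a different table per pixel; the radial distortion model used for the demo data is an assumption of this example):

```python
import numpy as np

def correct_disparity(disp_map, lut, center):
    """Correct a disparity map with a distance-to-coefficient lookup table:
    per pixel, compute the distance to the distortion center, turn it into
    a floating-point table index, linearly interpolate the coefficient
    between the two neighboring entries, and divide the disparity by it."""
    h, w = disp_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.hypot(xs - center[0], ys - center[1])
    maxd = d.max()
    m = len(lut)
    idx = d / maxd * (m - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, m - 1)
    frac = idx - lo
    lut = np.asarray(lut, dtype=float)
    coeff = (1.0 - frac) * lut[lo] + frac * lut[hi]
    return disp_map / coeff

# Demo: a synthetic radial "distortion" of 1% per distance unit, and a
# linear lookup table that encodes exactly that model, so correction
# flattens the map back to the true disparity of 50.
h, w = 25, 33
ys, xs = np.mgrid[0:h, 0:w]
d = np.hypot(xs - 16, ys - 12)
distorted = 50.0 * (1.0 + 0.01 * d)
lut = np.linspace(1.0, 1.2, 11)   # coefficients at 11 evenly spaced distances
corrected = correct_disparity(distorted, lut, (16, 12))
```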
Step 710, obtaining the corrected disparity map.
In a specific implementation, specific implementations of the above steps may be combined with the descriptions in fig. 1 to fig. 6, and are not described herein again.
An embodiment of the present invention further provides a calibration apparatus for multiple camera modules, and referring to fig. 8, a schematic structural diagram of the calibration apparatus for multiple camera modules in the embodiment of the present invention is given, where the calibration apparatus 80 for multiple camera modules may include:
a planar image obtaining unit 81, configured to obtain a planar image, where the planar image is obtained by shooting a planar object by using a multi-camera module to be calibrated;
a first depth information image determining unit 82, configured to perform stereo correction and stereo matching on the planar image by using reference calibration parameters corresponding to the multiple camera modules to obtain a first depth information image, where the first depth information image includes a depth information value of each pixel, and the depth information value is used to represent a distance between the planar object and the multiple camera modules;
and the calibration unit 83 is configured to establish a relationship between a distance and a distortion correction coefficient according to a distance between each pixel in the first depth information image and a distortion center pixel and a depth information value of each pixel, where the distortion correction coefficient is used to correct the depth information value of the pixel, and the distance and the distortion correction coefficient are used to perform distortion correction on a second depth information image subsequently obtained by the multi-camera module.
In a specific implementation, the calibration device 80 of the multi-camera module may correspond to a chip having a calibration function in a terminal, such as a System-On-a-Chip (SOC) or a baseband chip; or to a chip module having a calibration function in a terminal; or to a chip module containing a chip having a data processing function; or to a terminal containing the calibration device 80 of the multi-camera module.
In a specific implementation, for a specific working principle and a working flow of the calibration apparatus 80 for multiple camera modules, reference may be made to the calibration method for multiple camera modules in the foregoing embodiment and the related descriptions in fig. 1 to 7, which are not described herein again.
An embodiment of the present invention further provides a distortion correction apparatus for a depth information image, and referring to fig. 9, a schematic structural diagram of the distortion correction apparatus for a depth information image in the embodiment of the present invention is provided, and the distortion correction apparatus 90 for a depth information image may include:
an actual scene image obtaining unit 91, configured to obtain an actual scene image, where the actual scene image is obtained by shooting with a calibrated multi-camera module;
a second depth information image determining unit 92, configured to perform stereo correction and stereo matching on the actual scene image by using the reference calibration parameters corresponding to the multiple camera modules, to obtain a second depth information image;
the distortion correction unit 93 is configured to obtain a relationship between a distance and a distortion correction coefficient, and perform distortion correction on the second depth information image according to the relationship between the distance and the distortion correction coefficient to obtain a depth information image after distortion correction;
the relationship between the distance and the distortion correction coefficient is obtained by calibrating the multi-camera module by using the calibration method for the multi-camera module provided by any one of the above embodiments. For a specific description of the relationship between the distance and the distortion correction coefficient obtained by calibrating the multiple camera modules, reference may be made to the description in the calibration method for multiple camera modules provided in any of the above embodiments, and details are not repeated here.
In a specific implementation, the distortion correction device 90 for the depth information image may correspond to a chip having a distortion correction function in a terminal, such as a System-On-a-Chip (SOC) or a baseband chip; or to a chip module having a distortion correction function in a terminal; or to a chip module containing a chip having a data processing function; or to a terminal containing the distortion correction device 90 for the depth information image.
In a specific implementation, for a specific working principle and a working flow of the distortion correction apparatus 90 for a depth information image, reference may be made to the distortion correction method for a depth information image, the calibration method for a multi-camera module, and the related descriptions in fig. 1 to 8 in the foregoing embodiments, and details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the calibration method for a multi-camera module provided in any of the above embodiments of the present invention, or the steps of the distortion correction method for a depth information image provided in any of the above embodiments.
The computer-readable storage medium may include non-volatile (non-volatile) or non-transitory (non-transitory) memory, and may also include optical disks, mechanical hard disks, solid state disks, and the like.
Specifically, in the embodiment of the present invention, the processor may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
The embodiment of the present invention further provides a terminal, which includes a memory and a processor, where the memory stores a computer program capable of running on the processor, and the processor, when running the computer program, executes the steps of the calibration method for a multi-camera module provided in any of the above embodiments or the steps of the distortion correction method for a depth information image provided in any of the above embodiments.
The memory is coupled to the processor, and the memory may be located within the terminal or external to the terminal. The memory and the processor may be connected by a communication bus.
The terminal may include, but is not limited to, a mobile phone, a computer, a tablet computer, and other terminal devices, and may also be a server, a cloud platform, and the like.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. The procedures or functions according to the embodiments of the present application are wholly or partially generated when the computer instructions or the computer program are loaded or executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer program may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus and system may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; for example, the division of the unit is only a logic function division, and there may be another division manner in actual implementation; for example, various elements or components may be combined or may be integrated in another system or some features may be omitted, or not implemented. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit. For example, for each device or product applied to or integrated into a chip, each module/unit included in the device or product may be implemented by hardware such as a circuit, or at least a part of the module/unit may be implemented by a software program running on a processor integrated within the chip, and the rest (if any) part of the module/unit may be implemented by hardware such as a circuit; for each device and product applied to or integrated with the chip module, each module/unit included in the device and product may be implemented by hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip, a circuit module, etc.) or different components of the chip module, or at least part of the modules/units may be implemented by a software program running on a processor integrated inside the chip module, and the rest (if any) part of the modules/units may be implemented by hardware such as a circuit; for each device and product applied to or integrated in the terminal, each module/unit included in the device and product may be implemented by using hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip, a circuit module, etc.) or different components in the terminal, or at least part of the modules/units may be implemented by using a software program running on a processor integrated in the terminal, and the rest (if any) part of the modules/units may be implemented by using hardware such as a circuit.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in this document indicates that the associated objects before and after it are in an "or" relationship.
The term "plurality" in the embodiments of the present application means two or more.
The descriptions "first", "second", "third", etc. in the embodiments of the present application are used only to distinguish the objects being described; they neither limit the number of devices in the embodiments of the present application nor constitute any limitation on the embodiments of the present application.
It should be noted that the sequence numbers of the steps in this embodiment do not represent a limitation on the execution sequence of the steps.
Although the present invention is disclosed as above, it is not limited thereto. Those skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

Claims (22)

1. A calibration method for a multi-camera module, characterized by comprising:
acquiring a planar image, wherein the planar image is obtained by shooting a planar object by a multi-camera module to be calibrated;
performing stereo correction and stereo matching on the planar image by using reference calibration parameters corresponding to the multi-camera module to obtain a first depth information image, wherein the first depth information image comprises a depth information value for each pixel, and the depth information values are used for representing the distance between the planar object and the multi-camera module;
and establishing a relationship between distance and a distortion correction coefficient according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel, wherein the distortion correction coefficient is used for correcting the depth information value of a pixel, and the relationship between the distance and the distortion correction coefficient is used for performing distortion correction on a second depth information image subsequently obtained by the multi-camera module.
2. The method for calibrating a multi-camera module according to claim 1, wherein the establishing a relationship between the distance and the distortion correction coefficient according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel comprises:
performing region division on the first depth information image to obtain a plurality of regions;
and aiming at each region, establishing the relation between the distance corresponding to each region and the distortion correction coefficient according to the distance between each pixel in each region and the distortion center pixel and the depth information value of each pixel.
3. The method for calibrating a multi-camera module according to claim 2, wherein the establishing, for each region, a relationship between a distance corresponding to each region and a distortion correction coefficient according to a distance between each pixel in each region and the distortion center pixel and a depth information value of each pixel comprises:
calculating, for each region, the maximum distance between the pixels in the region and the distortion center pixel, and segmenting the maximum distance to obtain a plurality of distance intervals;
and for the pixels whose distances from the distortion center pixel fall within each distance interval, fitting the distance between each pixel and the distortion center pixel against the depth information value of each pixel, to obtain the relationship between the distance corresponding to each region and the distortion correction coefficient.
4. The method for calibrating a multi-camera module according to claim 3, wherein said segmenting the maximum distance comprises:
determining a distance division point according to a variation amount of the depth information value of the pixel in each region in a direction from the distortion center pixel to a pixel corresponding to the maximum distance, and dividing the maximum distance into a plurality of distance sections based on the distance division point.
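Purely as a non-limiting illustration (not part of the claims), the segmentation of claim 4 could be sketched in Python as follows; the relative-variation threshold is an assumed value that the claim leaves open:

```python
import numpy as np

def split_by_variation(depths_along_ray, threshold=0.02):
    """Walk outward from the distortion center along the ray toward the
    pixel at the maximum distance and place a division point where the
    relative depth variation first exceeds `threshold` (assumed value)."""
    depths = np.asarray(depths_along_ray, dtype=float)
    variation = np.abs(np.diff(depths)) / depths[:-1]
    over = np.nonzero(variation > threshold)[0]
    # Index along the ray where the second (strongly distorted) interval begins.
    return int(over[0]) + 1 if over.size else len(depths)
```

The first index where the relative depth change exceeds the threshold becomes the division point between the flat central interval and the strongly distorted outer interval.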
5. The method for calibrating a multi-camera module according to claim 4, wherein the fitting, for the pixels whose distances from the distortion center pixel belong to each distance interval, of the distance between each pixel and the distortion center pixel and the depth information value of each pixel comprises:
for a first distance interval in which the variation of the depth information value is not greater than a set threshold, performing linear fitting on the distance between each pixel in the first distance interval and the distortion center pixel and the depth information value of each pixel;
and for a second distance interval in which the variation of the depth information value is greater than the set threshold, performing polynomial fitting or spline fitting on the distance between each pixel in the second distance interval and the distortion center pixel and the depth information value of each pixel.
6. The method for calibrating a multi-camera module according to claim 5, wherein for a second distance interval in which the variation of the depth information value is greater than a set threshold, performing polynomial fitting or spline fitting on the distance between each pixel and the distortion center pixel and the depth information value of each pixel in the second distance interval comprises:
calculating the mean value of the depth information values of part or all of the pixels in the first distance interval;
calculating a quotient of the depth information value of each pixel belonging to the second distance interval and the mean value, and taking the calculated quotient as the error rate of the depth information value of each pixel;
and performing polynomial fitting or spline fitting on the distance between each pixel in the second distance interval and the distortion center pixel and the error rate of the depth information value of each pixel.
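Purely as a non-limiting illustration (not part of the claims), the fitting described in claims 5 and 6 might be sketched in Python as follows; the interval split radius and the polynomial degree are assumed values not fixed by the claims:

```python
import numpy as np

def fit_distortion_curve(radii, depths, split_radius, poly_degree=3):
    """Sketch of claims 5-6: fit the depth error rate against the distance
    to the distortion center. `split_radius` and `poly_degree` are assumed
    values; the claims do not specify them."""
    radii = np.asarray(radii, dtype=float)
    depths = np.asarray(depths, dtype=float)

    near = radii <= split_radius   # first interval: small depth variation
    far = ~near                    # second interval: large depth variation

    # Claim 6: the mean depth over the flatter first interval is the reference.
    mean_depth = depths[near].mean()

    # Error rate = quotient of each depth value and the mean (claim 6).
    error_rate = depths / mean_depth

    # Claim 5: linear fit near the center, polynomial fit farther out.
    linear_coeffs = np.polyfit(radii[near], error_rate[near], deg=1)
    poly_coeffs = np.polyfit(radii[far], error_rate[far], deg=poly_degree)
    return mean_depth, linear_coeffs, poly_coeffs
```

Spline fitting (e.g. `scipy.interpolate`) could replace the polynomial fit in the second interval, as the claim permits either.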
7. The method for calibrating a multi-camera module according to any one of claims 3 to 6, wherein the region division of the first depth information image to obtain a plurality of regions comprises at least one of the following region division modes:
dividing the first depth information image into regions centered on the distortion center pixel using one or more preset radii;
and dividing the first depth information image into four quadrants by taking the distortion center pixel as the origin and adopting a cross-shaped region division mode.
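Purely as a non-limiting illustration (not part of the claims), both division modes of claim 7 could be sketched as per-pixel labels; the ring fractions below are assumed values:

```python
import numpy as np

def divide_regions(height, width, center, radii=(0.33, 0.66, 1.0)):
    """Sketch of claim 7: label each pixel by its concentric ring (preset
    radii, given here as assumed fractions of the maximum radius) and by
    its quadrant around the distortion center (the cross-shaped split)."""
    cy, cx = center
    ys, xs = np.mgrid[0:height, 0:width]
    r = np.hypot(ys - cy, xs - cx)
    r_max = r.max()

    # Ring index: which preset-radius band the pixel falls into.
    ring = np.digitize(r, [f * r_max for f in radii])

    # Quadrant index 0..3 from the cross through the distortion center.
    quadrant = (ys >= cy).astype(int) * 2 + (xs >= cx).astype(int)
    return ring, quadrant
```

Each (ring, quadrant) pair can then be fitted separately, as claim 2 requires per-region relationships.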
8. The method for calibrating a multi-camera module according to claim 1, wherein the establishing a relationship between the distance and the distortion correction coefficient according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel comprises:
fitting to obtain a function of the distortion correction coefficient and the distance according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel;
equally dividing the maximum distance between a pixel in the first depth information image and the distortion center pixel into M parts to obtain M distance values, wherein M is a positive integer greater than or equal to 2;
and substituting the M distance values into the function, calculating the distortion correction coefficient corresponding to each of the M distance values, and establishing the relationship between the distance and the distortion correction coefficient based on the distortion correction coefficients corresponding to the M distance values.
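Purely as a non-limiting illustration (not part of the claims), the lookup-table construction of claim 8 might look like this in Python; M = 16 is an assumed value, the claim only requires M ≥ 2:

```python
import numpy as np

def build_coefficient_table(coeff_fn, r_max, M=16):
    """Sketch of claim 8: sample the fitted coefficient-vs-distance
    function at M equally spaced distances (the M parts of r_max) to
    form a distance -> coefficient lookup table."""
    distances = np.linspace(r_max / M, r_max, M)
    coefficients = np.array([coeff_fn(d) for d in distances])
    return distances, coefficients
```

`coeff_fn` stands in for the function fitted from the first depth information image; any callable of distance works here.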
9. The method for calibrating a multi-camera module according to claim 1, wherein the establishing a relationship between the distance and the distortion correction coefficient according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel comprises:
obtaining a distortion correction coefficient of each pixel according to the distance between each pixel in the first depth information image and a distortion center pixel and the depth information value of each pixel;
and obtaining a mask image based on the distortion correction coefficient of each pixel, wherein the mask image comprises the distortion correction coefficient of each pixel in the first depth information image, and the mask image is used for representing the relation between the distance and the distortion correction coefficient.
10. The method for calibrating a multi-camera module according to claim 1, further comprising:
after the first depth information image is obtained, preprocessing the first depth information image, wherein the preprocessing comprises at least one of the following steps:
image denoising;
filtering;
and hole filling.
11. The method for calibrating a multi-camera module according to claim 1, further comprising:
before establishing a relation between a distance and a distortion correction coefficient according to the distance between each pixel and a distortion center pixel in the first depth information image and the depth information value of each pixel, judging whether the depth information value in the first depth information image meets a correction condition, wherein the correction condition comprises at least one of the following conditions: the deviation between the maximum depth information value and the minimum depth information value in the first depth information image is greater than a set deviation, and the number of pixels with depth information values greater than a set threshold in the first depth information image is greater than a set number;
and if the correction condition is met, establishing a relation between the distance and the distortion correction coefficient according to the distance between each pixel and the distortion center pixel in the first depth information image and the depth information value of each pixel.
12. The method for calibrating a multi-camera module according to claim 1, wherein the planar image comprises a first planar image and a second planar image, the first planar image being captured by a first camera and the second planar image by a second camera, the multi-camera module comprising at least the first camera and the second camera, and wherein the performing stereo correction and stereo matching on the planar image using the reference calibration parameters corresponding to the multi-camera module to obtain a first depth information image comprises:
performing stereo correction on the first planar image and the second planar image using the reference calibration parameters, so that the epipolar lines of the first planar image are horizontally aligned with the corresponding epipolar lines of the second planar image, to obtain a first corrected image and a second corrected image after stereo correction;
performing stereo matching on the first corrected image and the second corrected image, and calculating the depth information value of each pixel according to the positional deviation between each pixel in the first corrected image and the pixel at the corresponding position in the second corrected image;
and obtaining the first depth information image based on the depth information value of each pixel.
13. A distortion correction method for a depth information image, comprising:
acquiring an actual scene image, wherein the actual scene image is captured by a calibrated multi-camera module;
performing stereo correction and stereo matching on the actual scene image using the reference calibration parameters corresponding to the multi-camera module to obtain a second depth information image;
acquiring the relationship between distance and a distortion correction coefficient, and performing distortion correction on the second depth information image according to the relationship between the distance and the distortion correction coefficient to obtain a distortion-corrected depth information image; the relationship between the distance and the distortion correction coefficient is obtained by calibrating the multi-camera module according to the calibration method of a multi-camera module as claimed in any one of claims 1 to 12.
14. The method for distortion correction of a depth information image according to claim 13, wherein the distortion correction of the second depth information image according to the relationship between the distance and the distortion correction coefficient to obtain a distortion-corrected depth information image comprises:
and determining the distortion correction coefficient of each pixel in the second depth information image according to the distance between each pixel in the second depth information image and the distortion center pixel and the relation between the distance and the distortion correction coefficient, and performing distortion correction on the second depth information image by adopting the determined distortion correction coefficient of each pixel to obtain the depth information image after the distortion correction.
15. The method for distortion correction of a depth information image according to claim 14, wherein the determining distortion correction coefficients of pixels in the second depth information image according to the distances between the pixels in the second depth information image and the distortion center pixel and the relationship between the distances and the distortion correction coefficients, and performing distortion correction on the second depth information image by using the determined distortion correction coefficients of the pixels to obtain the distortion-corrected depth information image comprises:
traversing all pixels in the second depth information image, and determining the distortion correction coefficient of each pixel according to the region where each pixel is located and the relationship between the distance corresponding to the located region and the distortion correction coefficient;
calculating the quotient of the depth information value of each pixel and the corresponding distortion correction coefficient, and taking the calculated quotient as the depth information value after distortion correction of each pixel;
and obtaining the depth information image after distortion correction based on the depth information value after each pixel correction.
16. The method for correcting distortion of a depth information image according to claim 15, wherein the determining the distortion correction coefficient of each pixel according to the region where each pixel is located and the relationship between the distance corresponding to the located region and the distortion correction coefficient includes any one of:
if the distance between a pixel and the distortion center pixel equals one of the M distance values, taking the distortion correction coefficient corresponding to that distance value as the distortion correction coefficient of the pixel;
and if the distance between a pixel and the distortion center pixel lies between a first distance and a second distance, obtaining the distortion correction coefficient of the pixel by linear interpolation between the distortion correction coefficient corresponding to the first distance and that corresponding to the second distance, wherein the M distance values include the first distance and the second distance, the M distance values are obtained by equally dividing the maximum distance between a pixel in the first depth information image and the distortion center pixel into M parts, and M is a positive integer greater than or equal to 2.
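Purely as a non-limiting illustration (not part of the claims), both branches of claim 16 are covered by a single piecewise-linear lookup, which NumPy provides directly:

```python
import numpy as np

def coefficient_for_pixel(r, table_distances, table_coeffs):
    """Sketch of claim 16: exact-match lookup when r equals one of the M
    tabulated distances, and linear interpolation between the two
    neighbouring distances otherwise; np.interp handles both cases."""
    return float(np.interp(r, table_distances, table_coeffs))
```

`table_distances` and `table_coeffs` are the M distance values and their distortion correction coefficients from the calibration stage.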
17. The method for correcting distortion of a depth information image according to claim 13, wherein, when the relationship between the distance and the distortion correction coefficient is represented by a mask image, the performing distortion correction on the second depth information image according to the relationship between the distance and the distortion correction coefficient to obtain a distortion-corrected depth information image comprises:
and fusing the second depth information image and the mask image to obtain a fused image, wherein the fused image is the distortion-corrected depth information image.
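Purely as a non-limiting illustration (not part of the claims), the fusion of claim 17 might be sketched as below. The claim does not spell out the fusion operator; reading it together with claim 15 (quotient of depth value and coefficient), element-wise division is assumed here:

```python
import numpy as np

def fuse_with_mask(depth_image, mask_image):
    """Sketch of claim 17: fuse the second depth information image with
    the per-pixel coefficient mask. Division by the coefficient is an
    assumption drawn from claim 15; zero coefficients are left at 0."""
    depth = np.asarray(depth_image, dtype=float)
    mask = np.asarray(mask_image, dtype=float)
    corrected = np.divide(depth, mask, out=np.zeros_like(depth),
                          where=mask != 0)
    return corrected
```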
18. The distortion correction method for a depth information image according to claim 13, wherein the second depth information image is obtained by:
acquiring a first actual scene image and a second actual scene image which are subsequently shot by the multi-camera module, wherein the first actual scene image is shot by a first camera, and the second actual scene image is shot by a second camera;
performing stereo correction on the first actual scene image and the second actual scene image using the reference calibration parameters, so that the corresponding epipolar lines of the two images are horizontally aligned, to obtain a first actual corrected image and a second actual corrected image after stereo correction;
and performing stereo matching on the first actual corrected image and the second actual corrected image, calculating the depth information value of each pixel according to the positional deviation between each pixel in the first actual corrected image and the pixel at the corresponding position in the second actual corrected image, to obtain the second depth information image.
19. A calibration apparatus for a multi-camera module, characterized by comprising:
a planar image acquiring unit, configured to acquire a planar image, wherein the planar image is obtained by shooting a planar object with a multi-camera module to be calibrated;
a first depth information image determining unit, configured to perform stereo correction and stereo matching on the planar image using reference calibration parameters corresponding to the multi-camera module to obtain a first depth information image, wherein the first depth information image comprises a depth information value for each pixel, and the depth information values are used for representing the distance between the planar object and the multi-camera module;
and a calibration unit, configured to establish a relationship between distance and a distortion correction coefficient according to the distance between each pixel in the first depth information image and the distortion center pixel and the depth information value of each pixel, wherein the distortion correction coefficient is used for correcting the depth information value of a pixel, and the relationship between the distance and the distortion correction coefficient is used for performing distortion correction on a second depth information image subsequently obtained by the multi-camera module.
20. A distortion correction apparatus for a depth information image, comprising:
an actual scene image acquiring unit, configured to acquire an actual scene image, wherein the actual scene image is captured by a calibrated multi-camera module;
a second depth information image determining unit, configured to perform stereo correction and stereo matching on the actual scene image using the reference calibration parameters corresponding to the multi-camera module to obtain a second depth information image;
the distortion correction unit is used for acquiring the relation between the distance and the distortion correction coefficient, and carrying out distortion correction on the second depth information image according to the relation between the distance and the distortion correction coefficient to obtain a depth information image after distortion correction;
the relationship between the distance and the distortion correction coefficient is obtained by calibrating the multi-camera module by using the calibration method of the multi-camera module as claimed in any one of claims 1 to 12.
21. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method for calibrating a multi-camera module according to any one of claims 1 to 12, or the steps of the method for correcting distortion of a depth information image according to any one of claims 13 to 18.
22. A terminal comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, performs the steps of the method for calibrating a multi-camera module according to any one of claims 1 to 12, or the steps of the method for correcting distortion of a depth information image according to any one of claims 13 to 18.
CN202211130943.7A 2022-09-16 2022-09-16 Calibration method and device, distortion correction method and device, storage medium and terminal Pending CN115661258A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211130943.7A CN115661258A (en) 2022-09-16 2022-09-16 Calibration method and device, distortion correction method and device, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211130943.7A CN115661258A (en) 2022-09-16 2022-09-16 Calibration method and device, distortion correction method and device, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN115661258A true CN115661258A (en) 2023-01-31

Family

ID=84984050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211130943.7A Pending CN115661258A (en) 2022-09-16 2022-09-16 Calibration method and device, distortion correction method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN115661258A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117111046A (en) * 2023-10-25 2023-11-24 深圳市安思疆科技有限公司 Distortion correction method, system, device and computer readable storage medium
CN117111046B (en) * 2023-10-25 2024-01-12 深圳市安思疆科技有限公司 Distortion correction method, system, device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN107633536B (en) Camera calibration method and system based on two-dimensional plane template
Jeon et al. Accurate depth map estimation from a lenslet light field camera
CN110689581B (en) Structured light module calibration method, electronic device and computer readable storage medium
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107346061B (en) System and method for parallax detection and correction in images captured using an array camera
CN106815869B (en) Optical center determining method and device of fisheye camera
CN111160232B (en) Front face reconstruction method, device and system
WO2022052582A1 (en) Image registration method and device, electronic apparatus, and storage medium
WO2019232793A1 (en) Two-camera calibration method, electronic device and computer-readable storage medium
CN110619660A (en) Object positioning method and device, computer readable storage medium and robot
CN112233189B (en) Multi-depth camera external parameter calibration method and device and storage medium
CN110136048B (en) Image registration method and system, storage medium and terminal
CN112257713A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
JP7156624B2 (en) Depth map filtering device, depth map filtering method and program
CN116958419A (en) Binocular stereoscopic vision three-dimensional reconstruction system and method based on wavefront coding
CN111739071A (en) Rapid iterative registration method, medium, terminal and device based on initial value
CN115661258A (en) Calibration method and device, distortion correction method and device, storage medium and terminal
CN107256563B (en) Underwater three-dimensional reconstruction system and method based on difference liquid level image sequence
CN111160233B (en) Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
CN115578296A (en) Stereo video processing method
CN115456945A (en) Chip pin defect detection method, detection device and equipment
CN111630569B (en) Binocular matching method, visual imaging device and device with storage function
CN115131273A (en) Information processing method, ranging method and device
CN110728714B (en) Image processing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination