CN113808220A - Calibration method and system of binocular camera, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113808220A
Authority
CN
China
Prior art keywords
camera
calibration
main
auxiliary
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111122839.9A
Other languages
Chinese (zh)
Inventor
高永基
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Wingtech Electronic Technology Co Ltd
Original Assignee
Shanghai Wingtech Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wingtech Electronic Technology Co Ltd filed Critical Shanghai Wingtech Electronic Technology Co Ltd
Priority to CN202111122839.9A priority Critical patent/CN113808220A/en
Publication of CN113808220A publication Critical patent/CN113808220A/en
Priority to PCT/CN2021/140186 priority patent/WO2023045147A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a calibration method and system for a binocular camera, an electronic device and a storage medium. The method comprises the following steps: using a checkerboard as the calibration board; dividing the binocular camera into a main camera and an auxiliary camera, and using the main and auxiliary cameras to separately capture pictures of the checkerboards on the calibration board, obtaining a main shot picture and an auxiliary shot picture; performing monocular calibration of the main camera according to the main shot picture to obtain the intrinsic and extrinsic parameter matrices and the distortion coefficient matrix of the main camera; performing monocular calibration of the auxiliary camera according to the auxiliary shot picture to obtain the intrinsic and extrinsic parameter matrices and the distortion coefficient matrix of the auxiliary camera; and then performing stereo calibration of the main and auxiliary cameras to obtain a rotation matrix, a translation matrix, an essential matrix and a fundamental matrix. Because the main and auxiliary cameras are each monocular-calibrated before the dual-camera stereo calibration to obtain the intrinsic, extrinsic and distortion parameters of each monocular camera, the obtained intrinsic-parameter error is small and the dual-camera calibration result is relatively stable.

Description

Calibration method and system of binocular camera, electronic equipment and storage medium
Technical Field
The present disclosure relates generally to the field of computer vision technologies, and in particular, to a calibration method and system for a binocular camera, an electronic device, and a storage medium.
Background
In image measurement and machine vision applications, calibration of camera parameters is a critical step: the accuracy of the calibration result and the stability of the algorithm directly affect the accuracy of the results produced by the camera. Camera calibration is therefore a precondition for all subsequent work, and improving calibration accuracy is a key focus of research and production.
Smart phones increasingly combine multiple cameras, where any two cameras can be paired to realize a dual-camera function such as image-quality enhancement, background blurring, optical zoom and three-dimensional reconstruction. The dual-camera scheme is the key technology of the multi-camera scheme of a smart phone, and dual-camera calibration is a key link of that scheme, so the importance of dual-camera calibration is increasingly prominent.
Dual-camera calibration means that, in image measurement and machine vision applications, in order to determine the relationship between the geometric position of a point on the surface of an object in three-dimensional space and the corresponding point in an image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters (intrinsic parameters, extrinsic parameters and distortion parameters). In most cases these parameters must be obtained through experiments and computation. This process of solving the geometric model parameters is called camera calibration, and dual-camera calibration is simply the process of calibrating two cameras.
The existing dual-camera calibration technology performs stereo calibration of the main and auxiliary cameras directly and obtains their intrinsic and extrinsic parameters directly, so the obtained intrinsic parameters have a large error and the calibration result is unstable.
Disclosure of Invention
In view of the above-mentioned defects or shortcomings in the prior art, it is desirable to provide a calibration method and system for a binocular camera in which, before the dual-camera stereo calibration, the main and auxiliary cameras are each monocular-calibrated to obtain the intrinsic, extrinsic and distortion parameters of each monocular camera, and only then are the two cameras stereo-calibrated; the resulting intrinsic-parameter error is small and the dual-camera calibration result is relatively stable.
In a first aspect, a calibration method for a binocular camera is provided, which includes the following steps:
s10: taking the checkerboard as a calibration board;
s30: dividing a binocular camera into a main camera and an auxiliary camera, respectively acquiring checkerboard pictures on a calibration plate by adopting the main camera and the auxiliary camera, and respectively obtaining a main shot picture and an auxiliary shot picture correspondingly;
s50: performing monocular calibration on a main camera according to the main shot picture to obtain an internal and external parameter matrix of the main camera and a distortion coefficient matrix of the main camera;
performing monocular calibration on the auxiliary camera according to the auxiliary shot picture to obtain an internal and external parameter matrix of the auxiliary camera and a distortion coefficient matrix of the auxiliary camera;
s70: and then performing stereo calibration of the main and auxiliary cameras to obtain a rotation matrix R, a translation matrix T, an essential matrix E and a fundamental matrix F.
In a second aspect, a calibration system for a binocular camera is provided, including:
the calibration plate design module is configured for designing a checkerboard as a calibration plate;
the image acquisition module is configured for dividing the binocular camera into a main camera and an auxiliary camera, and adopting the main camera and the auxiliary camera to respectively acquire checkerboard pictures on the calibration plate and respectively obtain a main shot picture and an auxiliary shot picture correspondingly;
the monocular calibration module is configured to perform monocular calibration of the main camera according to the main shot picture to obtain an intrinsic and extrinsic parameter matrix and a distortion coefficient matrix of the main camera, and to perform monocular calibration of the auxiliary camera according to the auxiliary shot picture to obtain an intrinsic and extrinsic parameter matrix and a distortion coefficient matrix of the auxiliary camera;
and the stereo calibration module is configured to perform stereo calibration of the main camera and the auxiliary camera to obtain a rotation matrix R, a translation matrix T, an essential matrix E and a fundamental matrix F.
In a third aspect, an electronic device is provided, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the calibration method for the binocular camera provided in any embodiment of the present application when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the steps of the calibration method for a binocular camera provided in any embodiment of the present application.
Compared with the prior art, the invention at least has the following beneficial technical effects:
1) Before the dual-camera stereo calibration, the main and auxiliary cameras are each monocular-calibrated to obtain the intrinsic, extrinsic and distortion parameters of each monocular camera; the obtained intrinsic-parameter error is small and the dual-camera calibration result is relatively stable.
2) The checkerboard calibration board is improved: four or more checkerboards are used, each with a different angular direction and position posture on the board, so that four or more checkerboards can be captured at once for dual-camera calibration, which improves calibration efficiency and reduces the risk of calibration failure.
3) Heterogeneous dual-camera calibration is improved: the checkerboard pictures on the calibration board are captured with the large-FOV camera as the reference, so that the pictures captured by the large-FOV camera keep the predetermined pattern size and the checkerboards fill the picture as much as possible, while the pictures captured by the small-FOV camera cannot keep the predetermined pattern size but the checkerboards still fill the picture as much as possible; the binocular calibration scheme, algorithm flow and core algorithm are redeveloped for heterogeneous dual-camera calibration, making the method suitable for homogeneous and especially heterogeneous dual-camera calibration.
4) Because the checkerboard pictures on the calibration board are captured with the large-FOV camera as the reference, a growth-based checkerboard corner detection algorithm is used to detect the checkerboard corners in the captured pictures, achieving high accuracy and efficiency of corner detection.
5) Monocular calibration of the main and auxiliary cameras is based on a self-developed dual-camera calibration APK and algorithm, which can be used for various types of dual-camera calibration, such as main + depth, main + wide-angle, rear main + macro, etc., and makes it convenient to switch between dual-camera calibration of different modules.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic diagram of a conventional three-dimensional reconstruction process;
fig. 2 is a schematic diagram of a dual-camera calibration process of a conventional smart phone;
fig. 3 is a schematic diagram of a dual-camera calibration process of a smart phone according to an embodiment of the present application;
FIG. 4 is a schematic diagram of four checkerboard calibration boards provided in the present embodiment;
FIG. 5 is a photograph of four checkerboard calibration plates provided in accordance with an embodiment of the present application;
FIG. 6 shows examples of heterogeneous dual-camera captured pictures with and without the predetermined pattern size, wherein picture (a) is a small-FOV main shot picture captured while keeping the predetermined pattern size; picture (b) is a large-FOV auxiliary shot picture captured while keeping the predetermined pattern size; picture (c) is a small-FOV main shot picture captured without keeping the predetermined pattern size; and picture (d) is a large-FOV auxiliary shot picture captured without keeping the predetermined pattern size;
FIG. 7 is a flowchart illustrating the step S50 in FIG. 3;
fig. 8 is a UI interface design diagram of a dual shot calibration APK provided in the embodiment of the present application;
fig. 9 is a picture of an actual effect of a UI interface of a dual-camera calibration APK provided in the embodiment of the present application;
FIG. 10 is a schematic diagram of a pinhole camera model provided in an embodiment of the present application;
fig. 11 is a schematic diagram illustrating transformation of four coordinate systems of a pinhole camera model according to an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of radial distortion provided by an embodiment of the present application; wherein, the diagram (a) is a radial undistorted schematic diagram; FIG. (b) is a schematic view of radial barrel distortion; FIG. (c) is a schematic view of radial pincushion distortion;
FIG. 13 is a schematic diagram of a radial distortion model provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of a tangential distortion model provided by an embodiment of the present application;
FIG. 15 is a diagram of a bi-camera model provided by an embodiment of the present application;
FIG. 16 is a diagram of the epipolar geometry model provided in an embodiment of the present application;
FIG. 17 is a schematic view of a bi-optic translation and rotation provided by an embodiment of the present application;
fig. 18 is an exemplary flowchart illustrating another preferred embodiment of a bi-camera calibration method for a smart phone according to the present application;
FIG. 19 is a schematic diagram of an ideal dual-camera stereo device according to an embodiment of the present application;
fig. 20 is a schematic diagram after stereo rectification provided by an embodiment of the present application;
fig. 21 is a structural diagram of a calibration system of a binocular camera provided in an embodiment of the present application;
fig. 22 is a specific structural diagram of a monocular calibration module provided in an embodiment of the present application;
fig. 23 is an exemplary block diagram of another preferred embodiment of a calibration system of a binocular camera provided in an embodiment of the present application;
fig. 24 is a schematic internal structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Taking three-dimensional reconstruction on a smart phone as an example, a binocular camera is first used for spatial positioning, so dual-camera calibration is a core part of the whole project. As shown in fig. 1, binocular vision mainly comprises five parts: camera calibration, image distortion correction, camera rectification, image matching and three-dimensional reconstruction. As the core part of the whole project, dual-camera calibration has two goals:
First, to recover the real-world position of the object imaged by the camera, it is necessary to know how a world object is transformed onto the image plane of the phone. That is, one purpose of dual-camera calibration is to solve the intrinsic and extrinsic parameter matrices in order to determine this transformation. Because the calibration is performed on the two cameras of the phone, the intrinsic parameter matrix of each camera and the extrinsic parameter matrix between the two cameras are solved.
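The world-to-image transformation these parameter matrices encode can be sketched in a few lines. The following is an illustrative numpy sketch of the standard pinhole projection x = K[R|t]X, not code from the patent; the intrinsic values (800 px focal length, principal point at 320, 240) and the extrinsic pose are assumed for illustration.

```python
import numpy as np

# Assumed intrinsic matrix K (fx, fy in pixels; cx, cy principal point)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Assumed extrinsics: identity rotation, camera 1 m behind the world origin
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])

def project(X_world):
    """Project a 3-D world point to pixel coordinates via x = K [R|t] X."""
    X_cam = R @ X_world + t          # world -> camera coordinates
    x = K @ X_cam                    # camera -> homogeneous image coordinates
    return x[:2] / x[2]              # perspective divide -> pixel coordinates

u, v = project(np.array([0.1, -0.05, 1.0]))
```

A point on the optical axis projects exactly to the principal point, which is a quick sanity check on any such model.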
Second, the perspective projection of a real camera suffers from distortion, so another purpose of dual-camera calibration is to solve the distortion parameters, which are then used for image correction.
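The distortion parameters mentioned here commonly follow the radial-plus-tangential (Brown) model used by typical calibration toolkits. A minimal sketch, with assumed illustrative coefficients in the (k1, k2, p1, p2, k3) ordering; the values are not from the patent:

```python
# Assumed illustrative distortion coefficients (k1, k2, p1, p2, k3)
k1, k2, p1, p2, k3 = -0.2, 0.05, 0.001, -0.001, 0.0

def distort(x, y):
    """Apply radial + tangential distortion to normalized image coords (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With a negative k1 (barrel distortion, as in fig. 12(b)), points away from the optical axis are pulled inward, while the optical axis itself is undistorted.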
Fig. 2 shows a typical dual-camera calibration process of a conventional smart phone, which has the following technical disadvantages:
(1) Only one checkerboard calibration board is prepared and only one checkerboard is captured at a time for binocular calibration, so calibration efficiency is low and the risk of calibration failure is large.
(2) Binocular calibration obtains the main and auxiliary intrinsic parameters directly, so the error is large and the calibration result is unstable.
(3) Only homogeneous dual-camera calibration (binocular calibration with the same focal length and the same resolution) can be performed; heterogeneous dual-camera calibration (binocular calibration with different focal lengths and different resolutions) cannot.
(4) A third party's dual-camera calibration APK and algorithm are used, so development, optimization and application are limited.
Referring to fig. 3, an exemplary flow chart of a calibration method of a binocular camera provided according to an embodiment of the present application is shown.
As shown in fig. 3, in this embodiment, the calibration method for a binocular camera provided by the present invention includes:
s10: the checkerboard is used as a calibration board.
S30: dividing the binocular camera into a main camera and an auxiliary camera, and using the main and auxiliary cameras to separately capture pictures of the checkerboards on the calibration board, correspondingly obtaining a main shot picture and an auxiliary shot picture.
S50: performing monocular calibration of the main camera according to the main shot picture to obtain the intrinsic and extrinsic parameter matrices of the main camera and the distortion coefficient matrix of the main camera; and performing monocular calibration of the auxiliary camera according to the auxiliary shot picture to obtain the intrinsic and extrinsic parameter matrices of the auxiliary camera and the distortion coefficient matrix of the auxiliary camera.
S70: then performing stereo calibration of the main and auxiliary cameras to obtain a rotation matrix R, a translation matrix T, an essential matrix E and a fundamental matrix F.
The common dual-camera calibration technology obtains the main and auxiliary intrinsic and extrinsic parameters directly, so the obtained intrinsic parameters have a large error and the calibration result is unstable. The embodiment of the application improves the binocular calibration algorithm flow: before the dual-camera stereo calibration, the main and auxiliary cameras are each monocular-calibrated to obtain the intrinsic, extrinsic and distortion parameters of each monocular camera, so the obtained intrinsic-parameter error is small and the dual-camera calibration result is relatively stable.
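As an illustration of what the stereo-calibration step S70 computes, the relative pose between the two cameras can be derived from the per-camera monocular extrinsics: if one board pose is (R1, t1) in the main camera and (R2, t2) in the auxiliary camera, then R = R2 R1^T, T = t2 - R t1, and the essential matrix is E = [T]x R. The numpy sketch below uses assumed illustrative poses, not values from the patent:

```python
import numpy as np

# Assumed monocular extrinsics for the same board pose:
# world (board) -> main camera, and world -> auxiliary camera.
R1, t1 = np.eye(3), np.array([0.0, 0.0, 1.0])
th = np.deg2rad(5.0)                       # small yaw between the two cameras
R2 = np.array([[np.cos(th), 0.0, np.sin(th)],
               [0.0, 1.0, 0.0],
               [-np.sin(th), 0.0, np.cos(th)]])
t2 = np.array([-0.05, 0.0, 1.0])           # ~50 mm stereo baseline

# Relative pose main -> auxiliary, as produced by stereo calibration
R = R2 @ R1.T
T = t2 - R @ t1

def skew(v):
    """Cross-product matrix [v]x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

E = skew(T) @ R                            # essential matrix E = [T]x R
# With intrinsics K1, K2, the fundamental matrix is F = K2^-T E K1^-1
```

Any point expressed in main-camera coordinates maps to auxiliary-camera coordinates as X2 = R X1 + T, and corresponding points satisfy the epipolar constraint X2^T E X1 = 0, which is a useful consistency check on a stereo-calibration result.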
In step S10, the calibration board comprises at least four checkerboards, and each checkerboard on the calibration board has a different angular direction and position posture.
Specifically, in step S10, when the calibration board comprises four checkerboards, they are located at the upper-left, upper-right, lower-left and lower-right corners of the board. The upper-left checkerboard is kept horizontal and vertical and serves as the reference; the upper-right checkerboard is rotated rightward by a first preset angle about its own central axis, the lower-left checkerboard is rotated leftward by a second preset angle about its own central axis, and the lower-right checkerboard is rotated upward by a third preset angle about its own central axis.
In the common dual-camera calibration technology, only one checkerboard calibration board is prepared, pictures of the board at different angular directions and position postures are captured multiple times, and monocular or binocular calibration is then performed. Since only one checkerboard is captured at a time, calibration efficiency is low and the risk of calibration failure is large.
In step S10, the present application improves the checkerboard calibration board: four checkerboards are used for calibration in the dual-camera calibration of this study, each with a different angular direction and position posture, as shown in figs. 4-5. Fig. 4 is a schematic diagram of the four-checkerboard calibration board, and fig. 5 is a photograph of it. In figs. 4-5, the upper-left checkerboard is kept horizontal and vertical, and the remaining three checkerboards are placed with the upper-left checkerboard as reference: the upper-right checkerboard is rotated 30 degrees rightward about its own central axis, the lower-left checkerboard is rotated 30 degrees leftward about its own central axis, and the lower-right checkerboard is rotated 30 degrees upward about its own central axis; the distance between every two checkerboards is 1-3 times the side length of the black-and-white squares. Each checkerboard has the same specification: the pattern size is 14x19 (13x18 effective corners) and the side length of each black-and-white square is 15 mm.
With four checkerboards on the calibration board, four checkerboards can be captured at once for dual-camera calibration, which greatly improves calibration efficiency and reduces the risk of calibration failure.
It should be noted that the present application takes a four-checkerboard calibration board as an example; in practice, calibration boards with six, eight or more checkerboards may also be used. Likewise, although a standard 14x19 black-and-white checkerboard is used for calibration here, calibration boards with checkerboards of other shapes and styles may be used, and black-and-white squares with side lengths other than 15 mm may be used. The rotation direction and rotation angle of each checkerboard on the calibration board can also be adjusted according to actual requirements.
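For each such checkerboard, the monocular-calibration step needs the board's 3-D object points: one per effective corner, spaced by the square side length, with Z = 0 on the board plane. A minimal numpy sketch for the 14x19 board described above (13x18 effective corners, 15 mm squares); the left-to-right, top-to-bottom ordering convention is an assumption chosen to match the corner reordering described later:

```python
import numpy as np

CORNERS_X, CORNERS_Y = 13, 18   # effective inner corners of the 14x19 board
SQUARE_MM = 15.0                # side length of each black/white square

# One 3-D object point per inner corner, Z = 0 on the board plane,
# ordered left-to-right then top-to-bottom.
objp = np.zeros((CORNERS_X * CORNERS_Y, 3), dtype=np.float64)
grid = np.mgrid[0:CORNERS_X, 0:CORNERS_Y].T.reshape(-1, 2)
objp[:, :2] = grid * SQUARE_MM
```

These object points, paired with the detected 2-D corners of the same board, are exactly the correspondences a calibration routine consumes.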
In step S30, when the fields of view of the two cameras are different, the camera with the larger field of view is taken as the auxiliary camera and the camera with the smaller field of view as the main camera. The checkerboard pictures on the calibration board are captured with the auxiliary camera as the reference: the picture captured by the auxiliary camera is the auxiliary shot picture, and the picture captured by the main camera is the main shot picture.
The common dual-camera calibration technology can only perform homogeneous dual-camera calibration (binocular calibration with the same focal length and the same resolution), so an open-source binocular calibration toolset can be used without redeveloping the core binocular calibration algorithm. However, it cannot perform heterogeneous dual-camera calibration (binocular calibration with different focal lengths and different resolutions), and most open-source binocular calibration toolsets do not support heterogeneous dual-camera calibration.
In step S30, the present application improves heterogeneous dual-camera calibration: the technical scheme, algorithm flow and core algorithm of binocular calibration are redeveloped for heterogeneous dual-camera calibration, making the scheme suitable for homogeneous and especially heterogeneous dual-camera calibration.
In the conventional dual-camera calibration technology, for homogeneous calibration the checkerboard pictures captured by the main and auxiliary cameras keep a predetermined pattern size (so that a conventional checkerboard corner detection algorithm can detect the corners), and since the focal length and resolution are the same, the checkerboards can fill both the main and auxiliary pictures as much as possible.
As shown in fig. 6, in heterogeneous dual-camera calibration the main and auxiliary focal lengths and resolutions differ, and the predetermined pattern size can only be maintained by capturing with the small-FOV camera (large focal length, small resolution) as the reference (figs. 6(a) and 6(b)). But in the large-FOV picture (small focal length, large resolution) captured this way, the checkerboards do not fill the picture and leave empty margins around the edges, so the intrinsic parameters computed from it are less accurate (fig. 6(b)).
The dual-camera calibration scheme of the present application abandons maintaining the predetermined pattern size for heterogeneous calibration and instead captures pictures with the large-FOV camera (small focal length, large resolution) as the reference, so that the pictures captured by the large-FOV camera keep the predetermined pattern size with the checkerboards filling the picture as much as possible, while the pictures captured by the small-FOV camera cannot keep the predetermined pattern size but the checkerboards still fill the picture as much as possible (figs. 6(c) and 6(d)). Since the traditional picture-acquisition method of dual-camera calibration is changed, the related algorithms also change correspondingly: a growth-based checkerboard corner detection algorithm is adopted in the monocular calibration step of heterogeneous dual-camera calibration to ensure the accuracy and efficiency of checkerboard corner detection.
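The large-FOV versus small-FOV distinction above follows directly from the pinhole relation FOV = 2*arctan(w / 2f): a shorter focal length gives a wider field of view for the same sensor width. A small illustrative sketch (the focal lengths and sensor width are assumed values, not module specifications from the patent):

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm):
    """Horizontal field of view of a pinhole camera: FOV = 2*atan(w / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# Assumed illustrative main (long focal) vs. auxiliary (short focal) modules
fov_main = horizontal_fov_deg(4.7, 5.6)   # smaller FOV
fov_aux = horizontal_fov_deg(2.2, 5.6)    # larger FOV -> used as the reference
```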
Specifically, referring to fig. 7, step S50 includes the following sub-steps:
s51: detecting the checkerboards and checkerboard corner points in the captured main shot picture and auxiliary shot picture, respectively, using a growth-based checkerboard corner detection algorithm.
S52: reordering the detected checkerboards and checkerboard corner points in the main shot picture and the auxiliary shot picture, respectively, to obtain the checkerboards and checkerboard corner points of the main shot picture and of the auxiliary shot picture in the predetermined pattern order.
S53: after regularly aligning the checkerboards and checkerboard corner points of the main shot picture in the predetermined pattern order, computing the intrinsic and extrinsic parameters and the distortion coefficients of the main camera to obtain the main shot intrinsic and extrinsic parameter matrices and the main shot distortion coefficient matrix; and computing the intrinsic and extrinsic parameters and the distortion coefficients of the auxiliary camera to obtain the auxiliary shot intrinsic and extrinsic parameter matrices and the auxiliary shot distortion coefficient matrix.
Specifically, in substep S51, the growth-based checkerboard corner detection algorithm mainly comprises three steps: 1) locating candidate checkerboard corner positions; 2) refining the corner positions and orientations to sub-pixel accuracy; 3) optimizing an energy function and growing the checkerboards. For details of the growth-based checkerboard corner detection algorithm, see Geiger A, Moosmann F, Car Ö, et al. Automatic camera and range sensor calibration using a single shot [C] // Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943.
It should be noted that in the prior art a library function of OpenCV is generally used to detect checkerboard corners, but it cannot detect corners without knowing the checkerboard pattern size, and its efficiency and accuracy are limited. The growth-based checkerboard corner detection algorithm used in the embodiment of the application can detect checkerboard corners with high accuracy even when the pattern size is uncertain, although its detection speed is lower and the detected corners are not in a predetermined order.
Specifically, in substep S52, the checkerboards and the checkerboard corner points in the captured picture are reordered from left to right and from top to bottom. Although the growth-based detection finds each checkerboard and its corner points, both the corner points within each checkerboard and the checkerboards themselves are in an indeterminate order rather than the predetermined one, so each checkerboard and its corresponding corner points need to be reordered.
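A minimal sketch of such a reordering for a single, roughly fronto-parallel checkerboard: sort the detected corners top-to-bottom by y, slice them into rows, then sort each row left-to-right by x. This is an illustrative simplification and not the patent's reordering algorithm; it assumes the rows are well separated in y.

```python
import numpy as np

def reorder_corners(corners, rows, cols):
    """Sort detected (x, y) corners top-to-bottom, then left-to-right per row."""
    pts = sorted(map(tuple, corners), key=lambda p: p[1])   # sort by y first
    ordered = []
    for r in range(rows):
        row = sorted(pts[r * cols:(r + 1) * cols])          # sort row by x
        ordered.extend(row)
    return np.array(ordered)

# A shuffled 2x3 grid of detected corners (illustrative coordinates)
detected = [(20, 10), (0, 0), (10, 11), (10, 0), (0, 9), (20, 1)]
grid = reorder_corners(detected, rows=2, cols=3)
```

After reordering, the i-th 2-D corner corresponds to the i-th 3-D object point of the board, which is the correspondence the calibration step relies on.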
Specifically, in sub-step S53, this application develops a dual-camera calibration APK and algorithm to perform monocular calibration of the main camera and the auxiliary camera. An APK (Android Application Package) is the application package file format used by the Android operating system for distributing and installing mobile applications and middleware. An algorithm is an accurate and complete description of a problem-solving scheme: a series of clear instructions for solving a problem, representing a systematic strategy for describing the problem. The dual-camera calibration APK and algorithm are developed as follows:
1) Developing the dual camera calibration APK
Common dual-camera calibration techniques use a third party's calibration APK and algorithm, which limits development, optimization, and application. In this application, the dual-camera calibration scheme adopts a self-developed calibration APK and algorithm, which provides great flexibility in development, optimization, and application (for example, calibration of different module combinations can be switched conveniently, such as switching from "main + depth" to "main + wide-angle" calibration), and the application range can be optimized and expanded to the greatest extent. In principle, the method can also be used for other types of dual-camera calibration (such as rear main + wide-angle, rear main + macro, etc.).
This research is mainly applied to the dual-camera calibration of the rear main camera module and the depth camera module of a certain smartphone. As shown in fig. 8, the user interface (UI) of the dual-camera calibration APK is first designed according to the calibration requirements. A picture of the actual effect of the UI of the calibration APK is shown in fig. 9. In figs. 8-9, the whole interface defaults to a landscape display and mainly has four parts:
1. A message display text box: before the START button is pressed, the default message "Dual Camera Calib" is displayed; after the START button is pressed, a message reporting calibration success or failure is displayed according to the result of the background dual-camera calibration algorithm.
2. Main and auxiliary preview interfaces: mainly used to display the preview of the checkerboard calibration board as seen by the main camera and by the auxiliary camera; they remain in the foreground throughout the operation of the APK.
3. A START button: when pressed, the current main and auxiliary camera frames are captured and the dual-camera calibration algorithm is executed in the background, until the message display text box shows a calibration success or failure message.
4. A label text box: the default text labels are displayed throughout the APK's runtime, such as the three-line labels "DualCamCalib V1.0.1", "ALG: 1.0.1" and "Z00667 AA 2" in this scheme.
Secondly, the dual-camera calibration APK framework and the algorithm core code are developed according to the UI design. The APK's core code is divided into two parts: a Java layer and a cpp (C++) layer. The core code of the calibration APK framework is mainly implemented in the Java layer, and is responsible for: invoking the main and auxiliary preview interfaces and displaying the main and auxiliary preview images in real time; on a press of the START button, capturing the main and auxiliary camera data, saving them as pictures, and invoking the cpp-layer dual-camera calibration algorithm; and, after the cpp-layer calibration algorithm finishes, displaying the calibration result in the message text box. For data interaction between the Java layer and the cpp layer, a JNI (Java Native Interface) interface that passes and retrieves data as key-value pairs, following the Bundle mechanism, is developed.
2) Developing a dual camera calibration algorithm
The core code of the dual-camera calibration algorithm resides mainly in the cpp layer. This section mainly explains the pinhole camera model and image rectification; the other techniques involved in the calibration core algorithm are developed in the description of the subsequent calibration process.
a. Pinhole camera model
The process of taking a picture with a smartphone is in fact an optical imaging process, and the pinhole camera model is the model most widely adopted for camera imaging. As shown in figs. 10-11, the imaging process in the pinhole camera model involves four coordinate systems — the world coordinate system, the camera coordinate system, the image coordinate system, and the pixel coordinate system — and the transformations between them.
World coordinate system (x_w, y_w, z_w): the reference frame for target object positions. Except for points at infinity, the world coordinate system can be placed freely for convenience of operation; its unit is a length unit such as mm. The world coordinate system has three main uses in binocular vision: determining the position of the calibration object during calibration; serving as the reference frame of the binocular vision system, in which the pose of each camera is given so that the relative pose between the two cameras can be obtained; and serving as the container for the reconstructed three-dimensional coordinates of the object. The world coordinate system is the first stop for bringing an in-view object into the computation.
Camera coordinate system (X_c, Y_c, Z_c): the coordinate system in which the camera measures objects from its own viewpoint. Its origin is at the optical center of the camera and its z-axis is aligned with the camera's optical axis. It is the bridgehead linking the camera to the photographed object: an object in the world coordinate system must first undergo a rigid-body transformation into the camera coordinate system before it can be related to the image coordinate system. It is thus the link between image coordinates and world coordinates; its unit is a length unit such as mm.
Image coordinate system (x, y): takes the center of the CMOS image plane as its origin; it is introduced to describe the perspective projection of an object from the camera coordinate system to the image plane during imaging, so that the coordinates in the pixel coordinate system can then be obtained conveniently. The image coordinate system expresses the location of a pixel in the image in physical units (e.g., millimeters).
Pixel coordinate system (u, v): takes the top-left vertex of the CMOS image plane as its origin; it is introduced to describe the coordinates of an image point on the digital image after the object is imaged, and it is the coordinate system in which the information actually read from the camera resides. The pixel coordinate system is the image coordinate system expressed in units of pixels.
The transformation between the four coordinate systems of the pinhole camera model is given by formula (1):

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f k_u & 0 & u_0 & 0 \\ 0 & f k_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \quad (1)$$

Let the intrinsic matrix be

$$M = \begin{bmatrix} f k_u & 0 & u_0 \\ 0 & f k_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

and the extrinsic matrix be

$$\begin{bmatrix} R & T \end{bmatrix}$$

The combined intrinsic-extrinsic (projection) matrix P is then:

$$P = M \begin{bmatrix} R & T \end{bmatrix} \quad (2)$$

and equation (1) simplifies to:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = P \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \quad (3)$$

In formulas (1)-(3), (x_w, y_w, z_w) are the coordinates of a point p in the world coordinate system; (u, v) are the coordinates of p in the pixel coordinate system; Z_c (u, v, 1)^T are the corresponding coordinates in the camera coordinate system; (u_0, v_0) is the optical center (principal point) of the camera; f is the focal length of the camera in mm; (k_u, k_v) are the numbers of pixels per millimeter in the u and v directions; R is the rotation matrix,

$$R = \begin{bmatrix} r_{00} & r_{01} & r_{02} \\ r_{10} & r_{11} & r_{12} \\ r_{20} & r_{21} & r_{22} \end{bmatrix}$$

with r_00~r_22 its elements; T is the translation vector,

$$T = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}$$

where T_x, T_y, T_z are the translations in the x, y and z directions, respectively.

The conversion from the world coordinate system to the pixel coordinate system (without considering distortion) is as shown in equation (3) above, where the projection matrix P, as can be seen from equation (2), is obtained by multiplying the intrinsic matrix M by the extrinsic matrix [R T].
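As a sanity check of formula (3), the following sketch projects a world point into pixel coordinates. All parameter values here are made up for illustration (they are not calibrated values from this application):

```python
import numpy as np

# Hypothetical intrinsics: f*ku = f*kv = 800 px, principal point (320, 240)
M = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Hypothetical extrinsics: identity rotation, 10 mm translation along z
R = np.eye(3)
T = np.array([[0.0], [0.0], [10.0]])
P = M @ np.hstack([R, T])              # projection matrix P = M [R | T], formula (2)

pw = np.array([1.0, 2.0, 90.0, 1.0])   # world point (homogeneous, mm)
uvw = P @ pw                           # Z_c * (u, v, 1)^T, formula (3)
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(u, v)                            # 328.0 256.0
```

Dividing by the third component (Z_c = 100 mm here) recovers the pixel coordinates, exactly as the homogeneous form of (3) prescribes.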
b. Image rectification
The transformation from the camera coordinate system to the image coordinate system is a perspective transformation. When a camera takes a picture, the real object is projected onto the image plane through the lens, but deviations in manufacturing precision and assembly introduce lens distortion, which distorts the original image. We therefore need to consider imaging distortion. Lens distortion comprises mainly radial distortion, tangential distortion, thin-lens distortion, and so on; since only radial and tangential distortion have a significant influence, only these two are considered.
(a) Radial distortion
Radial distortion, as the name suggests, is distortion distributed along the radius of the lens. It arises because light rays bend more far from the center of the lens than near the center, and it is more pronounced in common inexpensive lenses. It mainly comprises barrel distortion and pincushion distortion. Fig. 12 shows schematic diagrams of no distortion, barrel distortion, and pincushion distortion, respectively.
The distortion at the center of the image plane is 0 and becomes more severe moving toward the edge along the radial direction of the lens. The mathematical model of the distortion can be described by the first few terms of the Taylor series expansion around the principal point; usually the first two terms k_1 and k_2 are used, and for a lens with large distortion, such as a fisheye lens, a third term k_3 and a fourth term k_4 can be added. A point on the imager is adjusted according to its radial position by formulas (4)-(5):

$$x_0 = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8) \quad (4)$$

$$y_0 = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8) \quad (5)$$

where (x, y) are the image-coordinate-system coordinates before radial or tangential correction, (x_0, y_0) are the image-coordinate-system coordinates after correction, the radius is $r = \sqrt{x^2 + y^2}$, and k_1~k_4 are the radial distortion parameters.
Fig. 13 is a schematic diagram showing the displacement of point positions after radial distortion of a lens at different distances from the optical center. It can be seen that the farther from the optical center, the larger the radial displacement and the stronger the distortion, while near the optical center there is almost no displacement.
(b) Tangential distortion
Tangential distortion is caused by the lens itself not being parallel to the camera sensor plane (image plane), which usually results from mounting deviations when the lens is attached to the lens module. The distortion model uses two additional parameters p_1 and p_2, as described by formulas (6)-(7):

$$x_0 = x + [2 p_1 x y + p_2 (r^2 + 2 x^2)] \quad (6)$$

$$y_0 = y + [p_1 (r^2 + 2 y^2) + 2 p_2 x y] \quad (7)$$

where p_1, p_2 are the tangential distortion parameters.
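Formulas (4)-(7) can be combined into a single forward distortion model. The sketch below applies radial and tangential distortion to an ideal image-plane point; the coefficient values in the example are illustrative, not calibrated ones:

```python
def distort(x, y, k1, k2, p1, p2, k3=0.0, k4=0.0):
    """Apply the radial (formulas (4)-(5)) and tangential (formulas (6)-(7))
    distortion model to an undistorted image-plane point (x, y)."""
    r2 = x * x + y * y                      # r^2, with r = sqrt(x^2 + y^2)
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3 + k4 * r2**4
    x0 = x * radial + (2 * p1 * x * y + p2 * (r2 + 2 * x * x))
    y0 = y * radial + (p1 * (r2 + 2 * y * y) + 2 * p2 * x * y)
    return x0, y0

# A point at the principal point is not displaced at all
print(distort(0.0, 0.0, k1=-0.1, k2=0.01, p1=1e-3, p2=1e-3))  # (0.0, 0.0)
```

Note the model is a forward map (ideal point to distorted point); undistorting a measured point requires inverting it iteratively, which calibration libraries do internally.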
FIG. 14 shows a tangential distortion diagram of a lens. The distortion displacement is roughly symmetrical about the line connecting the lower-left and upper-right corners, indicating that the lens is rotated about an axis perpendicular to this direction.
c. Monocular calibration of main and auxiliary cameras
Fig. 15 shows the dual-camera model provided in this application (Translation is marked in the figure). Based on the aforementioned pinhole camera model, we have:

$$Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = P_1 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}, \qquad Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = P_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

where (u_1, v_1) are the coordinates of point p in the left pixel coordinate system and (u_2, v_2) its coordinates in the right pixel coordinate system; Z_c1 (u_1, v_1, 1)^T are the corresponding coordinates in the left camera coordinate system and Z_c2 (u_2, v_2, 1)^T those in the right camera coordinate system; P_1 is the projection matrix of the left camera and P_2 the projection matrix of the right camera.
By combining the aforementioned image distortion correction technique, the intrinsic and extrinsic parameters and the distortion parameters of the main and auxiliary cameras can be obtained.
Further, after the sub-step S53, the method further includes a sub-step S54:
and (4) calculating the re-projection corner points of the checkerboard corner points according to the internal and external parameters and the distortion coefficients of the main camera calculated in the substep S53, and then calculating the monocular calibration error of the main camera according to the actual three-dimensional corner point coordinates and the re-projection corner point coordinates.
And (4) calculating the re-projection corner points of the checkerboard corner points according to the internal and external parameters and the distortion coefficients of the auxiliary camera calculated in the substep S53, and then calculating the monocular calibration error of the auxiliary camera according to the actual three-dimensional corner point coordinates and the re-projection corner point coordinates.
Specifically, calculating the monocular calibration errors of the main and auxiliary cameras has two functions: first, it serves as a measurement standard for deciding whether the calibration is accurate; second, it can serve as a reference standard during debugging.
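The calibration error of sub-step S54 is typically the RMS pixel distance between the detected corners and the corners reprojected through the calibrated parameters. A minimal sketch of that metric (the point sets below are made up; this is not the patent's implementation):

```python
import math

def rms_reprojection_error(detected, reprojected):
    """RMS pixel distance between detected corner coordinates and the
    corners reprojected through the calibrated intrinsics/extrinsics."""
    assert len(detected) == len(reprojected)
    sq = [(dx - rx) ** 2 + (dy - ry) ** 2
          for (dx, dy), (rx, ry) in zip(detected, reprojected)]
    return math.sqrt(sum(sq) / len(sq))

detected    = [(100.0, 100.0), (200.0, 100.0), (100.0, 200.0)]
reprojected = [(100.3,  99.6), (199.8, 100.1), (100.0, 200.5)]
print(round(rms_reprojection_error(detected, reprojected), 4))  # 0.4282
```

A well-calibrated phone module typically yields a sub-pixel RMS error; a large value signals bad corner detection or a failed calibration.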
Specifically, in step S70, the main-auxiliary stereo calibration requires the following calibration parameters: the main and auxiliary intrinsic parameter matrices, the main and auxiliary distortion coefficient matrices, the rotation matrix R, the translation matrix T, the essential matrix E, and the fundamental matrix F (the intrinsic parameter matrices and distortion coefficient matrices were already obtained in the preceding monocular calibration).
The main difference between calibrating a binocular camera and calibrating a monocular camera is that the binocular camera additionally requires calibrating the relative relationship between the left and right camera coordinate systems, that is, obtaining the rotation matrix R, the translation matrix T, the essential matrix E, and the fundamental matrix F, as follows:
1) rotation matrix R and translation matrix T
As shown in fig. 15, the relative relationship between the left and right camera coordinate systems is described by a rotation matrix R and a translation matrix T; specifically, a world coordinate system is established on the left camera.
Suppose there is a point P in space whose coordinates in the world coordinate system are P_w(x_w, y_w, z_w). Its coordinates in the left and right camera coordinate systems can be expressed as:

$$P_l = R_l P_w + T_l, \qquad P_r = R_r P_w + T_r$$

P_l and P_r are additionally related by:

$$P_r = R P_l + T \quad (8)$$

Combining the above formulas, it can be deduced that:

$$R = R_r R_l^{-1} = R_r R_l^T, \qquad T = T_r - R T_l \quad (9)$$

where P_l is the coordinate of point P in the left camera coordinate system and P_r its coordinate in the right camera coordinate system; R_l, T_l are the rotation and translation matrices of the left camera relative to the calibration object, obtained by monocular calibration; R_r, T_r are the rotation and translation matrices of the right camera relative to the calibration object, obtained by monocular calibration; the superscript "-1" denotes the matrix inverse, and the superscript "T" denotes the matrix transpose.
After the left and right cameras are each calibrated monocularly, R_l, T_l, R_r, T_r are obtained; substituting them into equation (9) yields the rotation matrix R and the translation matrix T between the left and right cameras.
Note that the above derivation uses

$$R_l^{-1} = R_l^T$$

because the rotation matrices R, R_l, R_r are all unitary orthogonal matrices, and the inverse of an orthogonal matrix equals its transpose.
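The relations R = R_r R_l^T and T = T_r − R T_l can be checked numerically. The sketch below uses arbitrary made-up poses for the two cameras relative to the calibration object and verifies that the recovered R, T map left-camera coordinates to right-camera coordinates:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Made-up monocular extrinsics of the left/right cameras w.r.t. the calibration object
Rl, Tl = rot_z(0.1), np.array([[10.0], [0.0], [500.0]])
Rr, Tr = rot_z(0.3), np.array([[-50.0], [5.0], [510.0]])

# Inter-camera rotation and translation (equation (9))
R = Rr @ Rl.T                 # R = R_r R_l^T (orthogonal: inverse == transpose)
T = Tr - R @ Tl               # T = T_r - R T_l

# Consistency check against P_r = R P_l + T for a sample world point
Pw = np.array([[30.0], [-20.0], [100.0]])
Pl, Pr = Rl @ Pw + Tl, Rr @ Pw + Tr
print(np.allclose(Pr, R @ Pl + T))  # True
```

The check holds for any world point, because the two rigid transforms compose exactly as the derivation states.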
All the parameters required for monocular calibration must also be calibrated for each eye of the binocular camera; in addition, the binocular camera requires parameters that the monocular camera does not: the rotation matrix R and the translation matrix T, which describe the relative position of the two cameras and are very useful in stereo rectification and epipolar geometry.
2) Essential matrix E and fundamental matrix F
a. Epipolar geometry
Before solving for the essential matrix E and the fundamental matrix F, epipolar geometry must first be discussed.
As shown in fig. 16, the intersection of the line connecting the two projection centers with each projection plane is called an epipole, and the dashed line connecting a projection point with the epipole is called an epipolar line. The two epipolar lines are evidently coplanar, and the plane in which they lie is called the epipolar plane.
From geometric deduction (which is straightforward here, so the derivation is omitted), we conclude that:
each three-dimensional point in the camera view (point P in the figure) is contained in a polar plane that intersects each image. The line resulting from the intersection of both the camera view and the polar plane is the epipolar line.
Given a feature point on one image, a matching point in the other image must lie on the corresponding epipolar line, a constraint referred to as the "epipolar constraint".
The epipolar constraint means that once we know the epipolar geometry of the stereo rig, the two-dimensional search for matching features between the two images can be reduced to a one-dimensional search along the epipolar lines (when applying the triangulation principle). This not only saves significant computational cost but also helps eliminate many points that would produce false matches.
b. Solving for the essential matrix E and the fundamental matrix F
As shown in fig. 17, the essential matrix E contains the information about the Translation and Rotation of the two cameras in physical space. The fundamental matrix F contains the same information and, in addition, the intrinsic parameters of the two cameras, so it relates the two cameras in pixel coordinates.
The derivation of the essential and fundamental matrices is discussed below. The idea is to start from the known epipolar plane and relate everything to the essential matrix E and the fundamental matrix F.
Referring to fig. 16, take the left camera coordinate system as the reference frame. For the corresponding points P_l and P_r on the left and right projection planes, the translation vector T and the rotation matrix R satisfy:

$$P_r = R (P_l - T) \quad (12)$$

$$P_l - T = R^T P_r \quad (13)$$

where P_l and P_r are the vector forms of the left and right projections of a point P in space onto the projection planes of the left and right cameras, R is the rotation from the left camera to the right camera, and T is the translation between them; the superscript "-1" denotes the inverse of a matrix, and the superscript "T" denotes its transpose.
For the epipolar plane with normal vector $\vec{n}$, let $\vec{x}$ be the vector of an arbitrary point on the plane and $\vec{a}$ the vector of a fixed point on the plane. Since the normal is perpendicular to every vector lying in the plane:

$$\vec{n} \cdot (\vec{x} - \vec{a}) = 0 \quad (14)$$

The normal vector can be constructed as a cross product (a cross product yields a perpendicular vector). Taking $\vec{a} = T$ and $\vec{n} = T \times P_l$, equation (14) can be rewritten as:

$$(P_l - T)^T (T \times P_l) = 0 \quad (15)$$
equation (15) can be transformed according to equation (13) as follows:
Figure BDA0003277630210000171
converting the cross product into matrix multiplication knowledge according to line generation cross product, and writing the cross product into a matrix multiplication form:
Figure BDA0003277630210000172
wherein S is a transformation form of the translation matrix T.
An important conclusion is reached by substituting this formula (17) condition back to formula (16):
Figure BDA0003277630210000173
the dot product R · S is the final conclusion of the definition of the eigenmatrix E (E ═ R · S):
Figure BDA0003277630210000174
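The epipolar constraint P_r^T E P_l = 0 can be verified numerically. The sketch below builds E = R·S from an arbitrary made-up rotation and translation, generates a corresponding point pair with P_r = R(P_l − T), and checks the constraint:

```python
import numpy as np

def skew(t):
    """The S matrix of equation (17): skew(t) @ p == np.cross(t, p)."""
    tx, ty, tz = t
    return np.array([[0.0, -tz, ty],
                     [tz, 0.0, -tx],
                     [-ty, tx, 0.0]])

# Arbitrary relative pose: rotation about z, baseline mostly along x
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([0.5, 0.0, 0.0])

E = R @ skew(T)                  # essential matrix, E = R S

Pl = np.array([0.3, -0.2, 2.0])  # a point expressed in the left camera frame
Pr = R @ (Pl - T)                # the same point in the right camera frame
print(abs(Pr @ E @ Pl) < 1e-12)  # epipolar constraint holds: True
```

The check works for any choice of R, T, and P_l, because P_r^T R S P_l reduces to (P_l − T)·(T × P_l), which is identically zero.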
in practice, what is needed is an observed point on the projection plane, which can be represented by the projection formula:
Figure BDA0003277630210000175
and
Figure BDA0003277630210000176
the final conclusion of equation (20) may become the actual use conclusion:
Figure BDA0003277630210000177
wherein the content of the first and second substances,
Figure BDA0003277630210000178
respectively in the form of vectors of coordinates of projection points corresponding to the left and right image coordinate systems, fl、frIs the focal length of the left and right cameras, zl、zrThe values of the z component of the projection point corresponding to the left camera coordinate system and the right camera coordinate system are respectively.
It would appear that a point imaged in the left camera could be mapped through the essential matrix E to a point on the other side, but E is rank-deficient (a 3x3 matrix of rank 2), so it maps a point only to a line.
The essential matrix E contains all the geometric information relating the two cameras, but not the cameras' intrinsic parameters: the vectors p_l, p_r in the derivation above are points only in the geometric (physical) sense. They are related to pixel points by:

$$q = M p \quad (22)$$

where q is the vector form of the pixel coordinate system coordinates, p is the vector form of the corresponding image coordinate system coordinates, and M is the intrinsic matrix.
Substituting the main- and auxiliary-camera instances of formula (22) into formula (21) gives:

$$q_r^T (M_r^{-1})^T E M_l^{-1} q_l = 0 \quad (23)$$

The middle part of equation (23) is the fundamental matrix F:

$$F = (M_r^{-1})^T E M_l^{-1} \quad (24)$$
in actual use, the formula containing the basis matrix F is:
Figure BDA0003277630210000183
wherein the content of the first and second substances,
Figure BDA0003277630210000184
respectively in the form of vectors of projection point coordinates corresponding to the left and right pixel coordinate systems; ml、MrRespectively, the internal reference matrix of the left camera and the internal reference matrix of the right camera.
The fundamental matrix F differs from the essential matrix E in that F operates in image pixel coordinates while E operates in the physical coordinate system. Like the essential matrix E, the fundamental matrix F is a 3x3 matrix of rank 2.
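The relation F = (M_r^{-1})^T E M_l^{-1} can also be exercised numerically. With a made-up relative pose and made-up intrinsic matrices M_l, M_r, the fundamental matrix built from E satisfies the pixel-domain constraint q_r^T F q_l = 0 for corresponding pixel points:

```python
import numpy as np

def skew(t):
    tx, ty, tz = t
    return np.array([[0.0, -tz, ty], [tz, 0.0, -tx], [-ty, tx, 0.0]])

# Arbitrary relative pose and illustrative intrinsics (not calibrated values)
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([0.5, 0.0, 0.0])
E = R @ skew(T)                                   # essential matrix E = R S
Ml = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Mr = np.array([[810.0, 0.0, 318.0], [0.0, 810.0, 242.0], [0.0, 0.0, 1.0]])

F = np.linalg.inv(Mr).T @ E @ np.linalg.inv(Ml)   # fundamental matrix

# Corresponding pixel points of one 3D point seen by both cameras
Pl = np.array([0.3, -0.2, 2.0])                   # point in the left camera frame
Pr = R @ (Pl - T)                                 # same point in the right frame
ql = Ml @ (Pl / Pl[2])                            # homogeneous pixel coords (left)
qr = Mr @ (Pr / Pr[2])                            # homogeneous pixel coords (right)
print(abs(qr @ F @ ql) < 1e-8)                    # pixel-domain constraint: True
```

The intrinsic matrices cancel against their inverses, so the pixel-domain constraint inherits the zero of the essential-matrix constraint.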
Specifically, referring to fig. 18, the method further includes step S90 after step S70: performing stereo rectification on the main and auxiliary cameras using the Bouguet algorithm.
The main and auxiliary intrinsic matrices, the main and auxiliary distortion matrices, the extrinsic matrices, the essential matrix, and the fundamental matrix obtained by monocular and stereo calibration determine the correlation between the geometric position of a point on the surface of a spatial object in three-dimensional space and its corresponding point in the image. In this step, on the basis of the previous step, an approximately ideal stereo rig is obtained by stereo rectification, and parallax (disparity) and depth information are calculated by triangulation. Triangulation and stereo rectification are described below.
1) Triangulation
Fig. 19 shows an ideal dual-camera stereo rig: the images are undistorted, the image planes are coplanar, the optical axes are parallel, the focal lengths are equal, and the principal points are calibrated to the same position. Furthermore, the cameras are assumed to be frontally parallel aligned, i.e., each pixel row is precisely aligned with the corresponding pixel row of the other camera, so that a point P in the physical world has image points on the same row in the left and right images.
The depth Z can then be obtained from the similarity of similar triangles:

$$Z = \frac{f B}{d}, \qquad d = x_l - x_r \quad (26)$$

where x_l, x_r are the abscissae of the projections of a point p in space in the left and right image coordinate systems respectively, d = x_l − x_r is the disparity, B is the baseline (the distance between the origins of the left and right image coordinate systems), f is the focal length, and Z is the depth.
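A quick numeric instance of the depth formula Z = fB/d, with illustrative values for a phone-scale stereo module (focal length in pixels, baseline in mm, chosen for the example):

```python
def depth_from_disparity(f_px, baseline_mm, xl_px, xr_px):
    """Z = f * B / d, with disparity d = xl - xr.
    f in pixels and baseline in mm yield the depth in mm."""
    d = xl_px - xr_px
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_mm / d

# f = 800 px, baseline = 60 mm, disparity = 24 px  ->  Z = 2000 mm
print(depth_from_disparity(800.0, 60.0, 520.0, 496.0))  # 2000.0
```

Note the inverse relationship: depth resolution degrades quadratically with distance, which is why phone depth modules work best at close range.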
2) Stereo rectification
As shown in fig. 15, an actual dual-camera rig has no ideal frontally parallel row alignment, so the objective of stereo rectification is to remap the image planes of the two cameras so that they lie in exactly the same plane, with the image rows exactly aligned in a frontally parallel configuration. This yields an approximately ideal stereo rig (as shown in fig. 20, where Principal Ray denotes the principal ray) on which triangulation can be used to compute disparity and depth information.
a. Bouguet algorithm
There are two common stereo rectification algorithms: (1) the Hartley algorithm, which uses only the fundamental matrix and produces uncalibrated stereo rectification; (2) the Bouguet algorithm, which uses the rotation and translation parameters of the two calibrated cameras. The Bouguet algorithm is the one generally used, and only it is discussed here.
From stereo calibration, the rotation matrix R and the translation matrix T are related to the per-camera monocular rotation matrices R_l, R_r and translation matrices T_l, T_r by:

$$R = R_r R_l^T \quad (27)$$

$$T = T_r - R T_l \quad (28)$$
The concrete steps of the Bouguet algorithm are as follows:
s901: cutting the rotation matrix R of the stereo-calibrated right image plane relative to the left image plane into two halves, and rotating the left image by half RlRight image rotated by half rr(ii) a Thus, the reprojection distortion is smaller, and the common area of the left view and the right view is the largest. Wherein
Figure BDA0003277630210000192
Figure BDA0003277630210000193
Is that
Figure BDA0003277630210000194
The matrix is the inverse of the mean square matrix,
Figure BDA0003277630210000195
and
Figure BDA0003277630210000196
called the composite rotation matrix of the left and right cameras.
S902: the imaging planes of the left and right cameras are now parallel, but the epipolar lines are not aligned in parallel. To align epipolar lines in parallel, we construct a unity orthogonal transformation matrix RrectThe left camera pole is transformed to infinity and the epipolar lines are aligned in parallel.
The unit orthogonal transformation matrix R_rect that transforms the epipole to infinity is constructed as:

$$R_{rect} = \begin{bmatrix} \vec{e}_1^{\,T} \\ \vec{e}_2^{\,T} \\ \vec{e}_3^{\,T} \end{bmatrix} \quad (29)$$

where $\vec{e}_1, \vec{e}_2, \vec{e}_3$ are the three unit vectors from which R_rect is constructed.

The translation vector between the projection centers of the left and right cameras points in the direction of the left epipole, so $\vec{e}_1$ is constructed as:

$$\vec{e}_1 = \frac{T}{\|T\|} \quad (30)$$

The vector $\vec{e}_2$ should be orthogonal to $\vec{e}_1$; a good choice is the direction along the image plane and orthogonal to the optical axis:

$$\vec{e}_2 = \frac{[-T_y, T_x, 0]^T}{\sqrt{T_x^2 + T_y^2}} \quad (31)$$

The vector $\vec{e}_3$ is obtained by a cross-product construction:

$$\vec{e}_3 = \vec{e}_1 \times \vec{e}_2$$
s903: unit orthogonal transformation matrix RrectAfter construction, the row alignment can be achieved by finally transforming the image plane using the following matrix. The rotation matrix finally obtained by the stereo correction is as follows:
Rl′=Rrect·rlformula (32)
R′r=Rrect·rrFormula (33)
Wherein r isl、rrIs a composite rotation matrix of left and right cameras, Rl′、R′rThe final rotation matrixes of the left camera and the right camera after the stereo correction are respectively.
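Steps S901-S903 can be sketched numerically. The toy example below assumes a rotation about the z axis (so the half-rotation R^(1/2) is simply half the angle) and a made-up baseline mostly along x; it builds r_l, r_r, and R_rect and checks that the construction is consistent:

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

theta = 0.2
R = rot_z(theta)                 # stereo-calibrated right-vs-left rotation
rl = rot_z(theta / 2.0)          # S901: r_l = R^(1/2)
rr = rot_z(-theta / 2.0)         # S901: r_r = R^(-1/2) = r_l^(-1)

T = np.array([-60.0, 1.5, 0.8])  # translation between projection centers (mm)
e1 = T / np.linalg.norm(T)                                # e1 along the epipole
e2 = np.array([-T[1], T[0], 0.0]) / np.hypot(T[0], T[1])  # in-plane, orthogonal to e1
e3 = np.cross(e1, e2)                                     # completes the basis
R_rect = np.vstack([e1, e2, e3])                          # rows e1^T, e2^T, e3^T

Rl_final = R_rect @ rl           # final left rectification rotation
Rr_final = R_rect @ rr           # final right rectification rotation

print(np.allclose(rl @ rl, R))                    # halves compose back to R: True
print(np.allclose(R_rect @ R_rect.T, np.eye(3)))  # R_rect is unit orthogonal: True
```

For a general rotation, R^(1/2) is computed via the axis-angle (Rodrigues) representation by halving the rotation angle; the single-axis case above keeps the arithmetic transparent.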
b. Reprojection matrix
In addition to computing the final rotation matrices R_l' and R_r', dual-camera stereo rectification also computes, for applications such as three-dimensional reconstruction, the projection matrices of the main and auxiliary cameras, their effective rectangular regions, the reprojection matrix, and so on. Here only a simple derivation of the reprojection matrix is given.
The reprojection matrix maps two-dimensional points in the image plane back to three-dimensional coordinates in the physical world. From triangulation and the similar-triangle theorems we have:

$$x = f \frac{X}{Z} + c_x, \qquad y = f \frac{Y}{Z} + c_y \quad (34)$$

where (X, Y, Z) are coordinates in the world coordinate system, (x, y) are coordinates in the image coordinate system, and (c_x, c_y) is the optical center (principal point) of the camera.

Let d = x_l − x_r be the disparity. Then:

$$Q \begin{bmatrix} x \\ y \\ d \\ 1 \end{bmatrix} = \begin{bmatrix} X' \\ Y' \\ Z' \\ W' \end{bmatrix} \quad (35)$$

where (X', Y', Z') are the unnormalized reprojection coordinates, W is the normalization parameter during the transformation and W' its final value, and c_x' is the x coordinate of the principal point in the right image. The reprojection matrix Q is defined as:

$$Q = \begin{bmatrix} 1 & 0 & 0 & -c_x \\ 0 & 1 & 0 & -c_y \\ 0 & 0 & 0 & f \\ 0 & 0 & -1/T_x & (c_x - c_x')/T_x \end{bmatrix} \quad (36)$$

where T_x is the x component of the translation between the two cameras (for the frontally parallel rig, T_x = −B). Finally, the normalized reprojected three-dimensional coordinates (X'/W', Y'/W', Z'/W') are obtained.
It should be understood that although the steps in the flowcharts of figs. 3, 7, and 18 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict ordering restriction on these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 3, 7, and 18 may comprise multiple sub-steps or stages, which need not be performed at the same moment but may be performed at different times, and which need not be performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
As shown in fig. 21, as another aspect, the present application provides a calibration system 100 for a binocular camera, including:
a calibration board design module 110 configured to design a checkerboard as a calibration board;
the image acquisition module 120 is configured to divide the binocular camera into a main camera and an auxiliary camera, and the main camera and the auxiliary camera are used for respectively acquiring checkerboard pictures on the calibration plate and respectively obtaining a main shot picture and an auxiliary shot picture correspondingly;
the monocular calibration module 130 is configured to perform monocular calibration on the main camera according to the main shot picture to obtain a main camera internal and external parameter matrix and a main camera distortion coefficient matrix; the auxiliary camera monocular calibration is carried out according to the auxiliary shot picture, and an internal and external parameter matrix of the auxiliary camera and a distortion coefficient matrix of the auxiliary camera are obtained;
and a stereo calibration module 140 configured to perform stereo calibration on the main camera and the auxiliary camera to obtain a rotation matrix R, a translation matrix T, an essential matrix E and a fundamental matrix F.
In the embodiment of the present application, the monocular calibration module is arranged before the stereo calibration module: prior to the binocular stereo calibration, the main camera and the auxiliary camera are each calibrated monocularly to obtain the internal parameters, external parameters and distortion parameters of every single camera, so that the internal-parameter error is small and the binocular calibration result is relatively stable.
Specifically, the calibration board design module 110 is configured to design the calibration board, where the calibration board includes at least four checkerboards, and each checkerboard in the calibration board differs in angular orientation and position posture. With the calibration board designed by the calibration board design module 110, more than four checkerboards can be captured in a single shot for binocular calibration, which improves calibration efficiency and reduces the risk of calibration failure.
Specifically, the image acquisition module 120 is configured to, when the field angles of the two cameras differ, designate the camera with the larger field angle as the auxiliary camera and the camera with the smaller field angle as the main camera, and to acquire the checkerboard pictures on the calibration board with the auxiliary camera as the reference, taking the pictures acquired by the auxiliary camera as the auxiliary shot pictures and the pictures acquired by the main camera as the main shot pictures. The image acquisition module keeps the pictures acquired by the large-FOV camera at the preset pattern size with the checkerboards filling the frame as much as possible; the pictures acquired by the small-FOV camera need not keep the preset pattern size, but the checkerboards should still fill the frame as much as possible.
Specifically, referring to fig. 22, the monocular calibration module 130 includes:
the acquisition unit 131 is configured to detect checkerboard and checkerboard corner points in the main shot picture and the auxiliary shot picture, respectively, by using a growth-based checkerboard corner point detection algorithm.
The reordering unit 132 is configured to reorder the checkerboard corner points and the checkerboard corner points in the detected main shot image and the detected sub shot picture respectively, and correspondingly obtain the checkerboard corner points and the checkerboard corner points of the predetermined pattern sequence of the main shot picture and the checkerboard corner points of the predetermined pattern sequence of the sub shot picture.
The double-shot calibration algorithm unit 133 is configured to align the checkerboard and checkerboard corner points of the predetermined pattern sequence of the main shot picture regularly, and then calculate the internal and external parameters and distortion coefficients of the main camera to obtain a main shot internal and external parameter matrix and a main shot distortion coefficient matrix.
The bi-camera calibration algorithm unit 133 is further configured to calculate the inside and outside parameters and distortion coefficients of the sub-camera to obtain a sub-camera inside and outside parameter matrix and a main camera distortion coefficient matrix, where the checkerboard and the checkerboard corner points are used for defining a predetermined pattern sequence of the sub-camera picture.
Furthermore, the monocular calibration module further comprises a monocular calibration error calculation unit. The monocular calibration error calculation unit is configured to calculate reprojected checkerboard corner points according to the internal and external parameters and distortion coefficients of the main camera, and then calculate the monocular calibration error of the main camera from the actual three-dimensional corner coordinates and the reprojected corner coordinates.

The monocular calibration error calculation unit is further configured to calculate reprojected checkerboard corner points according to the internal and external parameters and distortion coefficients of the auxiliary camera, and then calculate the monocular calibration error of the auxiliary camera from the actual three-dimensional corner coordinates and the reprojected corner coordinates.
Specifically, referring to fig. 23, the calibration system of the binocular camera further includes a stereo correction module 150, which is configured to perform stereo correction on the main camera and the auxiliary camera by using the Bouguet algorithm. The Bouguet algorithm itself is described above in the calibration method of the binocular camera.
Each module in the calibration system of the binocular camera may be implemented wholly or partially in software, in hardware, or in a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, an electronic device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 24. The electronic device comprises a processor, a memory, a communication interface, a display screen and an input device connected through a system bus. The processor of the electronic device provides computing and control capabilities. The memory of the electronic device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the electronic device is used for wired or wireless communication with an external terminal; the wireless communication may be realized through Wi-Fi, an operator network, Near Field Communication (NFC) or other technologies. The computer program, when executed by the processor, implements the calibration method of the binocular camera. The display screen of the electronic device may be a liquid-crystal display or an electronic-ink display, and the input device may be a touch layer covering the display screen, a key, trackball or touchpad arranged on the housing of the electronic device, or an external keyboard, touchpad or mouse.
Those skilled in the art will appreciate that the structure shown in fig. 24 is merely a block diagram of a partial structure related to the solution of the present application and does not limit the computing device to which the solution is applied; a particular computing device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the calibration system of the binocular camera provided by the present application may be implemented in the form of a computer program executable on a computer device such as that shown in fig. 24. The memory of the computer device may store the program modules constituting the calibration system of the binocular camera, such as the calibration board design module, the image acquisition module, the monocular calibration module and the stereo calibration module shown in fig. 21. These program modules constitute a computer program that causes the processor to execute the steps of the calibration method of the binocular camera described in the embodiments of the present application.
For example, the computer apparatus shown in fig. 24 may perform step S10 by the calibration board design module in the calibration system of the binocular camera shown in fig. 21. The computer device may perform step S30 through the image acquisition module. The computer apparatus may perform step S50 through the monocular calibration module. The computer device may perform step S70 through the stereo calibration module.
In one embodiment, an electronic device is further provided, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is also provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that several variations and modifications may be made by those of ordinary skill in the art without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. The calibration method of the binocular camera is characterized by comprising the following steps:
s10: taking the checkerboard as a calibration board;
s30: dividing a binocular camera into a main camera and an auxiliary camera, respectively acquiring checkerboard pictures on a calibration plate by adopting the main camera and the auxiliary camera, and respectively obtaining a main shot picture and an auxiliary shot picture correspondingly;
s50: performing monocular calibration on a main camera according to the main shot picture to obtain an internal and external parameter matrix of the main camera and a distortion coefficient matrix of the main camera;
performing monocular calibration on the auxiliary camera according to the auxiliary shot picture to obtain an internal and external parameter matrix of the auxiliary camera and a distortion coefficient matrix of the auxiliary camera;
s70: performing stereo calibration on the main camera and the auxiliary camera to obtain a rotation matrix R, a translation matrix T, an essential matrix E and a fundamental matrix F.
2. The binocular camera calibration method according to claim 1, wherein in step S10, the calibration board comprises at least four checkerboards, and each checkerboard in the calibration board differs in angular orientation and position posture.
3. The binocular camera calibration method according to claim 1, wherein in step S30, when the angles of view of the binocular cameras are different, the camera with the large angle of view is divided into a sub-camera, and the camera with the small angle of view is divided into a main camera;
and acquiring checkerboard pictures on the calibration board by taking the auxiliary camera as a reference, taking the checkerboard pictures on the calibration board acquired by the auxiliary camera as auxiliary shooting pictures, and taking the checkerboard pictures on the calibration board acquired by the main camera as main shooting pictures.
4. The binocular camera calibration method according to claim 1, wherein the step S50 includes the following substeps:
s51: detecting the checkerboards and checkerboard corner points in the collected main shot picture and auxiliary shot picture, respectively, by using a growth-based checkerboard corner detection algorithm;
s52: reordering the detected checkerboards and checkerboard corner points in the main shot picture and the auxiliary shot picture, respectively, to correspondingly obtain the checkerboards and checkerboard corner points of the predetermined pattern sequence of the main shot picture and those of the predetermined pattern sequence of the auxiliary shot picture;
s53: regularly aligning the checkerboards and checkerboard corner points of the predetermined pattern sequence of the main shot picture, and then calculating the internal and external parameters and distortion coefficients of the main camera to obtain a main-camera internal and external parameter matrix and a main-camera distortion coefficient matrix;
and calculating the internal and external parameters and distortion coefficients of the auxiliary camera to obtain an auxiliary-camera internal and external parameter matrix and an auxiliary-camera distortion coefficient matrix.
5. The binocular camera calibration method according to claim 4, wherein in the substep S53, a binocular calibration algorithm is used to calculate the inside and outside parameters and distortion coefficients of the main camera and the inside and outside parameters and distortion coefficients of the sub-camera, respectively; the distortion coefficients include a radial distortion coefficient and a tangential distortion coefficient.
6. The binocular camera calibration method according to claim 1, further comprising, after step S70, a step S90 of performing stereo correction on the main camera and the auxiliary camera by using the Bouguet algorithm.
7. Calibration system of binocular camera, its characterized in that includes:
the calibration plate design module is configured for designing a checkerboard as a calibration plate;
the image acquisition module is configured for dividing the binocular camera into a main camera and an auxiliary camera, and adopting the main camera and the auxiliary camera to respectively acquire checkerboard pictures on the calibration plate and respectively obtain a main shot picture and an auxiliary shot picture correspondingly;
the monocular calibration module is configured to perform monocular calibration on the main camera according to the main shot picture to obtain a main-camera internal and external parameter matrix and a main-camera distortion coefficient matrix, and to perform monocular calibration on the auxiliary camera according to the auxiliary shot picture to obtain an auxiliary-camera internal and external parameter matrix and an auxiliary-camera distortion coefficient matrix;
and a stereo calibration module configured to perform stereo calibration on the main camera and the auxiliary camera to obtain a rotation matrix R, a translation matrix T, an essential matrix E and a fundamental matrix F.
8. The binocular camera calibration system of claim 7, further comprising a stereo correction module configured to perform stereo correction on the main camera and the auxiliary camera by using the Bouguet algorithm.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the calibration method for binocular cameras according to any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the calibration method for binocular cameras according to any one of claims 1 to 6.
CN202111122839.9A 2021-09-24 2021-09-24 Calibration method and system of binocular camera, electronic equipment and storage medium Pending CN113808220A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111122839.9A CN113808220A (en) 2021-09-24 2021-09-24 Calibration method and system of binocular camera, electronic equipment and storage medium
PCT/CN2021/140186 WO2023045147A1 (en) 2021-09-24 2021-12-21 Method and system for calibrating binocular camera, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111122839.9A CN113808220A (en) 2021-09-24 2021-09-24 Calibration method and system of binocular camera, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113808220A true CN113808220A (en) 2021-12-17

Family

ID=78940401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111122839.9A Pending CN113808220A (en) 2021-09-24 2021-09-24 Calibration method and system of binocular camera, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113808220A (en)
WO (1) WO2023045147A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830118A (en) * 2022-12-08 2023-03-21 重庆市信息通信咨询设计院有限公司 Crack detection method and system for cement electric pole based on binocular camera
WO2023045147A1 (en) * 2021-09-24 2023-03-30 上海闻泰电子科技有限公司 Method and system for calibrating binocular camera, and electronic device and storage medium
CN116030145A (en) * 2023-03-23 2023-04-28 北京中科慧眼科技有限公司 Stereo matching method and system for binocular lenses with different focal lengths

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN116721339B (en) * 2023-04-24 2024-04-30 广东电网有限责任公司 Method, device, equipment and storage medium for detecting power transmission line
CN116862999B (en) * 2023-09-04 2023-12-08 华东交通大学 Calibration method, system, equipment and medium for three-dimensional measurement of double cameras
CN117190875A (en) * 2023-09-08 2023-12-08 重庆交通大学 Bridge tower displacement measuring device and method based on computer intelligent vision
CN117152274B (en) * 2023-11-01 2024-02-09 三一重型装备有限公司 Pose correction method and system for binocular camera of heading machine and readable storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US8655094B2 (en) * 2011-05-11 2014-02-18 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Photogrammetry system and method for determining relative motion between two bodies
CN108053450B (en) * 2018-01-22 2020-06-30 浙江大学 High-precision binocular camera calibration method based on multiple constraints
CN110969668B (en) * 2019-11-22 2023-05-02 大连理工大学 Stereo calibration algorithm of long-focus binocular camera
CN111383194B (en) * 2020-03-10 2023-04-21 江苏科技大学 Polar coordinate-based camera distortion image correction method
CN112634374B (en) * 2020-12-18 2023-07-14 杭州海康威视数字技术股份有限公司 Stereoscopic calibration method, device and system for binocular camera and binocular camera
CN113808220A (en) * 2021-09-24 2021-12-17 上海闻泰电子科技有限公司 Calibration method and system of binocular camera, electronic equipment and storage medium

Cited By (4)

Publication number Priority date Publication date Assignee Title
WO2023045147A1 (en) * 2021-09-24 2023-03-30 上海闻泰电子科技有限公司 Method and system for calibrating binocular camera, and electronic device and storage medium
CN115830118A (en) * 2022-12-08 2023-03-21 重庆市信息通信咨询设计院有限公司 Crack detection method and system for cement electric pole based on binocular camera
CN115830118B (en) * 2022-12-08 2024-03-19 重庆市信息通信咨询设计院有限公司 Crack detection method and system for cement electric pole based on binocular camera
CN116030145A (en) * 2023-03-23 2023-04-28 北京中科慧眼科技有限公司 Stereo matching method and system for binocular lenses with different focal lengths

Also Published As

Publication number Publication date
WO2023045147A1 (en) 2023-03-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination