WO2023045147A1 - Binocular camera calibration method and system, electronic device, and storage medium - Google Patents


Info

Publication number
WO2023045147A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
checkerboard
calibration
main
sub
Prior art date
Application number
PCT/CN2021/140186
Other languages
English (en)
French (fr)
Inventor
高永基 (Gao Yongji)
Original Assignee
上海闻泰电子科技有限公司 (Shanghai Wingtech Electronic Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海闻泰电子科技有限公司 (Shanghai Wingtech Electronic Technology Co., Ltd.)
Publication of WO2023045147A1 publication Critical patent/WO2023045147A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras

Definitions

  • the present disclosure relates to a binocular camera calibration method, system, electronic equipment and storage medium.
  • the calibration of camera parameters is a very critical link.
  • the accuracy of the calibration results and the stability of the algorithm directly affect the accuracy of the results produced by the camera; good camera calibration is therefore a prerequisite for all follow-up work, and improving calibration accuracy is a focus of both research and production.
  • Smartphones increasingly use combinations of multiple cameras, where any two cameras can be paired as a dual camera to enable image-quality improvement, background blur, optical zoom, 3D reconstruction and other functions.
  • the key technology of the multi-camera technology solution for smartphones is the dual-camera technology solution
  • dual-camera calibration is the key link of the dual-camera technology solution, so its importance in the multi-camera technology solution for smartphones is increasingly prominent.
  • Dual-camera calibration means that in the process of image measurement and machine vision applications, in order to determine the relationship between the geometric position of a point on the surface of a space object in three-dimensional space and its corresponding point in the image, a geometric model of camera imaging must be established.
  • These geometric model parameters are camera parameters (internal parameters, external parameters, distortion parameters). In most cases, these parameters must be obtained through experiments and calculations.
  • This process of solving geometric model parameters is called camera calibration (or camera calibration).
  • so-called dual-camera calibration is the process of calibrating two cameras in this way.
  • the current dual-camera calibration technology directly performs stereo calibration of the main and secondary cameras and directly obtains their internal and external parameters.
  • the internal parameters obtained in this way have large errors and unstable calibration results.
  • the current dual-camera calibration technology has large internal reference errors and unstable calibration results.
  • a calibration method, system, electronic device and storage medium of a binocular camera are provided.
  • a calibration method for a binocular camera comprising the following steps:
  • S30 Divide the binocular camera into a main camera and a sub-camera, use each camera to collect checkerboard pictures of the calibration board, and correspondingly obtain a main picture and a sub picture;
  • S50 Perform monocular calibration of the main camera from the main picture to obtain the main camera internal and external parameter matrices and distortion coefficient matrix, and perform monocular calibration of the sub-camera from the sub picture to obtain the sub-camera internal and external parameter matrices and distortion coefficient matrix;
  • S70 Perform stereo calibration of the main camera and the sub-camera, to obtain a rotation matrix R, a translation matrix T, an essential matrix E, and a fundamental matrix F.
  • a calibration system for binocular cameras including:
  • the calibration board design module is configured to design a checkerboard grid as the calibration board
  • the image acquisition module is configured to divide the binocular camera into a main camera and a sub-camera, use each camera to collect the checkerboard pictures on the calibration board, and correspondingly obtain a main picture and a sub picture;
  • the monocular calibration module is configured to perform monocular calibration of the main camera from the main picture to obtain the main camera internal and external parameter matrices and distortion coefficient matrix, and to perform monocular calibration of the sub-camera from the sub picture to obtain the sub-camera internal and external parameter matrices and distortion coefficient matrix;
  • the stereo calibration module is configured to perform stereo calibration of the main camera and the sub-camera to obtain a rotation matrix R, a translation matrix T, an essential matrix E, and a fundamental matrix F.
  • An electronic device including a memory and one or more processors, the memory stores computer-readable instructions, and the one or more processors execute the computer-readable instructions to implement the method provided by any embodiment of the present disclosure.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, realize the steps of the binocular camera calibration method provided by any embodiment of the present disclosure.
  • FIG. 1 is a schematic diagram of an existing three-dimensional reconstruction process
  • FIG. 2 is a schematic diagram of a dual-camera calibration process of an existing smart phone
  • FIG. 3 is a schematic diagram of a dual-camera calibration process of a smart phone provided by one or more embodiments of the present disclosure
  • Fig. 4 is a schematic diagram of four checkerboard calibration boards provided by one or more embodiments of the present disclosure.
  • FIG. 5 is a photo of four checkerboard calibration boards provided by one or more embodiments of the present disclosure.
  • Figure 6 shows example pictures captured by a heterogeneous dual camera with and without maintaining a predetermined pattern size; picture (a) is the small-FOV main-camera picture collected while maintaining the predetermined pattern size; picture (b) is the large-FOV sub-camera picture collected while maintaining the predetermined pattern size; picture (c) is the small-FOV main-camera picture collected without maintaining the predetermined pattern size; picture (d) is the large-FOV sub-camera picture collected without maintaining the predetermined pattern size;
  • FIG. 7 is a schematic flowchart of step S50 in FIG. 3;
  • FIG. 8 is a UI interface design diagram of the dual-camera calibration APK provided by one or more embodiments of the present disclosure.
  • FIG. 9 is a photo of the actual effect of the UI interface of the dual-camera calibration APK provided by one or more embodiments of the present disclosure.
  • FIG. 10 is a schematic diagram of a pinhole camera model provided by one or more embodiments of the present disclosure.
  • Fig. 11 is a schematic diagram of transformation of four major coordinate systems of a pinhole camera model provided by one or more embodiments of the present disclosure
  • Figure 12 is a schematic diagram of radial distortion provided by one or more embodiments of the present disclosure; wherein, Figure (a) shows no radial distortion; Figure (b) shows radial barrel distortion; Figure (c) shows radial pincushion distortion;
  • Fig. 13 is a schematic diagram of a radial distortion model provided by one or more embodiments of the present disclosure.
  • Fig. 14 is a schematic diagram of a tangential distortion model provided by one or more embodiments of the present disclosure.
  • FIG. 15 is a diagram of a dual-camera model provided by one or more embodiments of the present disclosure.
  • Fig. 16 is a diagram of an epipolar geometric model provided by one or more embodiments of the present disclosure.
  • Fig. 17 is a schematic diagram of dual camera translation and rotation provided by one or more embodiments of the present disclosure.
  • FIG. 18 is an exemplary flowchart of another preferred implementation of a dual-camera calibration method for a smartphone provided by one or more embodiments of the present disclosure
  • FIG. 19 is a schematic diagram of an ideal dual-camera stereo device provided by one or more embodiments of the present disclosure.
  • Fig. 20 is a schematic diagram after stereo correction provided by one or more embodiments of the present disclosure.
  • Fig. 21 is a structural diagram of a binocular camera calibration system provided by one or more embodiments of the present disclosure.
  • Fig. 22 is a specific structural diagram of a monocular calibration module provided by one or more embodiments of the present disclosure.
  • Fig. 23 is an exemplary structural diagram of another preferred implementation of a binocular camera calibration system provided by one or more embodiments of the present disclosure.
  • Fig. 24 is a schematic diagram of an internal structure of an electronic device provided by one or more embodiments of the present disclosure.
  • binocular vision mainly includes five parts: camera calibration, image distortion correction, stereo rectification, image matching, and 3D reconstruction.
  • dual-camera calibration, the core part of the entire project, has two purposes:
  • one purpose of dual-camera calibration is to clarify the transformation relationship and solve the internal and external parameter matrices. Since this is dual-camera calibration on a mobile phone, the internal parameter matrix of each camera and the external parameter matrix between the two cameras must be solved.
  • the camera's perspective projection has a major problem, distortion; the other purpose of dual-camera calibration is therefore to solve the distortion parameters and then use them for image correction.
  • FIG. 3 shows an exemplary flowchart of a calibration method for a binocular camera according to an embodiment of the present disclosure.
  • the calibration method of the binocular camera provided by the present disclosure includes:
  • S30 Divide the binocular camera into a main camera and a sub-camera, use each camera to collect checkerboard pictures of the calibration board, and correspondingly obtain a main picture and a sub picture.
  • S50 Perform monocular calibration of the main camera from the main picture, and obtain the main camera internal and external parameter matrices and the main camera distortion coefficient matrix.
  • monocular calibration of the sub-camera is performed from the sub picture, and the sub-camera internal and external parameter matrices and the sub-camera distortion coefficient matrix are obtained.
  • S70 Perform stereo calibration of the main camera and the sub-camera, to obtain a rotation matrix R, a translation matrix T, an essential matrix E, and a fundamental matrix F.
  • the usual dual-camera calibration technology directly obtains the internal and external parameters of the main and secondary cameras.
  • the internal parameters obtained in this way have large errors and unstable calibration results.
  • the embodiments of the present disclosure improve the flow of the binocular calibration algorithm.
  • monocular calibration is performed on the main and sub-cameras first, to obtain the internal and external parameters and distortion parameters of each camera individually.
  • the camera calibration results are accordingly more stable.
  • the calibration board contains at least four checkerboards, and the angle, direction, position and posture of each checkerboard in the calibration board are different.
  • in step S10, when the calibration board contains four checkerboards, the four checkerboards are respectively located at the upper-left, upper-right, lower-left and lower-right corners of the calibration board; the checkerboard in the upper-left corner remains horizontal and vertical, and with it as a reference, the checkerboard in the upper-right corner is rotated to the right by a first preset angle about its own central axis, the checkerboard in the lower-left corner is rotated to the left by a second preset angle about its own central axis, and the checkerboard in the lower-right corner is rotated upward by a third preset angle about its own central axis.
  • the usual dual-camera calibration technology often prepares only a single checkerboard calibration board, and then performs monocular or binocular calibration by capturing multiple pictures of it at different angles, directions and positions.
  • because only one checkerboard is captured each time, the calibration efficiency is low and the risk of calibration failure is high.
  • in step S10, the present disclosure improves the checkerboard calibration board.
  • a calibration board with four checkerboards is selected for the dual-camera calibration.
  • each of the four checkerboards has a different angle, direction, position and posture, as shown in Figures 4-5.
  • Fig. 4 is a schematic diagram of the four-checkerboard calibration board, and Fig. 5 shows photos of it.
  • the checkerboard in the upper left corner is kept horizontal and vertical.
  • the other three checkerboards are placed as follows: the checkerboard in the upper-right corner is rotated 30° to the right about its own central axis, the checkerboard in the lower-left corner is rotated 30° to the left about its own central axis, and the checkerboard in the lower-right corner is rotated 30° upward about its own central axis; the distance between adjacent checkerboards is 1 to 3 times the side length of the black and white squares.
  • the specifications of each checkerboard are: pattern size 14x19 (13x18 effective corners); side length of each black and white square 15 mm.
  • the present disclosure takes a calibration board with four checkerboards as an example; in practical applications, calibration boards with six, eight or more checkerboards may also be used.
  • the present disclosure adopts a standard 14x19 black-and-white checkerboard for calibration; checkerboards of different shapes and styles may also be used.
  • the side length of the black and white squares in the present disclosure is 15 mm, and black and white squares with different side lengths may also be used.
  • the specific rotation direction and rotation angle of each checkerboard grid on the calibration board can also be adjusted according to actual needs.
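For context, the board specification above (13x18 effective corners, 15 mm squares) maps directly onto the 3D "object points" that a calibration routine consumes. A minimal Python sketch (the function name and defaults are illustrative, not from the patent):

```python
def checkerboard_object_points(cols=13, rows=18, square_mm=15.0):
    """3D corner coordinates of one checkerboard in board (world) coordinates.

    The board is planar, so every corner lies at z = 0; corners are ordered
    left-to-right, top-to-bottom, matching the reordering step of the method.
    """
    return [(c * square_mm, r * square_mm, 0.0)
            for r in range(rows) for c in range(cols)]

pts = checkerboard_object_points()  # 13 x 18 = 234 corners per board
```

During calibration, one such list is paired with the detected 2D corners of each checkerboard in each picture.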
  • in step S30, when the fields of view of the two cameras differ, the camera with the larger field of view is taken as the sub-camera and the camera with the smaller field of view as the main camera;
  • collecting the checkerboard pictures on the calibration board means that the checkerboard picture collected by the sub-camera is used as the sub picture, and the checkerboard picture collected by the main camera is used as the main picture.
  • the usual dual-camera calibration technology can only do homogeneous dual-camera calibration (binocular calibration with the same focal length and the same resolution), so that open-source binocular calibration algorithm toolsets can be used without re-developing the core calibration algorithm.
  • the usual dual-camera calibration technology cannot do heterogeneous dual-camera calibration (binocular calibration with different focal lengths and different resolutions), and most open-source binocular calibration toolsets do not support it.
  • in step S30, the present disclosure improves heterogeneous dual-camera calibration.
  • the dual-camera calibration scheme of the present disclosure re-develops the technical solution, algorithm flow and core algorithm for heterogeneous dual-camera calibration, and is applicable to homogeneous and, especially, heterogeneous dual-camera calibration.
  • in homogeneous dual-camera calibration, the checkerboard images collected by the main and sub-cameras maintain a predetermined pattern size (to facilitate corner detection by the usual checkerboard corner detection algorithms), and because the focal length and resolution are the same, the checkerboard can occupy as much of the picture as possible in both the main and sub pictures.
  • the dual-camera calibration solution of the present disclosure abandons maintaining the predetermined pattern size for heterogeneous dual-camera calibration, and instead uses the camera with the large FOV (small focal length, high resolution) as the reference when collecting pictures.
  • Figures 6(c) and 6(d) are examples: without maintaining the predetermined pattern size, the checkerboard can fill up each picture as much as possible.
  • step S50 includes the following sub-steps:
  • S51 Use a growth-based checkerboard corner detection algorithm to detect the checkerboards and checkerboard corners in the collected main and sub pictures respectively.
  • S52 Reorder the detected checkerboards and checkerboard corners in the main picture and the sub picture respectively, obtaining the checkerboards and checkerboard corners in the predetermined pattern sequence for the main picture and for the sub picture.
  • S53 Arrange the checkerboards and checkerboard corners of the main picture into the predetermined pattern sequence, then calculate the internal and external parameters and distortion coefficients of the main camera, obtaining the main camera internal and external parameter matrices and the main camera distortion coefficient matrix.
  • the growth-based checkerboard corner detection algorithm is mainly divided into three steps: 1) locating candidate checkerboard corner positions; 2) refining corners and directions to sub-pixel level; 3) optimizing an energy function to grow the checkerboard.
  • for the specific growth-based checkerboard corner detection algorithm, see Geiger A., Moosmann F., Car et al., "Automatic camera and range sensor calibration using a single shot", Robotics and Automation (ICRA), 2012 IEEE International Conference on, IEEE, 2012: 3936-3943, and http://www.cvlibs.net/software/libcbdetect/.
  • the OpenCV library function is generally used to detect checkerboard corners, but it cannot detect corners when the checkerboard pattern size is uncertain, and the efficiency and accuracy of its corner detection are limited.
  • the growth-based checkerboard corner detection algorithm provided by the embodiments of the present disclosure can detect checkerboard corners even when the checkerboard pattern size is uncertain.
  • checkerboards and checkerboard corners in the collected pictures are reordered from left to right and top to bottom. Although the growth-based method finds each checkerboard and a corner sequence for it, that sequence is usually not the predetermined one, and the checkerboards themselves are not arranged in the predetermined order, so each checkerboard and its corresponding corner points need to be reordered.
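The reordering described above can be sketched as a simple two-level sort, assuming the board appears roughly upright in the picture (the patent's growth-based detector is more general):

```python
def reorder_corners(corners, rows, cols):
    """Sort (x, y) corner points top-to-bottom, then left-to-right per row.

    `corners` comes back from the detector in an arbitrary order; calibration
    needs them in a fixed raster order to match the 3D object points.
    """
    assert len(corners) == rows * cols
    by_y = sorted(corners, key=lambda p: p[1])           # top-to-bottom
    ordered = []
    for r in range(rows):
        row = by_y[r * cols:(r + 1) * cols]              # one board row
        ordered.extend(sorted(row, key=lambda p: p[0]))  # left-to-right
    return ordered
```

For strongly rotated boards (such as the 30° tilted checkerboards on the calibration board), the sort key would need to follow the detected board axes instead of raw image y.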
  • the present disclosure develops a dual-camera calibration APK and an algorithm for monocular calibration of the main and sub-cameras; an APK (Android application package) is the application package format of the Android operating system.
  • an algorithm is an accurate and complete description of a problem-solving scheme: a finite series of clear instructions that represents a systematic strategy for solving a class of problems.
  • the specific development of dual-camera calibration APK and algorithm is as follows:
  • the usual dual-camera calibration technology uses a third-party dual-camera calibration APK and algorithm, and its development, optimization, and application are limited.
  • the dual-camera calibration solution adopts a self-developed dual-camera calibration APK and algorithm, which gives great flexibility in development, optimization and application (for example, it is convenient to switch between dual-camera calibrations of different modules, such as from "main camera + depth" to "main camera + wide-angle"), and the scope of application can be optimized and extended to the greatest extent.
  • it can also be used for other types of dual-camera calibration (such as rear main + wide-angle, rear main + macro, etc.).
  • "Dual Camera Calibration" here names the dual-camera calibration APK.
  • the main and sub preview interface is mainly used to display the preview of the checkerboard calibration board as seen by the main camera and by the sub-camera, and is always in the foreground during the entire APK run.
  • START camera button: after it is pressed, the current frames of the main camera and sub-camera are captured and the dual-camera calibration algorithm runs in the background until the message text box shows a calibration success or failure message.
  • the label text box always displays default text labels during the whole APK run; in this solution the three lines "DualCamCalib V1.0.1", "ALG:1.0.1" and "Z00667AA2" are always displayed.
  • the main core code of the whole APK is mainly divided into two parts: the java layer and the cpp layer.
  • the core code related to the dual-camera calibration APK framework is mainly in the java layer, which implements: calling and displaying the main and sub-camera previews in real time; capturing the current frames, saving them as pictures, and calling the cpp-layer dual-camera calibration algorithm; and, after the cpp-layer algorithm finishes, displaying the calibration result in the message text box.
  • the java layer calls the cpp layer through JNI (Java Native Interface).
  • the core code of the dual-camera calibration algorithm is mainly in the cpp layer. This part mainly explains the pinhole camera model and image correction technology. Other related technologies of the dual-camera calibration core algorithm will be detailed in the subsequent introduction of the dual-camera calibration process.
  • the process of smartphone shooting is actually an optical imaging process, among which the pinhole camera model is the most used model for camera imaging.
  • the imaging process of the camera in the pinhole camera model involves four coordinate systems: the world coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system, together with the transformations between them.
  • World coordinate system (x w , y w , z w ): the frame of reference for the position of the target object.
  • the world coordinates can be freely placed according to the convenience of calculation, and the unit is the length unit such as mm.
  • the world coordinate system has three main purposes: determining the position of the calibration object during calibration; serving as the system reference frame for binocular vision, giving the pose of each camera relative to the world coordinate system and thereby the relative relationship between the cameras; and serving as the container for the reconstructed three-dimensional coordinates of the object.
  • the world coordinate system is the first stop that takes objects in view into account.
  • Camera coordinate system (X c , Y c , Z c ): the coordinate system of the object measured from the camera's own angle.
  • the origin of the camera coordinate system is at the camera's optical center, and the z-axis is parallel to the optical axis of the camera; it is the first point of contact with the photographed object.
  • an object in the world coordinate system first undergoes a rigid-body transformation into the camera coordinate system and only then relates to the image coordinate system; the camera coordinate system is thus the link between image coordinates and world coordinates, and its unit is a length unit such as mm.
  • Image coordinate system (x, y): the center of the CMOS image plane is the coordinate origin. It is introduced to describe the projection relationship of an object from the camera coordinate system to the image plane during imaging, so as to further obtain coordinates in the pixel coordinate system.
  • the image coordinate system expresses the position of pixels in the image in physical units (such as millimeters).
  • Pixel coordinate system (u, v): with the upper-left corner of the CMOS image plane as the origin, it describes the coordinates of an image point on the digital image (photo) after the object is imaged; it is the coordinate system in which information is actually read out from the camera.
  • the pixel coordinate system is the image coordinate system in pixels.
  • (x w , y w , z w ) are the coordinates of a point p in the world coordinate system;
  • (u, v) are the coordinates in the pixel coordinate system corresponding to point p;
  • Z c (u, v, 1) denotes the pixel coordinates of point p in homogeneous form, scaled by its depth Z c in the camera coordinate system;
  • (u 0 , v 0 ) is the principal point of the camera (the projection of the optical center onto the image plane);
  • f is the focal length of the camera, in mm unit;
  • (k u , k v ) represents the amount of pixels per millimeter in the (u, v) direction;
  • R is the rotation matrix, and r 00 to r 22 are the elements of R;
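Putting the four coordinate systems and the parameters above together, the pinhole projection of a world point to a pixel can be sketched in pure Python (the numeric intrinsics below are illustrative values, with fx = f·k u and fy = f·k v in pixel units):

```python
def world_to_pixel(p_w, R, T, fx, fy, u0, v0):
    """Project a world point through the four coordinate systems.

    world -> camera: rigid transform   P_c = R * P_w + T
    camera -> image: perspective division  x = X_c / Z_c, y = Y_c / Z_c
    image -> pixel:  u = fx * x + u0,  v = fy * y + v0
    (fx, fy are the focal length in pixel units; (u0, v0) is the principal point.)
    """
    xc, yc, zc = (sum(R[i][j] * p_w[j] for j in range(3)) + T[i]
                  for i in range(3))
    return (fx * xc / zc + u0, fy * yc / zc + v0)

# Illustrative pose and intrinsics: identity rotation, zero translation.
R_I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
uv = world_to_pixel((0.1, 0.0, 2.0), R_I, [0.0, 0.0, 0.0],
                    1000.0, 1000.0, 320.0, 240.0)
```

A point on the optical axis lands exactly on the principal point (u0, v0), which is the anchor of the distortion models discussed next.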
  • lens distortion is mainly divided into radial distortion and tangential distortion; thin-lens distortion also exists but is insignificant compared with the radial and tangential components, so only radial and tangential distortion are considered here.
  • radial distortion is distortion distributed along the lens radius: rays far from the center of the lens are bent more than rays near the center. This distortion is more obvious in ordinary, cheap lenses. Radial distortion mainly includes barrel distortion and pincushion distortion.
  • Figure 12 is a schematic diagram of no distortion, barrel distortion and pincushion distortion respectively.
  • the distortion at the center of the image plane is 0 and becomes more and more severe moving toward the edge along the lens radius.
  • the mathematical model of distortion can be described by the first few terms of the Taylor series expansion around the principal point, usually the first two terms k 1 and k 2 ; for lenses with large distortion, such as fisheye lenses, the third term k 3 and fourth term k 4 can be added.
  • the radial adjustment formulas (4)-(5), with r^2 = x^2 + y^2, are:
  • x0 = x(1 + k1·r^2 + k2·r^4 + k3·r^6 + k4·r^8) (4)
  • y0 = y(1 + k1·r^2 + k2·r^4 + k3·r^6 + k4·r^8) (5)
  • (x, y) are the coordinates in the image coordinate system before radial or tangential correction;
  • (x0, y0) are the coordinates in the image coordinate system after correction;
  • k1 to k4 are the radial distortion parameters.
  • Figure 13 is a schematic diagram of the offset of points at different distances from the optical center after radial distortion of the lens: the farther from the optical center, the greater the radial displacement and hence the distortion; near the optical center there is almost no offset.
  • tangential distortion is caused by the lens not being parallel to the camera sensor plane (image plane), mostly due to installation deviation when the lens is glued into the lens module.
  • the tangential distortion model can be described by two additional parameters p1 and p2, as in equations (6)-(7):
  • x0 = x + [2·p1·x·y + p2·(r^2 + 2x^2)] (6)
  • y0 = y + [p1·(r^2 + 2y^2) + 2·p2·x·y] (7)
  • p1 and p2 are the tangential distortion parameters.
  • Figure 14 shows a schematic diagram of the tangential distortion of a certain lens.
  • the distortion displacement is symmetrical about the line connecting the lower-left and upper-right corners, indicating that the lens is rotated about an axis perpendicular to this direction.
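Equations (4)-(7) combine into a single step on image-plane coordinates. A sketch, assuming the model maps an ideal point to its distorted position (all coefficient values below are illustrative):

```python
def distort(x, y, k1=0.0, k2=0.0, k3=0.0, k4=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1..k4) and tangential (p1, p2) distortion to an
    undistorted image-plane point (x, y), per equations (4)-(7)."""
    r2 = x * x + y * y
    # radial factor: 1 + k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3 + k4 * r2**4
    x0 = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y0 = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x0, y0
```

At the principal point (0, 0) the distortion vanishes and it grows with r, matching Figure 13; correcting an image numerically inverts this mapping.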
  • Figure 15 is the dual-camera model provided by the present disclosure; based on the aforementioned pinhole camera model, for the left and right cameras respectively:
  • (u 1 , v 1 ) is the coordinate in the left pixel coordinate system corresponding to point p
  • (u 2 , v 2 ) is the coordinate in the right pixel coordinate system corresponding to point p
  • Z c1 (u 1 , v 1 , 1) denotes the left-pixel coordinates of point p in homogeneous form, scaled by its depth Z c1 in the left camera coordinate system
  • Z c2 (u 2 , v 2 , 1) denotes the right-pixel coordinates of point p in homogeneous form, scaled by its depth Z c2 in the right camera coordinate system
  • the projection matrix of the right camera and the projection matrix of the left camera are defined accordingly
  • the internal and external parameters and distortion parameters of the main and sub-cameras can thus be obtained.
  • sub-step S54 is also included:
  • the reprojected corners of the checkerboard corners are calculated, and the monocular calibration error of the main camera is then computed from the actual three-dimensional corner coordinates and the reprojected corner coordinates.
  • likewise, the reprojected corners are calculated for the sub-camera, and the monocular calibration error of the sub-camera is computed from the actual 3D corner coordinates and the reprojected corner coordinates.
  • the calculation of the single-target calibration error of the primary and secondary cameras has two functions: one is used as a measurement standard for the accuracy of the calibration; the other is used as a reference standard for the calibration accuracy in the debugging process.
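The monocular calibration error above is in essence an RMS reprojection error. A hedged sketch of its computation under the ideal pinhole model of equation (3), ignoring distortion (all matrices below are illustrative, not from a real calibration):

```python
import numpy as np

def reprojection_rms(object_pts, image_pts, M, R, T):
    """Project 3-D checkerboard corners with intrinsics M and extrinsics
    (R, T), then compare against the detected 2-D corners (RMS error)."""
    cam = object_pts @ R.T + T            # world -> camera coordinates
    proj = cam @ M.T                      # camera -> homogeneous pixels
    proj = proj[:, :2] / proj[:, 2:3]     # perspective divide
    err = np.linalg.norm(proj - image_pts, axis=1)
    return np.sqrt(np.mean(err ** 2))
```

For a perfectly consistent set of corners the error is zero; shifting every detected corner by (3, 4) pixels yields an RMS error of 5.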
• The main and sub-camera stereo calibration requires the following parameters: the main and sub-camera intrinsic parameter matrices, the main and sub-camera distortion coefficient matrices, the rotation matrix R, the translation matrix T, the essential matrix E, and the fundamental matrix F (the main and sub-camera intrinsic parameter matrices and distortion coefficient matrices have already been obtained in the preceding monocular calibration).
• The binocular camera must additionally calibrate the relative relationship between the left and right camera coordinate systems, i.e., the rotation matrix R, translation matrix T, essential matrix E, and fundamental matrix F must be obtained, as follows:
  • P l and P r have the following relationship:
  • P l is the coordinates of point P in the coordinate system of the left camera
  • P r is the coordinates of point P in the coordinate system of the right camera
• R_l and T_l are the rotation matrix and translation matrix of the left camera relative to the calibration object, obtained through monocular calibration
• R_r and T_r are the rotation matrix and translation matrix of the right camera relative to the calibration object, obtained through monocular calibration
• the superscript "-1" denotes the inverse of a matrix
• the superscript "T" denotes the transpose of a matrix.
• By performing monocular calibration of the left and right cameras separately, R_l, T_l, R_r, and T_r are obtained; substituting them into equation (9) yields the rotation matrix R and translation matrix T between the left and right cameras.
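Equation (9) can be sketched directly in NumPy; the orthonormality of rotation matrices is what allows R_l^{-1} to be replaced by R_l^T:

```python
import numpy as np

def stereo_extrinsics(R_l, T_l, R_r, T_r):
    """Compose the left-to-right rotation and translation from the two
    monocular extrinsics: R = R_r R_l^T, T = T_r - R T_l."""
    R = R_r @ R_l.T        # R_l^-1 == R_l^T for orthonormal rotations
    T = T_r - R @ T_l
    return R, T
```

A quick consistency check: for any world point P_w, the camera coordinates P_l = R_l P_w + T_l and P_r = R_r P_w + T_r should satisfy P_r = R P_l + T.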
  • the intersection of the straight line connecting the projection center and the projective plane is called the epipole, and the dotted line connecting the projection point and the pole is called the epipolar line.
  • the two epipolar lines are obviously coplanar, and the plane where the two epipolar lines lie is called the epipolar plane.
• Every three-dimensional point in the cameras' view (point P in the figure) is contained in an epipolar plane that intersects each image.
• The line produced by the intersection of the image plane and the epipolar plane is the epipolar line.
• The epipolar constraint means that, once the epipolar geometry of the stereo rig is known, the two-dimensional search for matching features between the two images can be reduced to a one-dimensional search along the epipolar lines (when applying the triangulation principle). This not only greatly saves computation, but also helps exclude many points that could produce false matches.
• The essential matrix E contains the information about the translation (Translation) and rotation (Rotation) of the two cameras in physical space.
• Besides containing the same information, the fundamental matrix F also contains the intrinsic parameters of the two cameras, and can relate the two cameras in the pixel coordinate system.
• Let n̂ be the normal vector of an epipolar plane, x̂ the vector of an arbitrary point on the plane, and â the vector of a fixed point on the plane; since the normal vector is perpendicular to the plane, (x̂ − â) · n̂ = 0 (equation 14).
• The normal vector can be constructed by a cross product (a cross product yields a perpendicular vector), so equation (14) can be rewritten as equation (15).
• S is the skew-symmetric matrix form of the translation matrix T.
• f_l and f_r are the focal lengths of the left and right cameras
• z_l and z_r are the z components of the projection points in the left and right camera coordinate systems.
• It may seem that the left imaged point can be mapped to the other side through the essential matrix E, but E is a rank-deficient matrix (a 3x3 matrix of rank 2), so it can only map a point to a line.
• The essential matrix E contains all the geometric information relating the two cameras, but does not include the cameras' intrinsic parameters.
• The vectors in the derivation above are points in the geometric sense only; they are associated with pixel points by multiplication with the intrinsic matrix
• M is the intrinsic matrix
• M_l and M_r are the intrinsic matrices of the left and right cameras respectively.
• The difference between the fundamental matrix F and the essential matrix E is that F operates in image pixel coordinates while E operates in the physical coordinate system. Like E, the fundamental matrix F is also a rank-2 3x3 matrix.
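Under the derivation's convention p̂_r = R(p̂_l − T), the essential and fundamental matrices can be assembled as follows (a sketch; the identity intrinsic matrices in the check are placeholders):

```python
import numpy as np

def essential_fundamental(R, T, M_l, M_r):
    """E = R S, with S the skew-symmetric form of T (equations 17-19);
    F = (M_r^-1)^T E M_l^-1 (equation 24)."""
    S = np.array([[0, -T[2], T[1]],
                  [T[2], 0, -T[0]],
                  [-T[1], T[0], 0]])
    E = R @ S
    F = np.linalg.inv(M_r).T @ E @ np.linalg.inv(M_l)
    return E, F
```

As the text notes, E has rank 2, and for corresponding points the epipolar constraint p̂_r^T E p̂_l = 0 holds.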
• Referring to Figure 18, step S90 is also included after step S70: performing stereo rectification of the main and sub-cameras with the Bouguet algorithm.
• The main and sub-camera intrinsic parameter matrices, distortion matrices, extrinsic matrix, essential matrix, and fundamental matrix obtained through monocular and stereo calibration serve to determine the relationship between the geometric position of a point on the surface of a space object in three-dimensional space and its corresponding point in the image. On that basis, this stage obtains an approximately ideal stereo rig model through stereo rectification, and then computes disparity and depth information through triangulation. Triangulation and stereo rectification are described below.
• An ideal dual-camera stereo rig has no image distortion: the image planes are coplanar, the optical axes are parallel, the focal lengths are equal, and the principal points have been calibrated to their proper positions.
• The cameras are assumed to be frontal-parallel, i.e., each pixel row of one camera is precisely aligned with the corresponding pixel row of the other camera; a point P in the physical world then has an imaging point in each of the left and right images.
• x_l and x_r are the abscissas of the projections of a space point p in the left and right image coordinate systems, and d = x_l − x_r is the disparity
• B is the baseline (the distance between the origins of the left and right image coordinate systems)
• f is the focal length
• Z is the depth, given by Z = fB/d (equation 26).
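Equation (26), Z = fB/d with d = x_l − x_r, is a one-liner; the sketch below only adds a guard for non-positive disparity:

```python
def depth_from_disparity(x_l, x_r, f, B):
    """Equation (26): Z = f*B / d, with disparity d = x_l - x_r.
    f is in pixels; B and the returned depth share the same length unit."""
    d = x_l - x_r
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * B / d
```

For example, with f = 700 px, B = 0.06 m, and a disparity of 10 px, the depth is 4.2 m.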
• The translation vector between the projection centers of the left and right cameras points along the direction of the left epipole, so the first basis vector is constructed as e_1 = T/||T||.
  • r l , r r are the synthetic rotation matrices of the left and right cameras
  • R l ′, R r ′ are the final rotation matrices of the left and right cameras after stereo correction, respectively.
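Splitting R into the half-rotations r_l and r_r can be sketched with the axis-angle (Rodrigues) representation; the convention suggested in the comment (r_l = R^{1/2}, r_r = R^{-1/2}) is one plausible reading of the Bouguet step, not the only one:

```python
import numpy as np

def half_rotation(R):
    """Return the rotation about R's axis by half of R's angle
    (used by Bouguet to share the relative rotation between views)."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if angle < 1e-12:
        return np.eye(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2 * np.sin(angle))
    a, c, s = axis, np.cos(angle / 2), np.sin(angle / 2)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    # Rodrigues formula for the half-angle rotation
    return c * np.eye(3) + s * K + (1 - c) * np.outer(a, a)

# One plausible convention: r_l = half_rotation(R), r_r = half_rotation(R).T
```

By construction, composing the half-rotation with itself recovers R.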
  • the projection matrix of the main and sub-cameras, the effective rectangular area of the main and sub-cameras, and the reprojection matrix are also calculated.
  • a simple derivation is made for the calculation of the reprojection matrix.
  • a reprojection matrix can map a 2D point in the image plane back to 3D coordinates in the physical world.
• According to triangulation and the similar-triangle theorem:
  • (X, Y, Z) are coordinates in the world coordinate system
  • (x, y) are coordinates in the image coordinate system
• (c_x, c_y) is the optical center (principal point) of the camera.
  • (X', Y', Z') are the unnormalized reprojection coordinates
  • W is the normalization parameter in the transformation process
  • W' is the final normalization parameter
• c′_x is the x-coordinate of the optical center (principal point) in the rectified right image's reprojection coordinate system
• the reprojection matrix Q is defined as:

Q = [1, 0, 0, −c_x; 0, 1, 0, −c_y; 0, 0, 0, f; 0, 0, −1/T_x, (c_x − c′_x)/T_x]

(rows separated by semicolons; T_x is the signed baseline).
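With Q defined, reprojecting a pixel plus its disparity back to 3-D is a single homogeneous multiplication. A sketch (the sign convention for T_x is an assumption; here T_x = −B reproduces Z = fB/d):

```python
import numpy as np

def reprojection_matrix(cx, cy, f, Tx, cx_r=None):
    """Build the 4x4 reprojection matrix Q; (cx, cy) is the principal
    point, f the focal length, Tx the signed baseline, and cx_r the
    right camera's principal-point x (equal to cx for an ideal rig)."""
    if cx_r is None:
        cx_r = cx
    return np.array([
        [1, 0, 0, -cx],
        [0, 1, 0, -cy],
        [0, 0, 0, f],
        [0, 0, -1 / Tx, (cx - cx_r) / Tx],
    ])

def reproject(Q, x, y, d):
    """Map pixel (x, y) with disparity d to 3-D: [X' Y' Z' W']^T = Q [x y d 1]^T."""
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return np.array([X, Y, Z]) / W
```

For instance, with c_x = c_y = 50, f = 100, and baseline 0.1, a pixel 10 px right of the principal point with disparity 10 maps to (0.1, 0, 1.0), matching Z = fB/d = 1.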
  • the present disclosure provides a binocular camera calibration system 100, including:
  • the calibration board design module 110 is configured to design a checkerboard grid as the calibration board
• the image acquisition module 120 is configured to divide the binocular camera into a main camera and a sub-camera, and to use the main camera and the sub-camera to separately capture checkerboard pictures of the calibration board, correspondingly obtaining a main-camera picture and a sub-camera picture;
• the monocular calibration module 130 is configured to perform monocular calibration of the main camera from the main-camera picture to obtain the main-camera intrinsic and extrinsic parameter matrices and the main-camera distortion coefficient matrix, and to perform monocular calibration of the sub-camera from the sub-camera picture to obtain the sub-camera intrinsic and extrinsic parameter matrices and the sub-camera distortion coefficient matrix;
• the stereo calibration module 140 is configured to perform stereo calibration of the main camera and the sub-camera, obtaining the rotation matrix R, translation matrix T, essential matrix E, and fundamental matrix F.
• Because monocular calibration first obtains the intrinsic and extrinsic parameters and distortion parameters of each camera, the resulting intrinsic-parameter error is small and the dual-camera calibration results are relatively more stable.
  • the calibration board design module 110 is configured such that the designed calibration board includes at least four checkerboards, and the angle, direction, position and posture of each checkerboard in the calibration board are different.
• The calibration board designed by the calibration board design module 110 allows four (or more) checkerboards to be captured in a single shot for dual-camera calibration, which improves calibration efficiency and reduces the risk of calibration failure.
• The image acquisition module 120 is configured such that, when the fields of view of the binocular cameras differ, the camera with the larger field of view is taken as the sub-camera and the camera with the smaller field of view as the main camera; the checkerboard pictures of the calibration board are captured with the sub-camera as the reference, the picture captured by the sub-camera being the sub-camera picture and the picture captured by the main camera being the main-camera picture.
• The picture captured by the large-FOV camera keeps the predetermined pattern size, with the checkerboard filling the picture as much as possible.
• The picture captured by the small-FOV camera cannot keep the predetermined pattern size, but the checkerboard fills the picture as much as possible.
• The monocular calibration module 130 includes:
• an acquisition unit 131 configured to use a growth-based checkerboard corner-detection algorithm to detect the checkerboards and checkerboard corners in the captured main-camera and sub-camera pictures respectively;
• a reordering unit 132 configured to reorder the detected checkerboards and checkerboard corners in the main-camera picture and the sub-camera picture, correspondingly obtaining the checkerboards and checkerboard corners of the predetermined pattern sequence for the main-camera picture and for the sub-camera picture;
• a dual-camera calibration algorithm unit 133 configured to regularly align the checkerboards and checkerboard corners of the predetermined pattern sequence of the main-camera picture and then compute the intrinsic and extrinsic parameters and distortion coefficients of the main camera, obtaining the main-camera intrinsic and extrinsic parameter matrices and distortion coefficient matrix.
• The dual-camera calibration algorithm unit 133 is also configured to compute, from the checkerboards and checkerboard corners of the predetermined pattern sequence of the sub-camera picture, the intrinsic and extrinsic parameters and distortion coefficients of the sub-camera, obtaining the sub-camera intrinsic and extrinsic parameter matrices and distortion coefficient matrix.
• The monocular calibration module also includes a monocular calibration error calculation unit configured to reproject the computed checkerboard corners according to the intrinsic and extrinsic parameters and distortion coefficients of the main camera, and then compute the monocular calibration error of the main camera from the actual three-dimensional corner coordinates and the reprojected corner coordinates.
• The monocular calibration error calculation unit is likewise configured to reproject the computed checkerboard corners according to the intrinsic and extrinsic parameters and distortion coefficients of the sub-camera, and then compute the monocular calibration error of the sub-camera from the actual three-dimensional corner coordinates and the reprojected corner coordinates.
• The binocular camera calibration system further includes a stereo rectification module 150 configured to perform stereo rectification of the main and sub-cameras using the Bouguet algorithm.
  • Each module in the calibration system of the above-mentioned binocular camera can be fully or partially realized by software, hardware and a combination thereof.
  • the above-mentioned modules can be embedded in or independent of one or more processors in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that one or more processors can call and execute the above The operation corresponding to the module.
  • an electronic device is provided.
  • the electronic device may be a terminal, and its internal structure may be as shown in FIG. 24 .
  • the electronic device includes a processor, a memory, a communication interface, a display screen and an input device connected through a system bus. Wherein, the processor of the electronic device is used to provide calculation and control capabilities.
  • the memory of the electronic device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer readable instructions.
  • the internal memory provides an environment for the execution of the operating system and computer readable instructions in the non-volatile storage medium.
  • the communication interface of the electronic device is used to communicate with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, an operator network, near field communication (NFC) or other technologies.
• When the computer-readable instructions are executed by the processor, a calibration method for the binocular camera is implemented.
  • the display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen
• The input device of the electronic device may be a touch layer covering the display screen; a button, trackball, or touchpad provided on the housing of the electronic device; or an external keyboard, touchpad, or mouse.
• FIG. 24 is only a block diagram of a partial structure related to the disclosed solution and does not limit the computer equipment to which the disclosed solution is applied.
• A specific computer device may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
  • the binocular camera calibration system provided by the present disclosure can be implemented in the form of a computer readable instruction, and the computer readable instruction can be run on a computer device as shown in FIG. 24 .
  • Each program module forming the calibration system of the binocular camera can be stored in the memory of the computer device, such as the calibration board design module, image acquisition module, single-target calibration module and stereo calibration module shown in Figure 21.
  • the computer-readable instructions constituted by each program module enable one or more processors to execute the steps in the binocular camera calibration method of each embodiment of the present disclosure described in this specification.
  • the computer device shown in FIG. 24 may execute step S10 through the calibration board design module in the binocular camera calibration system as shown in FIG. 21 .
  • the computer device can execute step S30 through the image acquisition module.
• the computer device can execute step S50 through the monocular calibration module.
  • the computer device can execute step S70 through the stereo calibration module.
• An electronic device is provided, including a memory and one or more processors, where computer-readable instructions are stored in the memory; when the one or more processors execute the computer-readable instructions, the steps in the above-described method embodiments are implemented.
• One or more non-volatile computer-readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the steps of the method embodiments described above are implemented.
• Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like.
• Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration, RAM may take many forms, such as static RAM (SRAM) and dynamic RAM (DRAM).


Abstract

The present disclosure discloses a calibration method, system, electronic device, and storage medium for a binocular camera. The method includes: using checkerboards as the calibration board; dividing the binocular camera into a main camera and a sub-camera, and using the main camera and the sub-camera to separately capture checkerboard pictures of the calibration board, obtaining a main-camera picture and a sub-camera picture; performing monocular calibration of the main camera from the main-camera picture to obtain the main-camera intrinsic and extrinsic parameter matrices and the main-camera distortion coefficient matrix; performing monocular calibration of the sub-camera from the sub-camera picture to obtain the sub-camera intrinsic and extrinsic parameter matrices and the sub-camera distortion coefficient matrix; and then performing stereo calibration of the main camera and the sub-camera to obtain the rotation matrix, translation matrix, essential matrix, and fundamental matrix. Because the present disclosure performs monocular calibration of the main and sub-cameras before dual-camera stereo calibration, obtaining the intrinsic, extrinsic, and distortion parameters of each monocular camera, the resulting intrinsic-parameter error is small and the dual-camera calibration results are relatively more stable.

Description

Calibration method, system, electronic device, and storage medium for a binocular camera

This disclosure claims priority to Chinese patent application No. 202111122839.9, filed with the China National Intellectual Property Administration on September 24, 2021 and entitled "Calibration method, system, electronic device, and storage medium for a binocular camera", the entire contents of which are incorporated herein by reference.

Technical Field

The present disclosure relates to a calibration method, system, electronic device, and storage medium for a binocular camera.

Background

Whether in image measurement or machine-vision applications, calibration of camera parameters is a critical step: the accuracy of the calibration results and the stability of the algorithm directly affect the accuracy of the results the camera produces. Doing camera calibration well is therefore a prerequisite for the subsequent work, and improving calibration accuracy is a focus of research and production.

Smartphones increasingly adopt multi-camera designs, in which any two cameras can be paired to implement a dual-camera function for improved image quality, background blur (bokeh), optical zoom, three-dimensional reconstruction, and so on. The key technology of smartphone multi-camera solutions is the dual-camera solution, and dual-camera calibration is in turn the key step of that solution, so its importance in smartphone multi-camera solutions keeps growing.

Dual-camera calibration means that, in image measurement and machine-vision applications, a geometric model of camera imaging must be established to determine the relationship between the geometric position of a point on the surface of a space object in three-dimensional space and its corresponding point in the image; the parameters of this geometric model are the camera parameters (intrinsic parameters, extrinsic parameters, distortion parameters). In most cases these parameters can only be obtained through experiment and computation; this process of solving for the geometric model parameters is called camera calibration, and dual-camera calibration is the process of calibrating two cameras.

Current dual-camera calibration techniques perform stereo calibration of the main and sub-cameras directly and obtain their intrinsic and extrinsic parameters directly, which yields large intrinsic-parameter errors and unstable calibration results.

Summary

(1) Technical problem to be solved

Current dual-camera calibration techniques have large intrinsic-parameter errors and unstable calibration results.

(2) Technical solution

According to various embodiments of the present disclosure, a calibration method, system, electronic device, and storage medium for a binocular camera are provided.

A calibration method for a binocular camera includes the following steps:
S10: using checkerboards as the calibration board;
S30: dividing the binocular camera into a main camera and a sub-camera, and using the main camera and the sub-camera to separately capture checkerboard pictures of the calibration board, correspondingly obtaining a main-camera picture and a sub-camera picture;
S50: performing monocular calibration of the main camera from the main-camera picture to obtain the main-camera intrinsic and extrinsic parameter matrices and the main-camera distortion coefficient matrix; and performing monocular calibration of the sub-camera from the sub-camera picture to obtain the sub-camera intrinsic and extrinsic parameter matrices and the sub-camera distortion coefficient matrix;
S70: then performing stereo calibration of the main camera and the sub-camera to obtain the rotation matrix R, translation matrix T, essential matrix E, and fundamental matrix F.

A calibration system for a binocular camera includes:
a calibration board design module configured to design checkerboards as the calibration board;
an image acquisition module configured to divide the binocular camera into a main camera and a sub-camera, and to use the main camera and the sub-camera to separately capture checkerboard pictures of the calibration board, correspondingly obtaining a main-camera picture and a sub-camera picture;
a monocular calibration module configured to perform monocular calibration of the main camera from the main-camera picture to obtain the main-camera intrinsic and extrinsic parameter matrices and the main-camera distortion coefficient matrix, and to perform monocular calibration of the sub-camera from the sub-camera picture to obtain the sub-camera intrinsic and extrinsic parameter matrices and the sub-camera distortion coefficient matrix;
a stereo calibration module configured to perform stereo calibration of the main camera and the sub-camera to obtain the rotation matrix R, translation matrix T, essential matrix E, and fundamental matrix F.

An electronic device includes a memory and one or more processors; the memory stores computer-readable instructions, and when the one or more processors execute the computer-readable instructions, the steps of the binocular camera calibration method provided in any embodiment of the present disclosure are implemented.

One or more non-volatile computer-readable storage media store computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the steps of the binocular camera calibration method provided in any embodiment of the present disclosure are implemented.
Brief Description of the Drawings

Other features, objects, and advantages of the present disclosure will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:
Figure 1 is a schematic diagram of an existing three-dimensional reconstruction pipeline;
Figure 2 is a schematic diagram of an existing smartphone dual-camera calibration flow;
Figure 3 is a schematic diagram of a smartphone dual-camera calibration flow provided by one or more embodiments of the present disclosure;
Figure 4 is a schematic diagram of a four-checkerboard calibration board provided by one or more embodiments of the present disclosure;
Figure 5 is a photograph of a four-checkerboard calibration board provided by one or more embodiments of the present disclosure;
Figure 6 shows examples of heterogeneous dual-camera capture with and without keeping the predetermined pattern size: (a) a small-FOV main-camera picture captured while keeping the predetermined pattern size; (b) a large-FOV sub-camera picture captured while keeping the predetermined pattern size; (c) a small-FOV main-camera picture captured without keeping the predetermined pattern size; (d) a large-FOV sub-camera picture captured without keeping the predetermined pattern size;
Figure 7 is a detailed flow diagram of step S50 in Figure 3;
Figure 8 is the UI design of the dual-camera calibration APK provided by one or more embodiments of the present disclosure;
Figure 9 is a photograph of the actual UI of the dual-camera calibration APK provided by one or more embodiments of the present disclosure;
Figure 10 is a schematic diagram of the pinhole camera model provided by one or more embodiments of the present disclosure;
Figure 11 is a schematic diagram of the four coordinate-system transformations of the pinhole camera model provided by one or more embodiments of the present disclosure;
Figure 12 is a schematic diagram of radial distortion provided by one or more embodiments of the present disclosure: (a) no radial distortion; (b) radial barrel distortion; (c) radial pincushion distortion;
Figure 13 is a schematic diagram of the radial distortion model provided by one or more embodiments of the present disclosure;
Figure 14 is a schematic diagram of the tangential distortion model provided by one or more embodiments of the present disclosure;
Figure 15 is the dual-camera model provided by one or more embodiments of the present disclosure;
Figure 16 is the epipolar geometry model provided by one or more embodiments of the present disclosure;
Figure 17 is a schematic diagram of dual-camera translation and rotation provided by one or more embodiments of the present disclosure;
Figure 18 is an exemplary flow diagram of another preferred implementation of the smartphone dual-camera calibration method provided by one or more embodiments of the present disclosure;
Figure 19 is a schematic diagram of an ideal dual-camera stereo rig provided by one or more embodiments of the present disclosure;
Figure 20 is a schematic diagram after stereo rectification provided by one or more embodiments of the present disclosure;
Figure 21 is a structural diagram of the binocular camera calibration system provided by one or more embodiments of the present disclosure;
Figure 22 is a detailed structural diagram of the monocular calibration module provided by one or more embodiments of the present disclosure;
Figure 23 is an exemplary structural diagram of another preferred implementation of the binocular camera calibration system provided by one or more embodiments of the present disclosure;
Figure 24 is a schematic diagram of the internal structure of an electronic device provided by one or more embodiments of the present disclosure.
Detailed Description

The present disclosure is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.

It should be noted that, where no conflict arises, the embodiments of the present disclosure and the features in the embodiments may be combined with each other. The present disclosure is described in detail below with reference to the drawings and in combination with the embodiments.

Taking smartphone three-dimensional reconstruction as an example, spatial localization must first be performed with the binocular cameras, so dual-camera calibration is the core of the whole project. As shown in the binocular-vision pipeline of Figure 1, binocular vision mainly comprises five parts: camera calibration, image distortion correction, camera rectification, image matching, and three-dimensional reconstruction. Dual-camera calibration, the core of the project, has two purposes:

First, to recover the real-world position of an imaged object, one needs to know how objects in the world are transformed onto the phone's image plane. One purpose of dual-camera calibration is therefore to determine this transformation, i.e., to solve for the intrinsic and extrinsic parameter matrices. Since this is smartphone dual-camera calibration, the intrinsic matrix of each of the two cameras and the extrinsic matrix between them must be solved.

Second, the camera's perspective projection has a serious problem, distortion, so the other purpose of dual-camera calibration is to solve for the distortion parameters and then use them for image correction.

Figure 2 shows a typical existing smartphone dual-camera calibration flow, which has the following technical drawbacks:
(1) Only one checkerboard calibration board is prepared, and only one checkerboard is captured at a time for binocular calibration; calibration efficiency is low and the risk of calibration failure is high.
(2) Binocular calibration obtains the main and sub-camera intrinsic parameters directly, with large errors and unstable calibration results.
(3) Only homogeneous dual-camera calibration (binocular calibration with identical focal length and resolution) is possible; heterogeneous dual-camera calibration (binocular calibration with different focal lengths and resolutions) is not.
(4) Third-party dual-camera calibration APKs and algorithms are used, which limits development, optimization, and application.

Referring to Figure 3, an exemplary flow diagram of the binocular camera calibration method provided by an embodiment of the present disclosure is shown. As shown in Figure 3, in this embodiment the method provided by the present disclosure includes:
S10: using checkerboards as the calibration board.
S30: dividing the binocular camera into a main camera and a sub-camera, and using the main camera and the sub-camera to separately capture checkerboard pictures of the calibration board, correspondingly obtaining a main-camera picture and a sub-camera picture.
S50: performing monocular calibration of the main camera from the main-camera picture to obtain the main-camera intrinsic and extrinsic parameter matrices and the main-camera distortion coefficient matrix; and performing monocular calibration of the sub-camera from the sub-camera picture to obtain the sub-camera intrinsic and extrinsic parameter matrices and the sub-camera distortion coefficient matrix.
S70: then performing stereo calibration of the main camera and the sub-camera to obtain the rotation matrix R, translation matrix T, essential matrix E, and fundamental matrix F.

Usual dual-camera calibration techniques obtain the main and sub-camera intrinsic and extrinsic parameters directly, which yields large intrinsic-parameter errors and unstable results. Embodiments of the present disclosure improve the binocular calibration flow: before dual-camera stereo calibration, monocular calibration is first performed on the main and sub-cameras separately to obtain each monocular camera's intrinsic, extrinsic, and distortion parameters, so the intrinsic-parameter error is small and the dual-camera calibration results are relatively more stable.

In step S10, the calibration board contains at least four checkerboards, and the angle, direction, position, and posture of each checkerboard on the board are different.

Specifically, in step S10, when the calibration board contains four checkerboards, they are located at the upper-left, upper-right, lower-left, and lower-right corners of the board. The upper-left checkerboard stays horizontal and vertical; with it as reference, the upper-right checkerboard is rotated rightward about its own central axis by a first preset angle, the lower-left checkerboard is rotated leftward about its own central axis by a second preset angle, and the lower-right checkerboard is rotated upward about its own central axis by a third preset angle.

Usual dual-camera calibration techniques prepare only one checkerboard calibration board and capture pictures of it many times at different angles, directions, positions, and postures for monocular or binocular calibration. Capturing only one checkerboard at a time makes calibration inefficient and failure-prone.

In step S10, the present disclosure improves the checkerboard calibration board: a board with four checkerboards is used for calibration, each with a different angle, direction, position, and posture, as shown in Figures 4-5. Figure 4 is a schematic diagram and Figure 5 a photograph of the four-checkerboard calibration board. In Figures 4-5 the upper-left checkerboard stays horizontal and vertical; relative to it, the upper-right checkerboard is rotated rightward about its own central axis by 30 degrees, the lower-left checkerboard is rotated leftward about its own central axis by 30 degrees, and the lower-right checkerboard is rotated upward about its own central axis by 30 degrees. The spacing between checkerboards is 1-3 times the side length of the black/white squares. Each checkerboard has a pattern size of 14x19 (13x18 effective corners) and a square side length of 15 mm.

With a four-checkerboard calibration board, four checkerboards can be captured in a single shot for dual-camera calibration, which greatly improves calibration efficiency and reduces the risk of calibration failure.

It should be noted that the present disclosure is described with a four-checkerboard calibration board as an example; in practice, boards with six, eight, or more checkerboards may also be used. In addition, although the present disclosure uses a standard 14x19 black-and-white checkerboard, checkerboards of different shapes and styles may be used; the 15 mm black/white square side length may be varied; and the specific rotation direction and angle of each checkerboard on the board may be adjusted to actual needs.

In step S30, when the fields of view of the binocular cameras differ, the camera with the larger field of view is taken as the sub-camera and the camera with the smaller field of view as the main camera; the checkerboard pictures of the calibration board are captured with the sub-camera as the reference, the picture captured by the sub-camera being the sub-camera picture and the picture captured by the main camera being the main-camera picture.

Usual dual-camera calibration can only handle homogeneous dual cameras (same focal length and resolution), which allows open-source binocular calibration toolkits to be used without re-developing the core algorithm; it cannot handle heterogeneous dual cameras (different focal lengths and resolutions), and most open-source binocular calibration toolkits do not support heterogeneous calibration either.

In step S30, the present disclosure improves heterogeneous dual-camera calibration: the technical solution, algorithm flow, and core algorithms of binocular calibration are re-developed for heterogeneous dual cameras, and apply to homogeneous and, especially, heterogeneous dual-camera calibration.

In usual (homogeneous) dual-camera calibration, the checkerboard pictures captured by both cameras keep the predetermined pattern size (so that usual checkerboard corner-detection algorithms can detect the corners), and since focal length and resolution are identical, the checkerboard can fill both pictures as much as possible.

As shown in Figure 6, for heterogeneous dual cameras the focal lengths and resolutions differ, and keeping the predetermined pattern size forces capture referenced to the small-FOV camera (large focal length, small resolution) (Figures 6(a) and 6(b)). But in the large-FOV picture so captured (small focal length, large resolution), the checkerboard does not fill the borders and edges of the picture, making the computed intrinsic parameters less accurate (Figure 6(b)).

The dual-camera calibration solution of the present disclosure abandons keeping the predetermined pattern size for heterogeneous calibration and instead captures with the large-FOV camera (small focal length, large resolution) as the reference: the large-FOV camera's picture keeps the predetermined pattern size with the checkerboard filling the picture as much as possible, while the small-FOV camera's picture cannot keep the predetermined pattern size but has the checkerboard fill the picture as much as possible (Figures 6(c) and 6(d); in fact, with this scheme neither the large- nor the small-FOV camera needs to keep the predetermined pattern size, letting the checkerboard fill the pictures as much as possible). Of course, changing the traditional capture method also requires changing the related algorithms: a growth-based checkerboard corner-detection algorithm is adopted in the monocular calibration stage of heterogeneous dual-camera calibration to guarantee the accuracy and efficiency of corner detection; thanks to it, both the large- and small-FOV cameras can capture the main and sub-camera pictures without keeping the predetermined pattern size, with the checkerboard filling the pictures as much as possible.
Specifically, referring to Figure 7, step S50 includes the following substeps:

S51: detecting the checkerboards and checkerboard corners in the captured main-camera picture and sub-camera picture with a growth-based checkerboard corner-detection algorithm.

S52: re-sorting the detected checkerboards and checkerboard corners in the main-camera picture and the sub-camera picture, correspondingly obtaining the checkerboards and checkerboard corners of the predetermined pattern sequence for the main-camera picture, and those of the predetermined pattern sequence for the sub-camera picture.

S53: regularly aligning the checkerboards and checkerboard corners of the predetermined pattern sequence of the main-camera picture, then computing the intrinsic and extrinsic parameters and distortion coefficients of the main camera to obtain the main-camera intrinsic and extrinsic parameter matrices and the main-camera distortion coefficient matrix; and, from the checkerboards and checkerboard corners of the predetermined pattern sequence of the sub-camera picture, computing the intrinsic and extrinsic parameters and distortion coefficients of the sub-camera to obtain the sub-camera intrinsic and extrinsic parameter matrices and the sub-camera distortion coefficient matrix.

Specifically, in substep S51, the growth-based checkerboard corner-detection algorithm has three main steps: 1) locating candidate corner positions; 2) refining corners and orientations to sub-pixel accuracy; 3) optimizing an energy function and growing the checkerboards. For details see the paper (Geiger A, Moosmann F, Car Ö, et al. Automatic camera and range sensor calibration using a single shot. Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943) and the website (http://www.cvlibs.net/software/libcbdetect/).

It should be noted that the prior art generally uses OpenCV library functions for checkerboard corner detection, but these cannot detect corners of checkerboards with an undetermined pattern size, and their detection efficiency and accuracy are limited. The growth-based checkerboard corner-detection algorithm provided by embodiments of the present disclosure can detect corners of checkerboards with undetermined pattern sizes with good accuracy, although detection is slower and the detected checkerboards and corners are not in a predetermined sequence.

Specifically, in substep S52, the checkerboards and checkerboard corners are re-sorted from left to right and top to bottom against the captured picture. Although the growth-based detection finds every checkerboard and its corner sequence (usually an undetermined sequence), neither the corners within a checkerboard nor the checkerboards themselves are arranged in the predetermined sequence, so every checkerboard and its corresponding corners must be re-sorted.
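Since the growth-based detector returns checkerboards and corners in an undetermined order, substep S52 re-sorts them left-to-right, top-to-bottom. A simplified NumPy sketch (it assumes rows can be separated by their y coordinates, which holds for a roughly axis-aligned board; the rotated boards of Figure 4 would need an orientation-aware ordering):

```python
import numpy as np

def reorder_corners(corners, rows, cols):
    """Sort detected (x, y) corner points into row-major order:
    group by y into rows of `cols` points, then sort each row by x."""
    sorted_pts = corners[np.argsort(corners[:, 1])]   # sort by y first
    out = []
    for r in range(rows):
        row = sorted_pts[r * cols:(r + 1) * cols]     # one row of corners
        out.append(row[np.argsort(row[:, 0])])        # sort that row by x
    return np.vstack(out)
```

For a shuffled 2x3 grid of corners, the function recovers the canonical left-to-right, top-to-bottom ordering.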
Specifically, in substep S53, the present disclosure performs monocular calibration of the main and sub-cameras based on a self-developed dual-camera calibration APK and algorithm. An APK (Android application package) is the application package file format used by the Android operating system to distribute and install mobile applications and middleware. An algorithm is an accurate and complete description of a solution, a series of clear problem-solving instructions; it represents a systematic strategy for solving a problem. The dual-camera calibration APK and algorithm are developed as follows:

1) Developing the dual-camera calibration APK

Usual dual-camera calibration uses a third-party APK and algorithm, which limits development, optimization, and application. In the present disclosure, the dual-camera calibration solution uses a self-developed APK and algorithm, which offers great flexibility in development, optimization, and application (for example, convenient switching between dual-camera calibrations of different modules, such as from "main + depth" to "main + wide-angle"), maximizing optimization and widening the scope of application. In principle it can also be used for various other dual-camera combinations (e.g., rear main + wide-angle, rear main + macro).

This work is mainly used for dual-camera calibration of the rear main camera and depth camera modules of a certain smartphone. As shown in Figure 8, the user interface (UI) of the dual-camera calibration APK is first designed according to the calibration requirements; Figure 9 shows a photograph of the actual UI. In Figures 8-9 the interface defaults to landscape display and has four main parts:

1. A message text box: when the START button is not pressed, it shows the default message "Dual Camera Calibration"; when START is pressed, it shows the returned success or failure message according to the result computed by the background dual-camera calibration algorithm.

2. Main and sub preview windows, which display the main-camera and sub-camera checkerboard calibration-board previews and stay in the foreground for the whole run of the APK.

3. A START capture button: after it is pressed, the current main and sub-camera capture data are grabbed and the dual-camera calibration algorithm runs in the background until the message text box shows a success or failure message.

4. A label text box, which always shows default text labels while the APK runs; in this solution it shows the three lines "DualCamCalib V1.0.1", "ALG:1.0.1", "Z00667AA2".

Next, according to the UI design, the APK framework and the core algorithm code are developed. The core code of the APK has two layers: the java layer and the cpp layer. The APK-framework code lives mainly in the java layer, which implements: invoking the main and sub preview windows and displaying the previews in real time; after START is pressed, grabbing the main and sub-camera capture data, saving them as pictures, and calling the cpp-layer dual-camera calibration algorithm; and, after the algorithm finishes, showing the calibration result in the message text box. For data exchange between the java and cpp layers, a jni (Java Native Interface) interface that passes and fetches data by key-value, based on the bundle mechanism, was developed.

2) Developing the dual-camera calibration algorithm

The core code of the dual-camera calibration algorithm lives mainly in the cpp layer. This part mainly describes the pinhole camera model and image correction; the other techniques of the core algorithm are expanded in the subsequent description of the calibration flow.
a. Pinhole camera model

Smartphone photography is in fact an optical imaging process, and the pinhole camera model is the model most used for camera imaging. As shown in Figures 10-11, imaging under the pinhole model involves four coordinate systems — the world, camera, image, and pixel coordinate systems — and the transformations among them.

World coordinate system (x_w, y_w, z_w): the reference frame for target object positions. Apart from infinity, world coordinates can be placed freely for computational convenience; the unit is a length unit such as mm. In binocular vision the world coordinate system has three main uses: determining the position of the calibration object during calibration; serving as the system reference frame of binocular vision, giving each camera's relation to the world frame and hence the relative relation between the cameras; and serving as the container for reconstructed three-dimensional coordinates. It is the first stop for bringing observed objects into computation.

Camera coordinate system (X_c, Y_c, Z_c): the coordinate system in which the camera measures objects from its own viewpoint. Its origin is at the camera's optical center and its z axis is parallel to the optical axis. It is the bridgehead connecting to the photographed object: objects in world coordinates first undergo a rigid transformation into camera coordinates before relating to image coordinates. It links image coordinates with world coordinates; the unit is a length unit such as mm.

Image coordinate system (x, y): with the center of the CMOS image plane as origin, introduced to describe the perspective projection from camera coordinates to image coordinates during imaging, making it convenient to further obtain pixel coordinates. The image coordinate system expresses a pixel's position in the image in physical units (e.g., millimeters).

Pixel coordinate system (u, v): with the top-left vertex of the CMOS image plane as origin, introduced to describe the coordinates of an imaged point on the digital image (photograph); it is the coordinate system of the information actually read from the camera. The pixel coordinate system is the image coordinate system in pixel units.

The transformation among the four coordinate systems of the pinhole model is given by equation (1):

Z_c [u, v, 1]^T = M [R, T; 0^T, 1] [x_w, y_w, z_w, 1]^T    Equation (1)

Let the intrinsic matrix be

M = [f k_u, 0, u_0; 0, f k_v, v_0; 0, 0, 1]

and the extrinsic matrix be [R, T; 0^T, 1]; the combined matrix P is then

P = M [R, T]    Equation (2)

so equation (1) simplifies to

Z_c [u, v, 1]^T = P [x_w, y_w, z_w, 1]^T    Equation (3)

In equations (1)-(3), (x_w, y_w, z_w) are the coordinates of a point p in the world coordinate system; (u, v) are the corresponding coordinates in the pixel coordinate system; Z_c (u, v, 1) are the corresponding coordinates in the camera coordinate system; (u_0, v_0) is the optical center (principal point) of the camera; f is the focal length in mm; (k_u, k_v) are the pixels per millimeter in the (u, v) directions; R is the rotation matrix with elements r_00 to r_22; and T = [T_x, T_y, T_z]^T is the translation matrix, with translations T_x, T_y, T_z in the x, y, z directions.

Equation (3) gives the transformation from the world coordinate system to the pixel coordinate system (ignoring distortion), where P denotes a projection matrix obtained, per equation (2), as the product of the intrinsic matrix M and the extrinsic matrix.
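Equations (2)-(3) can be sketched directly: build P = M[R T] and apply it to homogeneous world coordinates (the numeric values below are illustrative):

```python
import numpy as np

def projection_matrix(f, ku, kv, u0, v0, R, T):
    """P = M [R | T], the combined intrinsic/extrinsic matrix of equation (2)."""
    M = np.array([[f * ku, 0, u0],
                  [0, f * kv, v0],
                  [0, 0, 1.0]])
    return M @ np.hstack([R, T.reshape(3, 1)])

def world_to_pixel(P, Xw):
    """Equation (3): Z_c [u, v, 1]^T = P [x_w, y_w, z_w, 1]^T."""
    uvw = P @ np.append(Xw, 1.0)
    return uvw[:2] / uvw[2]
```

For example, with f = 1 mm, 1000 px/mm, principal point (500, 500), and identity extrinsics, the world point (0.1, 0, 2) projects to pixel (550, 500).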
b. Image correction

Perspective transformation was mentioned in the transformation from camera coordinates to image coordinates. When taking a picture, the lens projects the real object onto the image plane, but manufacturing and assembly tolerances of the lens introduce distortion, warping the original image, so imaging distortion must be considered. Lens distortion divides mainly into radial and tangential distortion (there is also thin-lens distortion, among others, but none is as significant as radial and tangential distortion, so only these two are considered here).

(a) Radial distortion

As the name implies, radial distortion is distributed along the lens radius. It arises because rays far from the lens center bend more than rays near the center, and it is more pronounced in ordinary inexpensive lenses. Radial distortion mainly comprises barrel distortion and pincushion distortion. Figure 12 shows no distortion, barrel distortion, and pincushion distortion respectively.

Distortion is zero at the center of the image plane and grows toward the edge along the lens radius. The mathematical model can be described with the first few terms of a Taylor series expansion around the principal point; usually the first two terms k_1 and k_2 are used, and for strongly distorting lenses such as fisheyes a third term k_3 and a fourth term k_4 can be added. A point on the imager is adjusted according to its radial position by equations (4)-(5):

x_0 = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8)    Equation (4)
y_0 = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8)    Equation (5)

where (x, y) are the image coordinates before radial or tangential correction, (x_0, y_0) the image coordinates after correction, the radius r = sqrt(x^2 + y^2), and k_1 to k_4 are the radial distortion parameters.

Figure 13 shows the displacement of points at different distances from the optical center after radial lens distortion: the farther from the optical center, the larger the radial displacement and hence the distortion; near the optical center there is almost no displacement.

(b) Tangential distortion

Tangential distortion arises when the lens itself is not parallel to the camera sensor plane (image plane), mostly because of installation deviations when the lens is glued to the lens module. The distortion model can be described with two additional parameters p_1 and p_2, as in equations (6)-(7):

x_0 = x + [2 p_1 y + p_2 (r^2 + 2 x^2)]    Equation (6)
y_0 = y + [2 p_2 x + p_1 (r^2 + 2 y^2)]    Equation (7)

where p_1 and p_2 are the tangential distortion parameters.

Figure 14 shows the tangential distortion of a particular lens. The distortion displacement is roughly symmetric about the line connecting the lower-left and upper-right corners, indicating that the lens is rotated about an axis perpendicular to that direction.
c. Monocular calibration of the main and sub-cameras

Figure 15 shows the dual-camera model provided by the present disclosure ("Translation" in the figure denotes translation). Based on the pinhole camera model above, for the left and right cameras respectively:

Z_c1 [u_1, v_1, 1]^T = P_l [x_w, y_w, z_w, 1]^T
Z_c2 [u_2, v_2, 1]^T = P_r [x_w, y_w, z_w, 1]^T    Equation (8)

where (u_1, v_1) are the coordinates of point p in the left pixel coordinate system and (u_2, v_2) those in the right pixel coordinate system; Z_c1 (u_1, v_1, 1) are the coordinates of point p in the left camera coordinate system and Z_c2 (u_2, v_2, 1) those in the right camera coordinate system; P_l = M_l [R_l, T_l] is the left-camera projection matrix and P_r = M_r [R_r, T_r] is the right-camera projection matrix.

Combined with the image distortion correction technique above, the intrinsic and extrinsic parameters and distortion parameters of the main and sub-cameras can be obtained.
Further, after substep S53, substep S54 is also included:

From the intrinsic and extrinsic parameters and distortion coefficients of the main camera computed in substep S53, the reprojected checkerboard corners are calculated, and the monocular calibration error of the main camera is then computed from the actual three-dimensional corner coordinates and the reprojected corner coordinates. Likewise, from the intrinsic and extrinsic parameters and distortion coefficients of the sub-camera computed in substep S53, the reprojected checkerboard corners are calculated, and the monocular calibration error of the sub-camera is computed from the actual three-dimensional corner coordinates and the reprojected corner coordinates.

Specifically, computing the monocular calibration errors of the main and sub-cameras serves two purposes: as a metric of whether the calibration is accurate, and as a reference for calibration accuracy during debugging.

Specifically, in step S70, the parameters to be calibrated in main/sub-camera stereo calibration are: the main and sub-camera intrinsic parameter matrices, the main and sub-camera distortion coefficient matrices, the rotation matrix R, the translation matrix T, the essential matrix E, and the fundamental matrix F (the intrinsic parameter matrices and distortion coefficient matrices have already been obtained in the preceding monocular calibration).

The main difference between binocular and monocular camera calibration is that the binocular camera must additionally calibrate the relative relationship between the left and right camera coordinate systems, i.e., the rotation matrix R, translation matrix T, essential matrix E, and fundamental matrix F must be obtained, as follows:

1) Rotation matrix R and translation matrix T

As shown in Figure 15, the relative relationship of the two camera coordinate systems is described by the rotation matrix R and translation matrix T; concretely, the world coordinate system is established on the left camera.

Suppose a point P in space has world coordinates P_w(x_w, y_w, z_w); its coordinates in the left and right camera coordinate systems can be expressed as

P_l = R_l P_w + T_l,    P_r = R_r P_w + T_r

P_l and P_r are further related by P_r = R P_l + T. Combining the above, it can be derived that

R = R_r R_l^T,    T = T_r − R T_l    Equation (9)

where P_l and P_r are the coordinates of point P in the left and right camera coordinate systems; R_l and T_l are the rotation and translation matrices of the left camera relative to the calibration object, obtained through monocular calibration; R_r and T_r are those of the right camera; the superscript "-1" denotes the matrix inverse and the superscript "T" the matrix transpose.

By performing monocular calibration of the left and right cameras separately, R_l, T_l, R_r, and T_r are obtained; substituting them into equation (9) yields the rotation matrix R and translation matrix T between the left and right cameras.

Note that the derivation above uses R^{-1} = R^T; this is because R, R_l, and R_r are all orthonormal matrices, and the inverse of an orthogonal matrix equals its transpose. All parameters calibrated for a monocular camera must also be calibrated for the binocular camera; the extra binocular parameters — the rotation matrix R and translation matrix T — describe the relative pose of the two cameras and are very useful in stereo rectification and epipolar geometry.
2) Essential matrix E and fundamental matrix F

a. Epipolar geometry

To solve for the essential matrix E and the fundamental matrix F, epipolar geometry must be discussed first.

As shown in Figure 16, the intersection of the line connecting the projection centers with a projective plane is called an epipole, and the dashed line connecting a projection point and the epipole is called an epipolar line. The two epipolar lines are obviously coplanar; the plane containing them is called the epipolar plane.

By deduction from geometric axioms (obvious here, hence the derivation is omitted), the following conclusions hold:

Every three-dimensional point in the cameras' view (point P in the figure) is contained in an epipolar plane that intersects each image; the line produced by the intersection of the image plane and the epipolar plane is the epipolar line.

Given a feature point in one image, its match in the other image must lie on the corresponding epipolar line; this constraint is called the epipolar constraint.

The epipolar constraint means that, once the epipolar geometry of the stereo rig is known, the two-dimensional search for matching features between the two images reduces to a one-dimensional search along the epipolar lines (when applying the triangulation principle). This not only greatly saves computation but also helps exclude many points that could produce false matches.

b. Solving for the essential matrix E and the fundamental matrix F

As shown in Figure 17, the essential matrix E contains the information about the translation (Translation) and rotation (Rotation) of the two cameras in physical space. Besides containing the same information, the fundamental matrix F also contains the intrinsic parameters of the two cameras and can relate the two cameras in the pixel coordinate system.

The derivation of the essential and fundamental matrices follows; the idea is to use the known epipolar plane to link everything related to E and F.

According to Figure 16, taking the left camera coordinate system as reference, the corresponding points p̂_l and p̂_r on the left and right projection planes, the translation vector T, and the rotation matrix R satisfy

p̂_r = R (p̂_l − T)    Equation (13)

where p̂_l and p̂_r are the vector forms of the left and right projections of a space point P on the two projection planes, R is the rotation from a left projection point to the corresponding right projection point, and T the corresponding translation; a superscript "-1" denotes the inverse and a superscript "T" the transpose of a matrix.

Let n̂ be the normal vector of an epipolar plane, x̂ the vector of an arbitrary point on the plane, and â the vector of a fixed point on the plane; since the normal is perpendicular to the plane,

(x̂ − â) · n̂ = 0    Equation (14)

The normal vector can be constructed by a cross product (a cross product yields a perpendicular vector), so, with the fixed point T and the arbitrary point p̂_l, equation (14) can be rewritten as

(p̂_l − T)^T (T × p̂_l) = 0    Equation (15)

Using equation (13), equation (15) becomes

(R^T p̂_r)^T (T × p̂_l) = 0    Equation (16)

Writing the cross product as a matrix multiplication, T × p̂_l = S p̂_l with

S = [0, −T_z, T_y; T_z, 0, −T_x; −T_y, T_x, 0]    Equation (17)

where S is the skew-symmetric matrix form of the translation matrix T. Substituting equation (17) back into (16) gives an important conclusion:

p̂_r^T R S p̂_l = 0    Equation (18)

The product R · S is precisely the definition of the essential matrix (E = R S), yielding the final conclusion

p̂_r^T E p̂_l = 0    Equation (20)
实际使用中,需要的是投影平面上的观察到的点,可以利用投影公式:
Figure PCTCN2021140186-appb-000032
Figure PCTCN2021140186-appb-000033
则最终结论式(20)的可以变成实际使用的结论:
Figure PCTCN2021140186-appb-000034
其中,
Figure PCTCN2021140186-appb-000035
分别为左右图像坐标系对应投影点坐标的向量形式,f l、f r为左右相机焦距,z l、z r分别为左右相机坐标系对应投影点z分量的值。
看似可以通过本征矩阵E来将左边的成像点映射到另一边,但本征矩阵E是一个秩亏矩阵(一个秩为2的3x3矩阵),所以只能将一个点映射到一条直线上。
本征矩阵E包含两个摄像机相关的所有几何信息,但不包括相机的内部参数。上面推导中的向量
Figure PCTCN2021140186-appb-000036
只是几何意义上的点,通过下式跟像素点关联起来:
Figure PCTCN2021140186-appb-000037
其中,
Figure PCTCN2021140186-appb-000038
为图像坐标系坐标的向量形式,
Figure PCTCN2021140186-appb-000039
Figure PCTCN2021140186-appb-000040
对应的像素坐标系坐标的向量形式,M为内参矩阵。
将式(22)对应的主副摄公式代入式(21)，可以得到：

q r T·(M r -1) T·E·M l -1·q l=0　　式(23)

式(23)中间部分即为基础矩阵F：

F=(M r -1) T·E·M l -1　　式(24)

实际使用中包含基础矩阵F的公式为：

q r T·F·q l=0　　式(25)

其中，q l、q r分别为左右像素坐标系对应投影点坐标的向量形式；M l、M r分别为左右相机的内参矩阵。
基础矩阵F和本征矩阵E的差别在于：基础矩阵F在图像像素坐标系中操作，本征矩阵E在物理坐标系中操作。类似于本征矩阵E，基础矩阵F也是一个秩为2的3×3矩阵。
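F与E、内参矩阵之间的关系F=(M r -1) T·E·M l -1，以及像素坐标下的对极约束q r T·F·q l=0，同样可以数值验证。以下草图中的内外参数值均为演示用假设：

```python
import numpy as np

def skew(t):
    """由向量t构造反对称矩阵"""
    tx, ty, tz = t
    return np.array([[0.0, -tz, ty], [tz, 0.0, -tx], [-ty, tx, 0.0]])

# 演示用外参：绕y轴小角度旋转 + 水平平移
th = np.deg2rad(5.0)
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0,        1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
T = np.array([0.1, 0.0, 0.0])
E = R @ skew(T)                     # 本征矩阵 E = R·S

# 演示用内参矩阵（焦距、主点均为假设值）
M_l = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
M_r = np.array([[810.0, 0.0, 330.0], [0.0, 810.0, 250.0], [0.0, 0.0, 1.0]])

# 基础矩阵 F = (M_r^-1)^T·E·M_l^-1
F = np.linalg.inv(M_r).T @ E @ np.linalg.inv(M_l)

# 构造一对对应点：P_l为左相机坐标系下的点，P_r = R·(P_l - T)
P_l = np.array([0.2, 0.1, 3.0])
P_r = R @ (P_l - T)

# 投影到齐次像素坐标 q = M·(P/Z)
q_l = M_l @ (P_l / P_l[2])
q_r = M_r @ (P_r / P_r[2])

# 验证像素坐标下的对极约束 q_r^T·F·q_l ≈ 0
residual = q_r @ F @ q_l
print(abs(residual))
```

由于像素点只是几何点经内参矩阵的线性变换，对极约束在像素坐标系中同样成立。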
具体的,参考图18,在步骤S70之后还包括步骤S90,采用Bouguet算法对主副摄像机进行立体校正。
前面通过单目标定和立体标定获得的主副摄内参矩阵、主副摄畸变矩阵、外参矩阵、本征矩阵和基础矩阵，目的是确定空间物体表面某点在三维空间中的几何位置与其在图像中对应点之间的相互关系。本环节在此基础上，通过立体矫正得到一个近似理想的立体装置模型，再通过三角测量来计算视差和深度信息。下面分别介绍三角测量和立体矫正。
1)三角测量
如图19所示为一个理想的双摄立体装置：图像没有畸变，图像平面共面，光轴平行，焦距相同，主点已经被校准到应有的位置。此外，假设摄像机前向平行排列，即一台摄像机的每一个像素行都和另一台摄像机中对应的像素行精确地对齐，并且物理世界中的一点P在左右图像中分别有成像点。
那么，通过相似三角形的比例关系就可以得到深度Z：

Z=f·B/d=f·B/(x l-x r)　　式(26)

其中，x l、x r为空间某点P分别在左右图像坐标系中对应投影点的横坐标，d=x l-x r为视差，B为基线（左右相机投影中心之间的距离），f为焦距，Z为深度。
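三角测量的深度公式Z=f·B/d可以直接写成一小段代码（焦距、基线和投影点横坐标均为假设的示例值）：

```python
import numpy as np

f = 800.0        # 焦距（像素）
B = 0.06         # 基线长度（米）

x_l = np.array([352.0, 400.0, 336.0])   # 左图投影点横坐标
x_r = np.array([320.0, 384.0, 328.0])   # 右图对应点横坐标

d = x_l - x_r                 # 视差 d = x_l - x_r
Z = f * B / d                 # 深度 Z = f·B/d
print(Z)                      # 视差越大，深度越小
```

可见深度与视差成反比，这也是双目测距中远处物体深度分辨率较低的原因。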
2)立体校正
如图15所示，实际双摄装置不存在理想的前向平行对准，所以立体矫正的目的就是：将两个摄像机的像平面重新映射，使两者位于完全相同的平面上，并使图像行达到前向平行对准。这样就得到一个近似理想的立体装置（如图20所示），以便用三角测量计算视差和深度信息。其中，Principal Ray为主射线。
a.Bouguet算法
常见的立体矫正算法有两种：(1)Hartley算法，只使用基础矩阵，用于非标定立体视觉；(2)Bouguet算法，使用两个已标定摄像机的旋转和平移参数。实际中一般使用Bouguet算法，下面只讨论Bouguet算法。
由前述立体标定可知，旋转矩阵R、平移矩阵T与各自单目标定的旋转矩阵R l、R r和平移矩阵T l、T r的关系如下：

R=R r·R l -1　　式(27)

T=T r-R·T l　　式(28)
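式(28)中的关系可以这样理解：在P=R·X+T形式的外参约定下（此为常见约定，仅作演示假设），右相机坐标可由左相机坐标经相对变换得到。下面的NumPy草图用假设的单目外参数值验证R=R r·R l -1、T=T r-R·T l的一致性：

```python
import numpy as np

def rot_z(a):
    """绕z轴旋转a弧度的旋转矩阵"""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# 假设的左右相机单目标定外参（约定 P = R·X + T）
R_l, T_l = rot_z(0.05), np.array([0.01, 0.02, 0.0])
R_r, T_r = rot_z(-0.03), np.array([-0.09, 0.01, 0.005])

# 双摄相对外参
R = R_r @ np.linalg.inv(R_l)      # R = R_r·R_l^-1
T = T_r - R @ T_l                 # T = T_r - R·T_l

# 验证：对任意世界点X，P_r = R·P_l + T
X = np.array([0.5, -0.3, 2.5])
P_l = R_l @ X + T_l
P_r = R_r @ X + T_r
print(np.allclose(P_r, R @ P_l + T))   # True
```

即相对外参(R, T)把任意世界点在左相机坐标系中的坐标变换到右相机坐标系，与各自单目外参完全一致。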
Bouguet算法的具体步骤如下:
S901：将立体标定得到的右图像平面相对于左图像平面的旋转矩阵R切成两半，左图像旋转一半r l，右图像旋转一半r r；这样重投影畸变会比较小，左右视图的共同面积最大。其中：

r l=R 1/2，r r=R -1/2

即r l与r r互为逆矩阵，各承担旋转矩阵R的一半旋转，r l、r r叫做左右相机的合成旋转矩阵。
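S901中"切成两半"可以用轴角（Rodrigues）表示实现：保持R的旋转轴不变、角度减半，得到互为逆矩阵的r l与r r，此时校正后两图像平面的相对旋转为单位矩阵。以下为演示草图（半旋转向左右相机的分配方向为常见约定，具体实现可能不同）：

```python
import numpy as np

def axis_angle(R):
    """从旋转矩阵提取旋转轴和角度（假设0<角度<pi）"""
    angle = np.arccos((np.trace(R) - 1.0) / 2.0)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
    return axis, angle

def from_axis_angle(axis, angle):
    """Rodrigues公式：由轴角构造旋转矩阵"""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# 假设立体标定得到的右相机相对左相机的旋转R（绕y轴转20度）
R = from_axis_angle(np.array([0.0, 1.0, 0.0]), np.deg2rad(20.0))

# 切成两半：左、右各旋转一半，方向相反
axis, angle = axis_angle(R)
r_l = from_axis_angle(axis, angle / 2.0)       # r_l = R^(1/2)
r_r = from_axis_angle(axis, -angle / 2.0)      # r_r = R^(-1/2) = r_l^-1

# 校正后两图像平面的相对旋转应为单位矩阵
print(np.allclose(r_r @ R @ np.linalg.inv(r_l), np.eye(3)))   # True
```

两边各旋转半程而不是让一边旋转全程，正是为了使重投影畸变较小、左右视图共同面积最大。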
S902:此时左右相机的成像面达到平行,但是极线没有平行对准。为了使极线平行对准,我们构造单位正交变换矩阵R rect,将左相机极点变换到无穷远并使极线平行对准。
设一个可以将极点变换到无穷远的构造单位正交变换矩阵R rect如下：

R rect=
[e 1 T
e 2 T
e 3 T]　　式(29)

其中，e 1、e 2、e 3为构造单位正交变换矩阵R rect的三组单位向量，分别作为R rect的三行。
左右相机的投影中心之间的平移向量就是左极点方向，则构造e 1为：

e 1=T/‖T‖　　式(30)
矢量e 2应该和e 1正交，选择沿着图像平面且正交于光轴的方向比较好：

e 2=(-T y, T x, 0) T/√(T x²+T y²)　　式(31)
矢量e 3用叉乘构造就可以得到：

e 3=e 1×e 2
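S902中R rect的构造可以直接写成代码，并数值验证它把左极点方向（平移向量T的方向）变换到x轴上，从而把极点移到无穷远。T取演示用假设值：

```python
import numpy as np

T = np.array([-0.1, 0.004, 0.002])   # 假设的左右相机平移向量

e1 = T / np.linalg.norm(T)                                # 极点方向
e2 = np.array([-T[1], T[0], 0.0]) / np.hypot(T[0], T[1])  # 沿图像平面且正交于光轴
e3 = np.cross(e1, e2)                                     # e3 = e1×e2

R_rect = np.vstack([e1, e2, e3])     # 三行分别为e1^T、e2^T、e3^T

# R_rect为单位正交矩阵
print(np.allclose(R_rect @ R_rect.T, np.eye(3)))   # True
# 极点方向被变换到x轴：R_rect·e1 = [1, 0, 0]^T
print(np.allclose(R_rect @ e1, [1.0, 0.0, 0.0]))   # True
```

极点方向与x轴（像素行方向）重合后，极线变为水平直线，极点在图像平面上的投影即被移到无穷远。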
S903：单位正交变换矩阵R rect构造完毕后，使用下面的矩阵对图像平面进行变换就可以实现行对准了。立体矫正最终得到的旋转矩阵如下：

R l′=R rect·r l　　式(32)

R r′=R rect·r r　　式(33)
其中,r l、r r为左右相机的合成旋转矩阵,R l′、R r′分别为立体矫正后的左右相机最终的旋转矩阵。
b.重投影矩阵
双摄立体矫正除了计算最终的旋转矩阵R l′和R r′外，在三维重建等应用中还会计算主副摄的投影矩阵、主副摄有效矩形区域、重投影矩阵等。这里只对重投影矩阵的计算进行简单推导。
重投影矩阵可以将图像平面中的二维点映射回物理世界中的三维坐标。在图19中，根据三角测量和三角形相似定理可得：

x=f·X/Z+c x，y=f·Y/Z+c y　　式(34)

其中，(X,Y,Z)为世界坐标系中的坐标，(x,y)为图像坐标系中的坐标，(c x,c y)为相机的光学中心（主点，principal point）。

设视差为d，将重投影变换以齐次坐标[x y d 1] T表示，则有：

Q·[x y d 1] T=[X' Y' Z' W'] T　　式(35)

其中，(X',Y',Z')为未归一化的重投影坐标，W为变换过程中的归一化参数，W'为最终的归一化参数；c' x为右图像中光心在x方向上的坐标；重投影矩阵Q定义为：

Q=
[1　0　0　-c x
0　1　0　-c y
0　0　0　f
0　0　-1/T x　(c x-c' x)/T x]　　式(36)

其中，T x为主副摄投影中心之间的平移分量（基线）。
最后可得归一化的重投影三维坐标(X'/W',Y'/W',Z'/W')。
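重投影矩阵Q的作用可以用以下草图演示：由像素坐标(x,y)和视差d恢复三维坐标，并验证深度满足Z=f·B/d。各参数均为演示用假设值；按OpenCV的常见约定取T x=-B，且令c x=c' x：

```python
import numpy as np

f = 800.0                 # 焦距（像素）
B = 0.06                  # 基线（米）
cx, cy = 320.0, 240.0     # 左相机主点
cx_r = 320.0              # 右相机主点x坐标（c'_x），这里取与左相机相同
Tx = -B                   # OpenCV约定：右相机相对左相机的平移为负

# 重投影矩阵Q
Q = np.array([[1.0, 0.0, 0.0,       -cx],
              [0.0, 1.0, 0.0,       -cy],
              [0.0, 0.0, 0.0,        f],
              [0.0, 0.0, -1.0 / Tx, (cx - cx_r) / Tx]])

# 由(x, y, d, 1)得到未归一化坐标(X', Y', Z', W')
x, y, d = 352.0, 264.0, 16.0
Xp, Yp, Zp, Wp = Q @ np.array([x, y, d, 1.0])

X, Y, Z = Xp / Wp, Yp / Wp, Zp / Wp   # 归一化的三维坐标
print(Z)                               # 应等于 f·B/d
```

可以看到重投影得到的深度Z与三角测量公式Z=f·B/d一致，说明Q把视差图逐像素映射成了三维点云。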
应该理解的是,虽然图3、7、18的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图3、7、18中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
如图21所示,作为另一方面,本公开提供了一种双目摄像机的标定系统100,包括:
标定板设计模块110,配置成设计以棋盘格为标定板;
图像获取模块120,配置成将双目摄像机划分为主摄像机和副摄像机,采用所述主摄像机和副摄像机分别采集标定板上的棋盘格图片,分别对应得到主摄图片和副摄图片;
单目标定模块130,配置成根据主摄图片进行主摄像机单目标定,获得主摄像机内外参数矩阵和主摄像机畸变系数矩阵;并配置成根据所述副摄图片进行副摄像机单目标定,获得副摄像机内外参数矩阵和副摄像机畸变系数矩阵;
立体标定模块140，配置成进行所述主摄像机和所述副摄像机的立体标定，获得旋转矩阵R、平移矩阵T、本征矩阵E以及基础矩阵F。
本公开的实施例中,通过在立体标定模块之前设置单目标定模块,在双摄立体标定之前先对主副摄像机分别进行单目标定获取每个单目摄像机的内外参和畸变参数,这样得到的内参误差小、双摄标定结果也相对更稳定。
具体的，所述标定板设计模块110配置成设计的标定板上包含至少四幅棋盘格，所述标定板中每幅棋盘格的角度方向和位置姿态均不相同。通过标定板设计模块110设计的标定板，一次可抓拍四幅以上的棋盘格进行双摄标定，提高了标定效率并降低了标定失败的风险。
具体的，图像获取模块120配置成，当所述双目摄像机的视场角不同时，将双目摄像机中大视场角的摄像机划分为副摄像机，将小视场角的摄像机划分为主摄像机；以所述副摄像机为参照采集标定板上的棋盘格图片，将所述副摄像机采集的标定板上的棋盘格图片作为副摄图片，将所述主摄像机采集的标定板上的棋盘格图片作为主摄图片。通过图像获取模块，大FOV的摄像机采集的图片既能保持预定的图样尺寸，又能使棋盘格尽可能占满图片；小FOV的摄像机采集的图片虽不能保持预定的图样尺寸，但棋盘格同样尽可能占满了图片。
具体的,参考图22,所述单目标定模块130包括:
采集单元131，配置成用基于生长的棋盘格角点检测算法分别检测采集的主摄图片和副摄图片中的棋盘格和棋盘格角点。
重新排序单元132，配置成将检测到的所述主摄图片和副摄图片中的棋盘格和棋盘格角点分别进行重新排序，对应得到主摄图片的预定样式序列的棋盘格和棋盘格角点，以及副摄图片的预定样式序列的棋盘格和棋盘格角点。
双摄标定算法单元133,配置成将所述主摄图片的预定样式序列的棋盘格和棋盘格角点规整对齐,然后计算主摄像机的内外参数和畸变系数,获得主摄内外参数矩阵和主摄畸变系数矩阵。
所述双摄标定算法单元133，还配置成将所述副摄图片的预定样式序列的棋盘格和棋盘格角点规整对齐，然后计算副摄像机的内外参数和畸变系数，获得副摄内外参数矩阵和副摄畸变系数矩阵。
进一步的，所述单目标定模块还包括单目标定误差计算单元，所述单目标定误差计算单元配置成根据主摄像机的内外参数和畸变系数计算棋盘格角点的重投影角点，然后根据实际三维角点坐标和重投影角点坐标计算主摄像机单目标定误差。
所述单目标定误差计算单元还配置成根据副摄像机的内外参数和畸变系数计算棋盘格角点的重投影角点，然后根据实际三维角点坐标和重投影角点坐标计算副摄像机单目标定误差。
具体的,参考图23,所述双目摄像机的标定系统还包括立体校正模块150,所述立体校正模块配置成采用Bouguet算法对主副摄像机进行立体校正。具体的Bouguet算法参见上述双目摄像机标定方法中的描述。
上述双目摄像机的标定系统中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的一个或多个处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于一个或多个处理器调用执行以上各个模块对应的操作。
在一个实施例中，提供了一种电子设备，该电子设备可以是终端，其内部结构图可以如图24所示。该电子设备包括通过系统总线连接的处理器、存储器、通信接口、显示屏和输入装置。其中，该电子设备的处理器用于提供计算和控制能力。该电子设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机可读指令。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该电子设备的通信接口用于与外部的终端进行有线或无线方式的通信，无线方式可通过WIFI、运营商网络、近场通信（NFC）或其他技术实现。该计算机可读指令被处理器执行时，实现一种双目摄像机的标定方法。该电子设备的显示屏可以是液晶显示屏或者电子墨水显示屏，该电子设备的输入装置可以是显示屏上覆盖的触摸层，也可以是电子设备外壳上设置的按键、轨迹球或触控板，还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图24中示出的结构,仅仅是与本公开方案相关的部分结构的框图,并不构成对本公开方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中，本公开提供的双目摄像机的标定系统可以实现为一种计算机可读指令的形式，计算机可读指令可在如图24所示的计算机设备上运行。计算机设备的存储器中可存储组成该双目摄像机的标定系统的各个程序模块，比如，图21所示的标定板设计模块、图像获取模块、单目标定模块和立体标定模块。各个程序模块构成的计算机可读指令使得一个或多个处理器执行本说明书中描述的本公开各个实施例的双目摄像机的标定方法中的步骤。
例如,图24所示的计算机设备可以通过如图21所示的双目摄像机的标定系统中的标定板设计模块执行步骤S10。计算机设备可通过图像获取模块执行步骤S30。计算机设备可通过单目标定模块执行步骤S50。计算机设备可通过立体标定模块执行步骤S70。
在一个实施例中,还提供了一种电子设备,包括存储器和一个或多个处理器,存储器中存储有计算机可读指令,该一个或多个处理器执行计算机可读指令时实现上述各方法实施例中的步骤。
在一个实施例中,还提供了一个或多个存储有计算机可读指令的非易失性计算机可读存储介质,计算机可读指令被一个或多个处理器执行时实现上述各方法实施例中的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本公开所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,比如静态随机存取存储器(Static Random Access Memory,SRAM)和动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本公开的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本公开构思的前提下,还可以做出若干变形和改进,这些都属于本公开的保护范围。因此,本公开专利的保护范围应以所附权利要求为准。
工业实用性
本公开在双摄立体标定之前先对主副摄像机分别进行单目标定获取每个单目摄像机的内外参和畸变参数,得到的内参误差小、双摄标定结果也相对更稳定。且改进了异构双摄标定,使得本公开提供的方法对同构、尤其异构的双摄标定都适用,具有很强的工业实用性。

Claims (20)

  1. 双目摄像机的标定方法,其特征在于,包括以下步骤:
    S10:以棋盘格为标定板;
    S30:将双目摄像机划分为主摄像机和副摄像机,采用所述主摄像机和副摄像机分别采集标定板上的棋盘格图片,分别对应得到主摄图片和副摄图片;
    S50:根据所述主摄图片进行主摄像机单目标定,获得主摄像机内外参数矩阵和主摄像机畸变系数矩阵;
    根据所述副摄图片进行副摄像机单目标定,获得副摄像机内外参数矩阵和副摄像机畸变系数矩阵;
    S70:再分别进行所述主摄像机和所述副摄像机的立体标定,获得旋转矩阵R、平移矩阵T、本征矩阵E以及基础矩阵F。
  2. 根据权利要求1所述的双目摄像机的标定方法，其中，步骤S10中，所述标定板上包含至少四幅棋盘格，所述标定板中每幅棋盘格的角度方向和位置姿态均不相同。
  3. 根据权利要求1所述的双目摄像机的标定方法,其中,步骤S30中,当所述双目摄像机的视场角不同时,将双目摄像机中大视场角的摄像机划分为副摄像机,将小视场角的摄像机划分为主摄像机;
    以所述副摄像机为参照采集标定板上的棋盘格图片,将所述副摄像机采集的标定板上的棋盘格图片作为副摄图片,将所述主摄像机采集的标定板上的棋盘格图片作为主摄图片。
  4. 根据权利要求1所述的双目摄像机的标定方法,其中,步骤S50包含以下子步骤:
    S51:用基于生长的棋盘格角点检测算法分别检测采集的主摄图片和副摄图片中的棋盘格和棋盘格角点;
    S52:将检测到的所述主摄图片和副摄图片中的棋盘格和棋盘格角点分别进行重新排序,对应得到主摄图片的预定样式序列的棋盘格和棋盘格角点,以及副摄图片的预定样式序列的棋盘格和棋盘格角点;
    S53:将所述主摄图片的预定样式序列的棋盘格和棋盘格角点规整对齐,然后计算主摄像机的内外参数和畸变系数,获得主摄内外参数矩阵和主摄畸变系数矩阵;
    将所述副摄图片的预定样式序列的棋盘格和棋盘格角点规整对齐，然后计算副摄像机的内外参数和畸变系数，获得副摄内外参数矩阵和副摄畸变系数矩阵。
  5. 根据权利要求4所述的双目摄像机的标定方法,其中,子步骤S53中,采用双摄标定算法分别计算所述主摄像机的内外参数和畸变系数,以及计算所述副摄像机的内外参数和畸变系数;所述畸变系数包含径向畸变系数和切向畸变系数。
  6. 根据权利要求1所述的双目摄像机的标定方法,其特征在于,在步骤S70之后还包括步骤S90,采用Bouguet算法对主副摄像机进行立体校正。
  7. 根据权利要求4所述的双目摄像机的标定方法,其中,在步骤S53之后,还包括:
    S54：根据所述主摄像机的内外参数和畸变系数计算棋盘格角点的重投影角点，根据实际三维角点坐标和重投影角点坐标计算主摄像机单目标定误差；
    根据所述副摄像机的内外参数和畸变系数计算棋盘格角点的重投影角点，根据实际三维角点坐标和重投影角点坐标计算副摄像机单目标定误差。
  8. 根据权利要求4所述的双目摄像机的标定方法,其中,所述将检测到的所述主摄图片和副摄图片中的棋盘格和棋盘格角点分别进行重新排序,包括:
    对检测到的所述主摄图片和副摄图片中的棋盘格和棋盘格角点从左到右、从上到下重新进行排序。
  9. 根据权利要求2所述的双目摄像机的标定方法,其中,所述标定板中每幅棋盘格的角度方向和位置姿态均不相同,包括:
    在所述标定板中包含四幅棋盘格的情况下，所述四幅棋盘格分别位于所述标定板的左上角、右上角、左下角和右下角；
    其中，所述左上角的棋盘格保持水平垂直，以所述左上角的棋盘格为参照，所述右上角的棋盘格沿自身中轴向右旋转第一预设角度，所述左下角的棋盘格沿自身中轴向左旋转第二预设角度，所述右下角的棋盘格沿自身中轴向上旋转第三预设角度。
  10. 双目摄像机的标定系统,其特征在于,包括:
    标定板设计模块,配置成设计以棋盘格为标定板;
    图像获取模块,配置成将双目摄像机划分为主摄像机和副摄像机,采用所述主摄像机和副摄像机分别采集标定板上的棋盘格图片,分别对应得到主摄图片和副摄图片;
    单目标定模块,配置成根据所述主摄图片进行主摄像机单目标定,获得主摄像机内外参数矩阵和主摄像机畸变系数矩阵;并配置成根据所述副摄图片进行副摄像机单目标定,获得副摄像机内外参数矩阵和副摄像机畸变系数矩阵;
    立体标定模块,配置成进行所述主摄像机和所述副摄像机的立体标定,获得旋转矩阵R、平移矩阵T、本征矩阵E以及基础矩阵F。
  11. 根据权利要求10所述的双目摄像机的标定系统，其中，所述标定板设计模块，配置成设计的标定板上包含至少四幅棋盘格，所述标定板中每幅棋盘格的角度方向和位置姿态均不相同。
  12. 根据权利要求10所述的双目摄像机的标定系统,其中,所述图像获取模块配置成,当所述双目摄像机的视场角不同时,将双目摄像机中大视场角的摄像机划分为副摄像机,将小视场角的摄像机划分为主摄像机;以所述副摄像机为参照采集标定板上的棋盘格图片,将所述副摄像机采集的标定板上的棋盘格图片作为副摄图片,将所述主摄像机采集的标定板上的棋盘格图片作为主摄图片。
  13. 根据权利要求10所述的双目摄像机的标定系统,其中,所述单目标定模块包括:
    采集单元,配置成基于生长的棋盘格角点检测算法分别检测采集的主摄图片和副摄图片中的棋盘格和棋盘格角点;
    重新排序单元,配置成将检测到的所述主摄图片和副摄图片中的棋盘格和棋盘格角点分别进行重新排序,对应得到主摄图片的预定样式序列的棋盘格和棋盘格角点,以及副摄图片的预定样式序列的棋盘格和棋盘格角点;
    双摄标定算法单元，配置成将所述主摄图片的预定样式序列的棋盘格和棋盘格角点规整对齐，然后计算主摄像机的内外参数和畸变系数，获得主摄内外参数矩阵和主摄畸变系数矩阵；
    所述双摄标定算法单元，还配置成将所述副摄图片的预定样式序列的棋盘格和棋盘格角点规整对齐，然后计算副摄像机的内外参数和畸变系数，获得副摄内外参数矩阵和副摄畸变系数矩阵。
  14. 根据权利要求13所述的双目摄像机的标定系统,其中,所述双摄标定算法单元配置成分别计算所述主摄像机的内外参数和畸变系数,以及计算所述副摄像机的内外参数和畸变系数;所述畸变系数包含径向畸变系数和切向畸变系数。
  15. 根据权利要求10所述的双目摄像机的标定系统,其中,还包括立体校正模块,所述立体校正模块配置成采用Bouguet算法对主副摄像机进行立体校正。
  16. 根据权利要求13所述的双目摄像机的标定系统，其中，所述单目标定模块还包括单目标定误差计算单元，所述单目标定误差计算单元配置成根据主摄像机的内外参数和畸变系数计算棋盘格角点的重投影角点，根据实际三维角点坐标和重投影角点坐标计算主摄像机单目标定误差；所述单目标定误差计算单元还配置成根据副摄像机的内外参数和畸变系数计算棋盘格角点的重投影角点，根据实际三维角点坐标和重投影角点坐标计算副摄像机单目标定误差。
  17. 根据权利要求13所述的双目摄像机的标定系统,其中,所述重新排序单元,具体配置成对检测到的所述主摄图片和副摄图片中的棋盘格和棋盘格角点从左到右、从上到下重新进行排序。
  18. 根据权利要求11所述的双目摄像机的标定系统，其中，在所述标定板设计模块，配置成设计的标定板上包含四幅棋盘格的情况下，所述四幅棋盘格分别位于所述标定板的左上角、右上角、左下角和右下角；其中，所述左上角的棋盘格保持水平垂直，以所述左上角的棋盘格为参照，所述右上角的棋盘格沿自身中轴向右旋转第一预设角度，所述左下角的棋盘格沿自身中轴向左旋转第二预设角度，所述右下角的棋盘格沿自身中轴向上旋转第三预设角度。
  19. 一种电子设备,包括存储器和一个或多个处理器,所述存储器存储有计算机可读指令,其特征在于,所述一个或多个处理器执行所述计算机可读指令时实现权利要求1-9中任一项所述的双目摄像机的标定方法的步骤。
  20. 一个或多个存储有计算机可读指令的非易失性计算机可读存储介质,其特征在于,所述计算机可读指令被一个或多个处理器执行时实现权利要求1-9中任一项所述的双目摄像机的标定方法的步骤。
PCT/CN2021/140186 2021-09-24 2021-12-21 双目摄像机的标定方法、系统、电子设备和存储介质 WO2023045147A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111122839.9A CN113808220A (zh) 2021-09-24 2021-09-24 双目摄像机的标定方法、系统、电子设备和存储介质
CN202111122839.9 2021-09-24

Publications (1)

Publication Number Publication Date
WO2023045147A1 true WO2023045147A1 (zh) 2023-03-30

Family

ID=78940401

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/140186 WO2023045147A1 (zh) 2021-09-24 2021-12-21 双目摄像机的标定方法、系统、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN113808220A (zh)
WO (1) WO2023045147A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721339A (zh) * 2023-04-24 2023-09-08 广东电网有限责任公司 一种输电线路的检测方法、装置、设备和存储介质
CN116862999A (zh) * 2023-09-04 2023-10-10 华东交通大学 一种双摄像机三维测量的标定方法、系统、设备和介质
CN117152274A (zh) * 2023-11-01 2023-12-01 三一重型装备有限公司 掘进机双目摄像头的位姿校正方法及系统、可读存储介质
CN117190875A (zh) * 2023-09-08 2023-12-08 重庆交通大学 一种基于计算机智能视觉的桥塔位移测量装置及方法
CN118032264A (zh) * 2024-04-09 2024-05-14 中国航空工业集团公司沈阳空气动力研究所 一种适用于高速风洞高温高速下的变形测量方法及装置

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808220A (zh) * 2021-09-24 2021-12-17 上海闻泰电子科技有限公司 双目摄像机的标定方法、系统、电子设备和存储介质
CN115830118B (zh) * 2022-12-08 2024-03-19 重庆市信息通信咨询设计院有限公司 一种基于双目相机的水泥电杆的裂纹检测方法和系统
CN116030145A (zh) * 2023-03-23 2023-04-28 北京中科慧眼科技有限公司 一种用于不同焦距双目镜头的立体匹配方法和系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120285024A1 (en) * 2011-05-11 2012-11-15 U.S.A. As Represented By The Administrator Of The National Aeronautics And Space Administration Photogrammetry System and Method for Determining Relative Motion Between Two Bodies
CN108053450A (zh) * 2018-01-22 2018-05-18 浙江大学 一种基于多约束的高精度双目相机标定方法
CN110969668A (zh) * 2019-11-22 2020-04-07 大连理工大学 一种长焦双目相机的立体标定算法
CN111383194A (zh) * 2020-03-10 2020-07-07 江苏科技大学 一种基于极坐标的相机畸变图像校正方法
CN112634374A (zh) * 2020-12-18 2021-04-09 杭州海康威视数字技术股份有限公司 双目相机的立体标定方法、装置、系统及双目相机
CN113808220A (zh) * 2021-09-24 2021-12-17 上海闻泰电子科技有限公司 双目摄像机的标定方法、系统、电子设备和存储介质



Also Published As

Publication number Publication date
CN113808220A (zh) 2021-12-17

Similar Documents

Publication Publication Date Title
WO2023045147A1 (zh) 双目摄像机的标定方法、系统、电子设备和存储介质
TWI555378B (zh) 一種全景魚眼相機影像校正、合成與景深重建方法與其系統
WO2018076154A1 (zh) 一种基于鱼眼摄像机空间位姿标定的全景视频生成方法
US10609282B2 (en) Wide-area image acquiring method and apparatus
US10803624B2 (en) Apparatus for providing calibration data, camera system and method for obtaining calibration data
KR101666959B1 (ko) 카메라로부터 획득한 영상에 대한 자동보정기능을 구비한 영상처리장치 및 그 방법
US8436904B2 (en) Method and apparatus for calibrating video camera
JP5456020B2 (ja) 情報処理装置および方法
CN105453136B (zh) 使用自动聚焦反馈进行立体侧倾校正的系统、方法及设备
US7023473B2 (en) Camera calibration device and method, and computer system
WO2018068719A1 (zh) 一种图像拼接方法及装置
TWI554976B (zh) 監控系統及其影像處理方法
WO2022127918A1 (zh) 双目相机的立体标定方法、装置、系统及双目相机
WO2019192358A1 (zh) 一种全景视频合成方法、装置及电子设备
JP5442111B2 (ja) 画像から高速に立体構築を行なう方法
KR20200035457A (ko) 이미지 스플라이싱 방법 및 장치, 그리고 저장 매체
US20190385285A1 (en) Image Processing Method and Device
US11282232B2 (en) Camera calibration using depth data
US20220156954A1 (en) Stereo matching method, image processing chip and mobile vehicle
CN102436639A (zh) 一种去除图像模糊的图像采集方法和图像采集系统
CN111340737B (zh) 图像矫正方法、装置和电子系统
WO2019232793A1 (zh) 双摄像头标定方法、电子设备、计算机可读存储介质
CN109920004A (zh) 图像处理方法、装置、标定物组合、终端设备及标定系统
CN115035235A (zh) 三维重建方法及装置
US20240202975A1 (en) Data processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21958258

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE