CN114663519A - Multi-camera calibration method and device and related equipment - Google Patents

Multi-camera calibration method and device and related equipment

Info

Publication number
CN114663519A
Authority
CN
China
Prior art keywords
dimensional
calibrated
camera
external parameters
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210153434.XA
Other languages
Chinese (zh)
Inventor
周浩理
王琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orbbec Inc filed Critical Orbbec Inc
Priority to CN202210153434.XA priority Critical patent/CN114663519A/en
Publication of CN114663519A publication Critical patent/CN114663519A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention discloses a multi-camera calibration method, a multi-camera calibration device and related equipment. The multi-camera calibration method comprises the following steps: acquiring two-dimensional image data and three-dimensional image data of the same target area, collected respectively by a plurality of two-dimensional cameras to be calibrated and by a preset three-dimensional device; performing three-dimensional reconstruction with the two-dimensional image data collected by each two-dimensional camera to be calibrated to obtain initial internal and external parameters and initial point cloud data of each camera, and optimizing the initial point cloud data to obtain sparse three-dimensional point cloud data of the target area; generating actual three-dimensional point cloud data of the target area from the three-dimensional image data collected by the preset three-dimensional device; and registering the sparse three-dimensional point cloud data with the actual three-dimensional point cloud data to obtain a transformation matrix between the point clouds, then obtaining the actual internal and external parameters of each two-dimensional camera to be calibrated and the relative external parameters between the cameras from the initial internal and external parameters and the transformation matrix. The scheme of the invention facilitates multi-camera calibration and improves the efficiency and accuracy of multi-camera calibration.

Description

Multi-camera calibration method and device and related equipment
Technical Field
The invention relates to the technical field of camera parameter calibration, in particular to a multi-camera calibration method, a multi-camera calibration device and related equipment.
Background
With the development of scientific technology, image data is required to be used in more and more application scenes. At present, the same object or area is usually required to be subjected to image acquisition from different angles through a plurality of cameras, and at the moment, camera calibration is required. The camera calibration is a process for determining internal geometric and optical parameters of a camera and the three-dimensional position and posture of a camera coordinate system relative to a world coordinate system, and is a key for determining the relative relationship between a two-dimensional image and a three-dimensional scene.
In the prior art, camera calibration is generally performed directly with a calibration plate. This approach has the following problems: when multi-camera calibration is required (i.e., more than two cameras need to be calibrated) and the optical centers of the cameras are inconsistent, the cameras must be calibrated pairwise against the calibration plate, data must be collected at multiple angles and multiple distances, and manual intervention steps must be added, so the calibration flow is complex, calibration efficiency is low, and calibration accuracy is low. Therefore, the prior-art scheme is not conducive to multi-camera calibration or to improving the efficiency and accuracy of multi-camera calibration.
Disclosure of Invention
The invention mainly aims to provide a multi-camera calibration method, a multi-camera calibration device and related equipment, so as to solve the problem that the prior-art scheme of calibrating cameras directly with a calibration plate is not conducive to multi-camera calibration or to improving the efficiency and accuracy of multi-camera calibration.
In order to achieve the above object, a first aspect of the present invention provides a multi-camera calibration method, wherein the multi-camera calibration method includes:
acquiring two-dimensional image data and three-dimensional image data of the same target area, which are acquired by a plurality of two-dimensional cameras to be calibrated and preset three-dimensional equipment;
performing three-dimensional reconstruction by using the two-dimensional image data acquired by each two-dimensional camera to be calibrated to obtain initial internal and external parameters and initial point cloud data of each two-dimensional camera to be calibrated, and optimizing the initial point cloud data to obtain sparse three-dimensional point cloud data of the target area;
generating actual three-dimensional point cloud data of the target area according to the three-dimensional image data acquired by the preset three-dimensional equipment;
and registering the sparse three-dimensional point cloud data and the actual three-dimensional point cloud data, acquiring a transformation matrix between the point clouds, and acquiring actual internal and external parameters of each two-dimensional camera to be calibrated and relative external parameters between each two-dimensional camera to be calibrated by using the initial internal and external parameters and the transformation matrix.
A second aspect of the present invention provides a multi-camera calibration apparatus, wherein the multi-camera calibration apparatus includes:
the first acquisition module is used for acquiring two-dimensional image data and three-dimensional image data of the same target area, which are collected by a plurality of two-dimensional cameras to be calibrated and a preset three-dimensional device;
the second acquisition module is used for performing three-dimensional reconstruction by using the two-dimensional image data acquired by each two-dimensional camera to be calibrated to obtain initial internal and external parameters of each two-dimensional camera to be calibrated and initial point cloud data, and optimizing the initial point cloud data to obtain sparse three-dimensional point cloud data of the target area;
a third obtaining module, configured to generate actual three-dimensional point cloud data of the target area according to the three-dimensional image data obtained by the preset three-dimensional device;
and the calibration module is used for registering the sparse three-dimensional point cloud data and the actual three-dimensional point cloud data, acquiring a transformation matrix between the point clouds, and acquiring actual internal and external parameters of each two-dimensional camera to be calibrated and relative external parameters between each two-dimensional camera to be calibrated by using the initial internal and external parameters and the transformation matrix.
A third aspect of the present invention provides a multi-camera apparatus, comprising a plurality of two-dimensional cameras and the calibration apparatus mentioned in the second aspect, wherein:
the two-dimensional cameras are used for acquiring two-dimensional image data of the target area;
the calibration device is used for calibrating the two-dimensional cameras by using the two-dimensional image data and three-dimensional image data acquired by preset three-dimensional equipment to acquire internal and external parameters of the two-dimensional cameras and relative external parameters between the two-dimensional cameras.
A fourth aspect of the present invention provides an intelligent terminal, where the intelligent terminal includes a memory, a processor, and a multi-camera calibration program stored in the memory and executable on the processor, and the multi-camera calibration program, when executed by the processor, implements any one of the steps of the multi-camera calibration method.
A fifth aspect of the present invention provides a computer-readable storage medium, having a multi-camera calibration program stored thereon, which when executed by a processor implements any one of the steps of the multi-camera calibration method described above.
From the above, compared with the prior-art scheme of calibrating cameras directly through a calibration plate, the scheme of the invention performs multi-camera calibration without calibrating the cameras pairwise against a calibration plate and without adding manual intervention steps: calibration is achieved simply by acquiring the image data collected by the two-dimensional cameras to be calibrated together with directly collected three-dimensional information. This simplifies the calibration flow, avoids errors introduced by manual steps, and improves the efficiency and accuracy of multi-camera calibration. Meanwhile, it should be noted that prior-art schemes usually require calibration to be completed before the cameras collect the target image data (i.e., the image data to be used subsequently), whereas with the scheme of the invention the target image data can be collected before calibration, used to perform the calibration, and then used for its original purpose, which is beneficial to the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a multi-camera calibration method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the implementation of step S200 in Fig. 1;
Fig. 3 is a schematic flowchart of the implementation of step S400 in Fig. 1;
Fig. 4 is a schematic flowchart of the implementation of step S402 in Fig. 3;
Fig. 5 is a schematic flowchart of the multi-camera calibration method in a specific application scenario according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a multi-camera calibration apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic block diagram of the internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when …" or "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted depending on the context to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings of the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To solve at least one of the problems in the prior art, the invention provides a multi-camera calibration method that does not use a calibration plate. Instead, it calibrates the plurality of two-dimensional cameras to be calibrated by directly using three-dimensional point cloud data reconstructed from the images they collect, together with actual three-dimensional point cloud data obtained by collecting three-dimensional information of the target area directly.
Exemplary method
As shown in fig. 1, an embodiment of the present invention provides a multi-camera calibration method, and specifically, the multi-camera calibration method includes the following steps:
step S100, acquiring two-dimensional image data and three-dimensional image data of the same target area, which are acquired by a plurality of two-dimensional cameras to be calibrated and a preset three-dimensional device.
Specifically, the plurality of two-dimensional cameras to be calibrated photograph the same target area from different angles to obtain two-dimensional image data, and the three-dimensional device collects three-dimensional image data of the same target area. It should be noted that the image data may be captured solely for calibration, or may be the target image data that ultimately needs to be collected by the two-dimensional cameras to be calibrated and used downstream. In other words, the image data to be used can be collected before calibration is completed and then used to perform the calibration, so no separate calibration needs to be carried out in advance, which is convenient for the user.
In one embodiment, acquiring two-dimensional image data and three-dimensional image data of the same target area acquired by a plurality of two-dimensional cameras to be calibrated and a preset three-dimensional device includes: continuously shooting the same target area based on a plurality of to-be-calibrated two-dimensional cameras with fixed relative poses to acquire the two-dimensional image data corresponding to each to-be-calibrated two-dimensional camera; and acquiring three-dimensional image data obtained by directly acquiring the target area by the preset three-dimensional equipment, wherein the preset three-dimensional equipment comprises one or more combinations of a three-dimensional scanner, a depth camera and a laser radar.
The two-dimensional cameras to be calibrated are fixed in advance, and during acquisition the relative poses between them must not change. In this embodiment, the optical centers of the two-dimensional cameras to be calibrated may be the same or different; this does not affect the subsequent calibration by point cloud registration, so no calibration plate is needed to calibrate the optical centers, which simplifies the calibration flow and improves calibration efficiency.
In an embodiment, the two-dimensional camera to be calibrated is an RGB camera, and the image data is RGB image data, so that the obtained image data can be directly applied to subsequent processing procedures (such as face recognition, human skeleton reconstruction, and the like) to meet the user requirements. In an actual use process, the two-dimensional camera to be calibrated may also be a grayscale camera, an infrared camera, or the like, which is not specifically limited herein.
In an embodiment, the multiple two-dimensional cameras to be calibrated may synchronously capture the same target area at a single moment to generate image data corresponding to each camera at that moment; alternatively, the cameras may be controlled to continuously capture the target area on the same time sequence, so that each camera synchronously collects image data at multiple moments (a capture sketch is given below). The latter prevents the case where features in the target area captured in a single frame are not distinct, and improves the robustness of the result. It should be noted that during continuous acquisition the scene of the target area does not change, and the pose of each two-dimensional camera to be calibrated does not change.
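As an illustration only, the following Python sketch shows one way the synchronized continuous capture described above could be driven with OpenCV. The device indices, the frame count, and the grab-then-retrieve pattern are assumptions; the patent does not prescribe any particular capture mechanism.

```python
import cv2

camera_ids = [0, 1, 2]             # hypothetical device indices
caps = [cv2.VideoCapture(i) for i in camera_ids]

frame_sets = []                    # frame_sets[t][c] = image of camera c at moment t
for t in range(30):                # 30 synchronized frame sets, chosen arbitrarily
    for cap in caps:
        cap.grab()                 # latch a frame on every camera first ...
    frame_sets.append([cap.retrieve()[1] for cap in caps])  # ... then decode them

for cap in caps:
    cap.release()
```

Calling grab() on all cameras before any retrieve() keeps the latched frames as close in time as software synchronization allows.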
In one embodiment, the three-dimensional information of the target area is directly acquired based on a three-dimensional device, wherein the three-dimensional device includes a depth camera and/or a laser radar, the acquired image data is three-dimensional image data, and the three-dimensional image data can directly acquire the depth information of the target area relative to two-dimensional image data acquired by a two-dimensional camera to be calibrated. It should be noted that, in this embodiment, the three-dimensional device includes any one or a combination of a depth camera, a laser radar, and a three-dimensional scanner, where the depth camera may be a camera based on the principles of structured light, TOF, or binocular, and other devices capable of acquiring three-dimensional information may also be used in the actual use process, and are not limited specifically here.
And S200, performing three-dimensional reconstruction by using the two-dimensional image data acquired by each two-dimensional camera to be calibrated to obtain initial internal and external parameters of each two-dimensional camera to be calibrated and initial point cloud data, and optimizing the initial point cloud data to obtain sparse three-dimensional point cloud data of the target area.
Specifically, after obtaining image data corresponding to each two-dimensional camera to be calibrated, scene three-dimensional reconstruction may be performed on the target area based on a preset three-dimensional reconstruction algorithm to obtain initial point cloud data.
In an embodiment, as shown in fig. 2, the step S200 specifically includes the following steps:
step S201, performing three-dimensional reconstruction on the two-dimensional image data acquired by the two-dimensional cameras to be calibrated at the same time through a preset sfm (structure From motion) algorithm to obtain initial point cloud data and acquire initial internal and external parameters of the two-dimensional cameras to be calibrated.
In this embodiment, a preset SFM algorithm is used to perform three-dimensional reconstruction on the image data acquired by each two-dimensional camera to be calibrated at the same moment to obtain initial point cloud data, and the initial internal and external parameters corresponding to each two-dimensional camera to be calibrated are derived from the initial point cloud data. It should be noted that the internal parameters represent quantities internal to a camera, such as focal length and pixel size, while the external parameters represent the relative poses between the cameras; the SFM algorithm is an offline algorithm that performs three-dimensional reconstruction from a collection of unordered pictures. The SFM algorithm used in this embodiment is not a specific limitation; other three-dimensional reconstruction algorithms may be used in actual practice. A two-view sketch of the pose-initialization step follows.
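Purely to illustrate the kind of computation behind SFM's pose initialization, here is a minimal two-view sketch in Python with OpenCV. The intrinsics guess K, the use of SIFT, and the 0.75 Lowe ratio are assumptions; a production pipeline (e.g. COLMAP-style incremental SFM) estimates many views jointly.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the up-to-scale relative pose between two grayscale views."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])
    E, mask = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t   # ||t|| = 1: the translation comes back without metric scale
```

The unit-norm translation is exactly the "non-scale" property that step S202 below refers to.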
Step S202, global optimization is carried out on the initial internal and external parameters by utilizing the two-dimensional image data, and the non-scale internal and external parameters of the two-dimensional camera to be calibrated are obtained.
In one embodiment, for image data acquired on a preset time sequence, after the initial internal and external parameters are obtained, the data acquired by all the two-dimensional cameras to be calibrated at the same moment are bundled into one group. BA (bundle adjustment) optimization is then performed on the initial internal and external parameters using each bundled group to obtain per-group internal and external parameters, and the average over the groups is computed to obtain accurate non-scale internal and external parameters, i.e., the non-scale internal parameters and the non-scale external parameters (an averaging sketch is given below). It should be noted that "non-scale" means that the magnitude of the translation between two frames of images is unknown, i.e., the specific unit of the translation is unknown, so only a normalized result is obtained.
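A sketch of the per-group averaging described above, under the assumption that BA leaves each group with an intrinsic matrix and a camera pose; averaging the rotations as a chordal (quaternion) mean rather than element-wise is a design choice here, not something the patent states.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def average_group_params(Ks, Rs, ts):
    """Average per-group BA results into one set of non-scale parameters.

    Ks: (G, 3, 3) intrinsics, Rs: (G, 3, 3) rotations, ts: (G, 3) translations.
    """
    K_avg = np.mean(Ks, axis=0)                          # intrinsics: plain mean
    R_avg = Rotation.from_matrix(Rs).mean().as_matrix()  # chordal rotation mean
    t_avg = np.mean(ts, axis=0)                          # still scale-free
    return K_avg, R_avg, t_avg
```

Element-wise averaging of rotation matrices generally does not yield a rotation, which is why the quaternion mean is used here.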
In one embodiment, if the two-dimensional cameras to be calibrated only acquire single-frame image data at the same time, grouping, binding and average value calculation are not needed, and the internal and external parameters after BA optimization are directly used as corresponding non-scale internal parameters and non-scale external parameters.
Step S203, processing the initial point cloud data by using the non-scale internal and external parameters of each two-dimensional camera to be calibrated to obtain the sparse three-dimensional point cloud data of the target area.
More specifically, triangulation-based optimization is performed on the initial point cloud data according to the non-scale internal and external parameters of each two-dimensional camera to be calibrated, yielding the sparse three-dimensional point cloud data of the target area (a triangulation sketch is given below). The point cloud is sparse because its points are all feature points extracted from the same target area, and the number of feature points that can be extracted from two-dimensional image data is small, so the point cloud formed after three-dimensional reconstruction of the two-dimensional image data is small. Sparse three-dimensional point cloud data occupies little storage and few computing resources, which improves computational efficiency and thereby calibration efficiency.
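A minimal triangulation sketch: given the non-scale intrinsics and extrinsics of two of the cameras and matched pixel coordinates, it recovers sparse 3D points. The function and argument names are illustrative, not the patent's.

```python
import cv2
import numpy as np

def triangulate(K1, R1, t1, K2, R2, t2, p1, p2):
    """Triangulate matched pixels p1, p2 (2xN float arrays) into Nx3 points."""
    P1 = K1 @ np.hstack([R1, t1.reshape(3, 1)])   # 3x4 projection matrix, camera 1
    P2 = K2 @ np.hstack([R2, t2.reshape(3, 1)])   # 3x4 projection matrix, camera 2
    X_h = cv2.triangulatePoints(P1, P2, p1, p2)   # 4xN homogeneous coordinates
    return (X_h[:3] / X_h[3]).T                   # dehomogenize -> Nx3 sparse cloud
```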
Step S300, generating actual three-dimensional point cloud data of the target area according to the three-dimensional image data acquired by the preset three-dimensional device.
In one embodiment, the three-dimensional image data of the same scene (the target area captured by the two-dimensional cameras to be calibrated) acquired by the preset three-dimensional device is reconstructed according to a preset three-dimensional reconstruction algorithm to obtain the actual three-dimensional point cloud data of the target area. The actual three-dimensional point cloud data of the target area refers to the point cloud obtained from directly acquired three-dimensional information; in this embodiment a SLAM (Simultaneous Localization And Mapping) algorithm is adopted, but this is not a specific limitation. A back-projection sketch is given below.
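For intuition about where these metric points come from, here is a single-frame pinhole back-projection sketch; a full SLAM pipeline fuses many such frames using estimated poses. The depth-camera intrinsics fx, fy, cx, cy are illustrative names.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a metric depth image (HxW, e.g. meters) to an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]      # drop pixels with no valid depth
```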
And S400, registering the sparse three-dimensional point cloud data and the actual three-dimensional point cloud data, acquiring a transformation matrix between the point clouds, and acquiring actual internal and external parameters of each two-dimensional camera to be calibrated and relative external parameters between each two-dimensional camera to be calibrated by using the initial internal and external parameters and the transformation matrix.
The sparse three-dimensional point cloud data and the actual three-dimensional point cloud data both reflect the scene of the target area in three dimensions, and the scene does not change, so the two point clouds contain the same feature points. Registration can therefore be performed on the basis of these feature points to obtain the internal references of the two-dimensional cameras to be calibrated and the pose relationships between the cameras (the relative external parameters).
In this embodiment, as shown in fig. 3, the step S400 specifically includes the following steps:
step S401, taking the actual three-dimensional point cloud data as a target point cloud and the sparse three-dimensional point cloud data as registration point clouds, and acquiring a transformation matrix between the point clouds.
Specifically, in this embodiment, the actual three-dimensional point cloud data and the sparse three-dimensional point cloud data are registered by scaled Iterative Closest Point (ICP) registration to obtain the final scaled transformation matrix SRT between the two-dimensional cameras to be calibrated and the three-dimensional device (a registration sketch is given below). The transformation matrix SRT is a 4 x 4 matrix that indicates the relative pose transformation of the image coordinate system of the two-dimensional cameras to be calibrated with respect to the depth camera (or lidar); the image coordinate system is constructed from the image data of all the two-dimensional cameras to be calibrated. It should be noted that the iterative closest point algorithm is adopted in this embodiment; other algorithms may also be adopted in actual use, which is not specifically limited here.
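A sketch of the scaled ICP registration using Open3D (a library choice made here for illustration, not named by the patent); the estimation object with with_scaling=True solves for a similarity transform, i.e. scale, rotation, and translation together, matching the SRT described above.

```python
import numpy as np
import open3d as o3d

def scaled_icp(src_pts, tgt_pts, threshold=0.05):
    """Register the source cloud (Nx3) onto the target cloud (Mx3); returns 4x4 SRT."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_pts))
    est = o3d.pipelines.registration.TransformationEstimationPointToPoint(
        with_scaling=True)         # estimate scale s along with R and t
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, threshold, np.eye(4), est)
    return result.transformation   # 4x4 matrix with s*R in the upper-left block
```

Mirroring step S401, the sparse SFM cloud would be passed as the source and the three-dimensional device's cloud as the target; the 0.05 correspondence threshold is an arbitrary assumption.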
Step S402, acquiring target internal and external parameters of each two-dimensional camera to be calibrated and relative external parameters among the two-dimensional cameras to be calibrated according to the transformation matrix and the non-scale internal and external parameters of each two-dimensional camera to be calibrated.
Further, as shown in fig. 4, the step S402 specifically includes the following steps:
s4021, acquiring relative external parameters among the two-dimensional cameras to be calibrated according to the transformation matrix among the point clouds;
step S4022, optimizing the non-scale internal and external parameters between the two-dimensional cameras to be calibrated by using the relative external parameters between the two-dimensional cameras to be calibrated, and acquiring the actual internal and external parameters of the two-dimensional cameras to be calibrated.
More specifically, the transformation matrix of poses between the point clouds is the parameter that converts the sparse three-dimensional point cloud data into the coordinate system of the three-dimensional device. Converting the sparse point cloud into this common coordinate system yields the pose relationship between each two-dimensional camera to be calibrated and the three-dimensional device. Based on these pose relationships, the image data acquired by each two-dimensional camera to be calibrated is mapped into the coordinate system of the three-dimensional device, and the relative pose parameters between the two-dimensional cameras to be calibrated are obtained in that coordinate system. The non-scale internal and external parameters between the cameras are then further optimized based on these relative pose parameters to obtain the scaled internal and external parameters of each two-dimensional camera to be calibrated, i.e., the actual internal and external parameters (a composition sketch is given below).
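As an illustration of this composition step (notation assumed here: each 4x4 pose T maps camera coordinates into the respective world frame), the SRT matrix carries every scale-free camera pose into the device's metric frame, after which relative extrinsics follow by matrix algebra.

```python
import numpy as np

def to_metric_pose(T_cam_to_sfm, SRT):
    """Map a 4x4 scale-free camera pose into the 3D device's metric frame."""
    return SRT @ T_cam_to_sfm

def relative_extrinsics(T_i, T_j):
    """Pose of camera j expressed in camera i's metric frame."""
    return np.linalg.inv(T_i) @ T_j
```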
It should be noted that the actual internal and external parameters and the relative external parameters are, respectively, the finally obtained internal and external parameters of each two-dimensional camera to be calibrated and the relative poses between the cameras. With the actual internal and external parameters and the relative external parameters, the world-coordinate position of each pixel in the images acquired by each two-dimensional camera to be calibrated and the positional relationships between the cameras can be obtained, completing the calibration of all the two-dimensional cameras to be calibrated.
Therefore, in the embodiment, the calibration of the plurality of two-dimensional cameras to be calibrated can be quickly and simply realized by combining the image data acquired by the two-dimensional cameras to be calibrated and the three-dimensional information directly acquired by the target area. The calibration method has the advantages that a calibration plate is not needed, the calculation process is simple, manual intervention procedures are not needed, calibration can be achieved only by acquiring image data acquired by the two-dimensional camera to be calibrated and directly acquiring three-dimensional information, the calibration procedures are simplified, errors caused by manual procedures can be avoided, and the efficiency and accuracy of multi-camera calibration are improved.
Fig. 5 is a schematic flowchart of a specific application scenario of the multi-camera calibration method provided in an embodiment of the present invention. In this scenario, a plurality of RGB cameras (i.e., the two-dimensional cameras to be calibrated) are fixed in advance, and throughout data acquisition and calibration each RGB camera remains fixed with unchanged relative pose. Specifically, the RGB cameras capture continuous RGB data within the target area. An SFM reconstruction algorithm is then used to obtain the initial internal and external parameters of each RGB camera. Based on these initial parameters, the data collected by the RGB cameras at the same moment are bundled into one group (i.e., one frame of data) and BA-optimized, yielding accurate non-scale internal and external parameters (i.e., the non-scale internal parameters and non-scale external parameters); at the same time, a sparse point cloud cloud1 (i.e., the three-dimensional point cloud data generated from the bundled RGB data after the SFM algorithm and BA optimization) is obtained and output.
It should be noted that when BA optimization over multiple groups of data yields multiple groups of internal and external parameters, the averages of the internal parameters and of the external parameters are used as the corresponding non-scale internal and external parameters, improving calculation accuracy. On the other hand, three-dimensional data of the same scene captured by the RGB cameras is acquired by the preset three-dimensional device, and SLAM reconstruction is performed on the collected continuous three-dimensional data to obtain and output the point cloud data cloud2 of the scene (i.e., the actual three-dimensional point cloud data). cloud1 and cloud2 are then registered: specifically, scaled ICP registration is performed with cloud2 as the target point cloud and cloud1 as the registration point cloud, solving for the final scaled transformation matrix SRT. Based on the relative-pose transformation matrix SRT and the non-scale internal and external parameters, the internal and external parameters of the RGB cameras and the relative external parameters between the cameras are obtained.
Therefore, in the invention, the directly acquired three-dimensional information is used for calibration to obtain the internal and external parameters corresponding to the two-dimensional cameras to be calibrated, the method is not limited by places and conditions, a calibration plate is not required to be used for carrying out complex measurement, manual intervention is not required, the operation is more convenient, and the calibration efficiency and the calibration accuracy are favorably improved.
Exemplary device
As shown in fig. 6, corresponding to the multi-camera calibration method, an embodiment of the present invention further provides a multi-camera calibration apparatus, including:
the first obtaining module 510 is configured to obtain two-dimensional image data and three-dimensional image data of the same target area, which are acquired by a plurality of two-dimensional cameras to be calibrated and a preset three-dimensional device.
Specifically, a plurality of two-dimensional cameras to be calibrated shoot the same target area from different angles to obtain two-dimensional image data, and three-dimensional equipment acquires three-dimensional image data of the same target area. It should be noted that the image data may be image data only used for calibration, or may be image data that needs to be finally used and acquired by the two-dimensional camera to be calibrated. The image data to be used can be directly acquired before the calibration is completed, and the calibration is performed accordingly without performing the calibration in advance, so that the user can use the image data conveniently.
In one embodiment, the three-dimensional information of the target area is directly acquired based on three-dimensional equipment, wherein the three-dimensional equipment comprises a depth camera and/or a laser radar, the acquired image data is three-dimensional image data, and the three-dimensional image data can directly acquire the depth information of the target area relative to two-dimensional image data acquired by a two-dimensional camera to be calibrated. It should be noted that, in this embodiment, the three-dimensional device includes any one or a combination of a depth camera, a laser radar, and a three-dimensional scanner, where the depth camera may be a camera based on the principles of structured light, TOF, or binocular, and other devices capable of acquiring three-dimensional information may also be used in the actual use process, and are not limited specifically here.
A second obtaining module 520, configured to perform three-dimensional reconstruction on the two-dimensional image data acquired by each two-dimensional camera to be calibrated to obtain initial internal and external parameters and initial point cloud data of each two-dimensional camera to be calibrated, and optimize the initial point cloud data to obtain sparse three-dimensional point cloud data of the target area.
Specifically, after image data corresponding to each two-dimensional camera to be calibrated is obtained, scene three-dimensional reconstruction can be performed on a target area based on a preset three-dimensional reconstruction algorithm, and initial point cloud data are obtained.
A third obtaining module 530, configured to generate actual three-dimensional point cloud data of the target area according to the three-dimensional image data obtained by the preset three-dimensional device.
In one embodiment, the three-dimensional image data of the same scene (the target area captured by the two-dimensional cameras to be calibrated) acquired by the preset three-dimensional device is reconstructed according to a preset three-dimensional reconstruction algorithm to obtain the actual three-dimensional point cloud data of the target area. The actual three-dimensional point cloud data of the target area refers to the point cloud obtained from directly acquired three-dimensional information; in this embodiment a SLAM (Simultaneous Localization And Mapping) algorithm is adopted, but this is not a specific limitation.
And a calibration module 540, configured to register the sparse three-dimensional point cloud data with the actual three-dimensional point cloud data, obtain a transformation matrix between the point clouds, and obtain the actual internal and external parameters of each two-dimensional camera to be calibrated and the relative external parameters between the two-dimensional cameras to be calibrated by using the initial internal and external parameters and the transformation matrix.
The sparse three-dimensional point cloud data and the actual three-dimensional point cloud data both reflect the scene of the target area in three dimensions, and the scene does not change, so the two point clouds contain the same feature points. Registration can therefore be performed on the basis of these feature points to obtain the internal references of the two-dimensional cameras to be calibrated and the pose relationships between the cameras (the relative external parameters).
Specifically, in this embodiment, the specific functions of the multi-camera calibration device and each module thereof may also refer to the corresponding descriptions in the multi-camera calibration method, which are not described herein again.
Based on the above embodiment, the present invention further provides a multi-camera apparatus, including a plurality of two-dimensional cameras and the above calibration apparatus, wherein:
the two-dimensional cameras are used for acquiring two-dimensional image data of the target area;
the calibration device is used for calibrating the two-dimensional cameras by using the two-dimensional image data and three-dimensional image data acquired by preset three-dimensional equipment to acquire internal and external parameters of the two-dimensional cameras and relative external parameters between the two-dimensional cameras.
Based on the above embodiment, the present invention further provides an intelligent terminal, and a schematic block diagram thereof may be as shown in fig. 7. The intelligent terminal comprises a processor and a memory. The memory of the intelligent terminal comprises a multi-camera calibration program, and the memory provides an environment for the operation of the multi-camera calibration program. The multi-camera calibration program, when executed by the processor, implements the steps of any of the multi-camera calibration methods described above. It should be noted that the above-mentioned intelligent terminal may further include other functional modules or units, which are not specifically limited herein.
It will be understood by those skilled in the art that the block diagram of fig. 7 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the intelligent terminal to which the solution of the present invention is applied, and in particular, the intelligent terminal may include more or less components than those shown in the figure, or combine some components, or have a different arrangement of components.
In an embodiment, the multi-camera calibration program performs the following operations when executed by the processor:
acquiring two-dimensional image data and three-dimensional image data of the same target area, which are acquired by a plurality of two-dimensional cameras to be calibrated and preset three-dimensional equipment;
performing three-dimensional reconstruction by using the two-dimensional image data acquired by each two-dimensional camera to be calibrated to obtain initial internal and external parameters and initial point cloud data of each two-dimensional camera to be calibrated, and optimizing the initial point cloud data to obtain sparse three-dimensional point cloud data of the target area;
generating actual three-dimensional point cloud data of the target area according to the three-dimensional image data acquired by the preset three-dimensional equipment;
and registering the sparse three-dimensional point cloud data and the actual three-dimensional point cloud data, acquiring a transformation matrix between the point clouds, and acquiring actual internal and external parameters of each two-dimensional camera to be calibrated and relative external parameters between each two-dimensional camera to be calibrated by utilizing the initial internal and external parameters and the transformation matrix.
The embodiment of the present invention further provides a computer-readable storage medium, where a multi-camera calibration program is stored on the computer-readable storage medium, and when the multi-camera calibration program is executed by a processor, the steps of any one of the multi-camera calibration methods provided in the embodiment of the present invention are implemented.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical division, and the actual implementation may be implemented by another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the contents of the computer-readable storage medium may be increased or decreased as required by legislation and patent practice in the jurisdiction.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein.

Claims (11)

1. A multi-camera calibration method, comprising:
acquiring two-dimensional image data and three-dimensional image data of the same target area, which are acquired by a plurality of two-dimensional cameras to be calibrated and preset three-dimensional equipment;
performing three-dimensional reconstruction by using the two-dimensional image data acquired by each two-dimensional camera to be calibrated to obtain initial internal and external parameters and initial point cloud data of each two-dimensional camera to be calibrated, and optimizing the initial point cloud data to obtain sparse three-dimensional point cloud data of the target area;
generating actual three-dimensional point cloud data of the target area according to the three-dimensional image data acquired by the preset three-dimensional equipment;
and registering the sparse three-dimensional point cloud data and the actual three-dimensional point cloud data, acquiring a transformation matrix between the point clouds, and acquiring actual internal and external parameters of each two-dimensional camera to be calibrated and relative external parameters between each two-dimensional camera to be calibrated by using the initial internal and external parameters and the transformation matrix.
2. The multi-camera calibration method according to claim 1, wherein the acquiring two-dimensional image data and three-dimensional image data of the same target area acquired by a plurality of two-dimensional cameras to be calibrated and a preset three-dimensional device comprises:
continuously shooting the same target area based on a plurality of to-be-calibrated two-dimensional cameras with fixed relative poses to acquire two-dimensional image data corresponding to each to-be-calibrated two-dimensional camera;
and acquiring three-dimensional image data obtained by directly acquiring the target area by the preset three-dimensional equipment, wherein the preset three-dimensional equipment comprises one or more combinations of a three-dimensional scanner, a depth camera and a laser radar.
3. The multi-camera calibration method according to claim 1, wherein the three-dimensional reconstruction using the two-dimensional image data collected by the two-dimensional cameras to be calibrated to obtain initial internal and external parameters and initial point cloud data of the two-dimensional cameras to be calibrated, and the optimization of the initial point cloud data to obtain sparse three-dimensional point cloud data of the target region comprises:
performing three-dimensional reconstruction on the two-dimensional image data acquired by the two-dimensional cameras to be calibrated at the same moment through a preset SFM algorithm to obtain initial point cloud data and acquire initial internal and external parameters of the two-dimensional cameras to be calibrated;
performing global optimization on the initial internal and external parameters by using the two-dimensional image data to obtain scale-free internal and external parameters of the two-dimensional camera to be calibrated;
and processing the initial point cloud data by using the non-scale internal and external parameters of each two-dimensional camera to be calibrated to obtain the sparse three-dimensional point cloud data of the target area.
4. The multi-camera calibration method according to claim 3, wherein the processing the initial point cloud data by using the non-scale extrinsic parameters of each two-dimensional camera to be calibrated to obtain the sparse three-dimensional point cloud data of the target region comprises:
and carrying out triangularization optimization processing on the initial point cloud data according to the non-scale internal and external parameters of each two-dimensional camera to be calibrated to obtain the sparse three-dimensional point cloud data of the target area.
5. The multi-camera calibration method according to claim 4, wherein the obtaining the actual internal and external parameters of each two-dimensional camera to be calibrated and the relative external parameters between each two-dimensional camera to be calibrated by using the initial internal and external parameters and the transformation matrix comprises:
and obtaining the actual internal and external parameters of each two-dimensional camera to be calibrated and the relative external parameters between each two-dimensional camera to be calibrated by using the scale-free internal and external parameters of each two-dimensional camera to be calibrated obtained after the initial internal and external parameters are optimized and the transformation matrix.
6. The multi-camera calibration method according to claim 5, wherein the registering the sparse three-dimensional point cloud data with the actual three-dimensional point cloud data, obtaining a transformation matrix between point clouds, and obtaining actual internal and external parameters of each two-dimensional camera to be calibrated and relative external parameters between each two-dimensional camera to be calibrated by using the initial internal and external parameters and the transformation matrix comprises:
taking the actual three-dimensional point cloud data as a target point cloud, taking the sparse three-dimensional point cloud data as a registration point cloud, and acquiring a transformation matrix between the point clouds;
and acquiring target internal and external parameters of each two-dimensional camera to be calibrated and relative external parameters between the two-dimensional cameras to be calibrated according to the transformation matrix and the non-scale internal and external parameters of each two-dimensional camera to be calibrated.
7. The multi-camera calibration method according to claim 6, wherein the obtaining of the target internal and external parameters of each two-dimensional camera to be calibrated and the relative external parameters between each two-dimensional camera to be calibrated according to the transformation matrix and the non-scale internal and external parameters of each two-dimensional camera to be calibrated comprises:
acquiring relative external parameters among the two-dimensional cameras to be calibrated according to the transformation matrix among the point clouds;
and optimizing the non-scale internal and external parameters between the two-dimensional cameras to be calibrated by using the relative external parameters between the two-dimensional cameras to be calibrated to obtain the actual internal and external parameters of the two-dimensional cameras to be calibrated.
8. A multi-camera calibration device, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring two-dimensional image data and three-dimensional image data of the same target area, which are acquired by a plurality of two-dimensional cameras to be calibrated and preset three-dimensional equipment;
the second acquisition module is used for performing three-dimensional reconstruction by using the two-dimensional image data acquired by each two-dimensional camera to be calibrated to obtain initial internal and external parameters of each two-dimensional camera to be calibrated and initial point cloud data, and optimizing the initial point cloud data to obtain sparse three-dimensional point cloud data of the target area;
the third acquisition module is used for generating actual three-dimensional point cloud data of the target area according to the three-dimensional image data acquired by the preset three-dimensional equipment;
and the calibration module is used for registering the sparse three-dimensional point cloud data and the actual three-dimensional point cloud data, acquiring a transformation matrix between the point clouds, and acquiring actual internal and external parameters of each two-dimensional camera to be calibrated and relative external parameters between each two-dimensional camera to be calibrated by using the initial internal and external parameters and the transformation matrix.
9. A multi-camera apparatus comprising a plurality of two-dimensional cameras and the calibration apparatus of claim 8, wherein:
the plurality of two-dimensional cameras are used for acquiring two-dimensional image data of a target area;
the calibration device is used for calibrating the two-dimensional cameras by using the two-dimensional image data and three-dimensional image data acquired by preset three-dimensional equipment to acquire internal and external parameters of the two-dimensional cameras and relative external parameters between the two-dimensional cameras.
10. An intelligent terminal, characterized in that the intelligent terminal comprises a memory, a processor and a multi-camera calibration program stored on the memory and executable on the processor, the multi-camera calibration program when executed by the processor implementing the steps of the multi-camera calibration method as claimed in any one of claims 1 to 7.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a multi-camera calibration program, which when executed by a processor implements the steps of the multi-camera calibration method as claimed in any one of claims 1-7.
CN202210153434.XA 2022-02-18 2022-02-18 Multi-camera calibration method and device and related equipment Pending CN114663519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210153434.XA CN114663519A (en) 2022-02-18 2022-02-18 Multi-camera calibration method and device and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210153434.XA CN114663519A (en) 2022-02-18 2022-02-18 Multi-camera calibration method and device and related equipment

Publications (1)

Publication Number Publication Date
CN114663519A true CN114663519A (en) 2022-06-24

Family

ID=82027179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210153434.XA Pending CN114663519A (en) 2022-02-18 2022-02-18 Multi-camera calibration method and device and related equipment

Country Status (1)

Country Link
CN (1) CN114663519A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115239776A (en) * 2022-07-14 2022-10-25 阿波罗智能技术(北京)有限公司 Point cloud registration method, device, equipment and medium
CN115239776B (en) * 2022-07-14 2023-07-28 阿波罗智能技术(北京)有限公司 Point cloud registration method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination