CN113870358B - Method and equipment for jointly calibrating multiple 3D cameras - Google Patents

Method and equipment for jointly calibrating multiple 3D cameras

Info

Publication number
CN113870358B
CN113870358B (application CN202111091893.1A)
Authority
CN
China
Prior art keywords
point cloud
cameras
calibration
cloud data
point
Prior art date
Legal status: Active
Application number
CN202111091893.1A
Other languages
Chinese (zh)
Other versions
CN113870358A
Inventor
陈春朋
刘帅
许瀚誉
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd filed Critical Juhaokan Technology Co Ltd
Priority to CN202111091893.1A priority Critical patent/CN113870358B/en
Publication of CN113870358A publication Critical patent/CN113870358A/en
Application granted granted Critical
Publication of CN113870358B publication Critical patent/CN113870358B/en

Classifications

    • G06T 7/85 — Stereo camera calibration (analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration)
    • G06F 18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/23 — Pattern recognition; clustering techniques
    • G06T 7/593 — Depth or shape recovery from multiple images, from stereo images
    • G06T 2207/10028 — Range image; depth image; 3D point clouds

Abstract

The application relates to the technical field of multi-camera calibration, and provides a method and equipment for jointly calibrating a plurality of 3D cameras. Specifically, the 3D cameras collect depth images of a calibration object from different view angles. The point cloud data set extracted from each depth image is feature-matched with the point cloud data set of a pre-stored three-dimensional model of the calibration object, and the 3D camera coordinate systems are unified under the calibration object coordinate system based on the matching results, which realizes the initial calibration of the 3D cameras. Further, with the calibration object coordinate system as a reference, the point cloud data sets corresponding to each pair of cameras with a common field of view are matched to determine the pose relation between the two camera coordinate systems, so that the plurality of 3D cameras are unified under the coordinate system of one 3D camera. Finally, the pairwise poses among the plurality of 3D cameras are globally optimized, which reduces the locally accumulated calibration errors between camera pairs and improves the calibration precision.

Description

Method and equipment for jointly calibrating multiple 3D cameras
Technical Field
The application relates to the technical field of multi-camera calibration, in particular to a method and equipment for jointly calibrating a plurality of 3D cameras.
Background
In a system of multiple cameras, each camera has an independent coordinate system, referred to as a camera coordinate system. The image captured by a camera is referenced to that camera's own coordinate system. When the images acquired by the cameras in the system are processed, the multiple independent camera coordinate systems need to be aligned under a common coordinate system to make image processing more convenient; the process of converting the independent camera coordinate systems into the common coordinate system is called multi-camera calibration.
Multi-camera calibration has a wide range of application scenarios: for example, volume measurement, three-dimensional reconstruction and motion capture all rely on calibrated multi-camera systems.
Currently, multi-camera calibration is mostly aimed at systems consisting of 2D cameras. Because of the characteristics of 2D cameras, the intrinsic parameters of each camera must be calibrated before the conversion relations of the multi-camera coordinate systems can be calibrated, and the extrinsic parameters are then calibrated on the basis of the obtained intrinsic parameters. This calibration method has the following defects: 1) the calibration plate must perform rigid motion in front of the different camera view angles to complete the calibration of the intrinsic and extrinsic parameters; 2) the cameras must share a large common field of view, which restricts the camera layout; 3) the imaging of the calibration plate corner points is highly sensitive to ambient light and placement, so corner detection may be imprecise or fail, which affects the calibration result; 4) if the rigid motion of the calibration plate is controlled manually, the motion angles are difficult to control, and if it is controlled mechanically (for example, by mounting a turntable below the calibration plate), the calibration plate can be switched automatically among the cameras, but the calibration cost increases.
Unlike 2D cameras, 3D cameras can acquire depth information without calibrating internal parameters. Therefore, the calibration method of multiple 2D cameras is not suitable for calibrating multiple 3D cameras, and a method for calibrating multiple 3D cameras needs to be provided.
Disclosure of Invention
The application provides a method and equipment for jointly calibrating a plurality of 3D cameras, which are used for jointly calibrating the plurality of 3D cameras.
In a first aspect, an embodiment of the present application provides a method for jointly calibrating a plurality of 3D cameras, including:
acquiring depth images of a calibration object respectively acquired by a plurality of 3D cameras under different visual angles, and respectively extracting point cloud data sets respectively corresponding to the depth images;
For any one of the extracted point cloud data sets, feature vectors of all the point cloud pairs in the one point cloud data set are respectively matched with feature vectors of all the point cloud pairs in the point cloud data set of the three-dimensional model of the calibration object, a plurality of transformation matrixes are determined, and the plurality of transformation matrixes are clustered to obtain a target transformation matrix of the one point cloud data set;
And determining a calibration matrix between two cameras in the plurality of 3D cameras according to the target transformation matrix corresponding to each point cloud data set so as to realize joint calibration of the plurality of 3D cameras.
In a second aspect, an embodiment of the present application provides a calibration device, including a processor, a memory, and at least one external communication interface, where the processor, the memory, and the external communication interface are all connected by a bus;
The external communication interface is configured to receive depth images of the calibration object acquired by the plurality of 3D cameras under different visual angles;
The memory has stored therein a computer program, the processor being configured to perform the following operations based on the computer program:
acquiring depth images of a calibration object respectively acquired by a plurality of 3D cameras under different visual angles, and respectively extracting point cloud data sets respectively corresponding to the depth images;
For any one of the extracted point cloud data sets, feature vectors of all the point cloud pairs in the one point cloud data set are respectively matched with feature vectors of all the point cloud pairs in the point cloud data set of the three-dimensional model of the calibration object, a plurality of transformation matrixes are determined, and the plurality of transformation matrixes are clustered to obtain a target transformation matrix of the one point cloud data set;
And determining a calibration matrix between two cameras in the plurality of 3D cameras according to the target transformation matrix corresponding to each point cloud data set so as to realize joint calibration of the plurality of 3D cameras.
In a third aspect, an embodiment of the present application provides a calibration apparatus, including:
The acquisition module is used for acquiring depth images of the calibration objects acquired by the 3D cameras under different visual angles, and respectively extracting point cloud data sets corresponding to the depth images;
The matching module is used for, for any one of the extracted point cloud data sets, feature-matching the feature vector of each point cloud pair in the point cloud data set with the feature vector of each point cloud pair in the pre-stored point cloud data set of the three-dimensional model of the calibration object, determining a plurality of transformation matrixes, and clustering the plurality of transformation matrixes to obtain a target transformation matrix of the point cloud data set; and determining a calibration matrix between two cameras in the plurality of 3D cameras according to the target transformation matrix corresponding to each point cloud data set so as to realize joint calibration of the plurality of 3D cameras.
Optionally, the calibration device further includes a local optimization module, configured to update the calibration matrix between each pair of cameras according to the distances between the data points in the point cloud data sets corresponding to the pair of cameras.
Optionally, the local optimization module is specifically configured to:
According to the first point cloud data set and the second point cloud data set, an initial pose error function is adjusted in an iterative mode until a preset convergence condition is met, a target pose error function is obtained, a target pose matrix determined by the target pose error function is used as an updated calibration matrix, and each round of iteration executes the following operations:
selecting target data points closest to each source data point in the first point cloud data set from the second point cloud data set;
Determining a pose matrix according to the source data points and the corresponding target data points by adopting an initial pose error function;
Updating each source data point in the first point cloud data set according to the determined pose matrix, and determining the average distance between each updated source data point and each target data point;
And determining a target pose error function according to the average distance.
Optionally, the calibration device further includes a global optimization module, configured to perform loop detection on the calibration matrix between two cameras in the plurality of 3D cameras based on each point cloud data set, so as to reduce a global pose error of the calibration matrix between the two cameras.
Optionally, the global pose error is formulated as follows:
e = Σ_{m=1..K} Σ_{n=1..K} Σ_{i=1..N_mn} || (R_m·p_mi + t_m) - (R_n·p_ni + t_n) ||²
wherein m and n denote the m-th and n-th view angles respectively, K denotes the number of view angles, N_mn denotes the number of homonymous point pairs in the overlapping area of the point cloud data sets corresponding to the m-th and n-th view angles, p_mi and p_ni denote the data points of the i-th homonymous point pair in the point cloud data sets of the m-th and n-th view angles respectively, R_m and t_m denote the rotation matrix and translation vector that convert the point cloud data set of the m-th view angle into a reference coordinate system, and R_n and t_n denote the rotation matrix and translation vector that convert the point cloud data set of the n-th view angle into the reference coordinate system, the reference coordinate system being the coordinate system of the calibration object.
Optionally, the matching module is specifically configured to:
For any one of the point cloud pairs, the following operations are performed:
Determining a distance between the first data point and the second data point, a first angle between a normal vector of the first data point and a direction vector of the first data point and the second data point, a second angle between a normal vector of the second data point and a direction vector of the first data point and the second data point, and a third angle between a normal vector of the first data point and a normal vector of the second data point;
And determining the feature vector of the point cloud pair according to the distance, the first included angle, the second included angle and the third included angle.
Optionally, an overlapping area of the calibration object exists in the depth images acquired by each pair of cameras.
Optionally, the calibration object has at least the following characteristics:
the calibration object is provided with a plurality of surfaces, and the characteristics of different surfaces are different;
the area occupied by the calibration object in each depth image is larger than a set threshold value;
the material of the calibration object is a rigid material;
and each surface of the calibration object is subjected to diffuse reflection treatment.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, where computer executable instructions are stored, where the computer executable instructions are configured to cause a computer to perform a method for joint calibration of a plurality of 3D cameras provided by the embodiment of the present application.
In the embodiment of the application, a plurality of 3D cameras acquire depth images of a calibration object from different view angles, so the depth images acquired at the different view angles are not identical. For the point cloud data set extracted from each depth image, the feature vectors of its point cloud pairs are matched with the feature vectors of the point cloud pairs in the point cloud data set of the pre-stored three-dimensional model, a plurality of transformation matrixes are determined, and a target transformation matrix of the point cloud data set is obtained after clustering, which gives the 6D pose of the camera coordinate system at the corresponding view angle relative to the calibration object coordinate system. Further, the calibration matrixes between pairs of cameras in the plurality of 3D cameras can be determined based on the target transformation matrixes of the camera coordinate systems at the respective view angles relative to the calibration object coordinate system, thereby realizing the joint calibration of the plurality of 3D cameras. In this scheme, each camera automatically acquires depth images of the calibration object at different view angles, without manual participation; and the conversion relations (namely, the camera extrinsic parameters) between pairwise camera coordinate systems in the plurality of 3D cameras are calibrated directly by exploiting the characteristics of 3D cameras, which improves the calibration efficiency.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the application, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 schematically illustrates a system architecture diagram provided by an embodiment of the present application;
FIG. 2 is a flow chart illustrating a method for joint calibration of multiple 3D cameras provided by an embodiment of the application;
Fig. 3 schematically illustrates extraction of a feature vector for a point cloud pair according to an embodiment of the present application;
FIG. 4a illustrates a relationship diagram between two-by-two 3D cameras provided by an embodiment of the present application;
FIG. 4b illustrates a relationship diagram between another two-by-two 3D cameras provided by an embodiment of the present application;
FIG. 5 is a diagram schematically illustrating a preliminary calibration result provided by an embodiment of the present application;
FIG. 6 is a flow chart illustrating a local iterative optimization method provided by an embodiment of the present application;
FIG. 7 is a graph schematically illustrating calibration results after local optimization provided by an embodiment of the present application;
FIG. 8 illustrates an overall frame diagram provided by an embodiment of the present application;
FIG. 9 is a hardware configuration diagram of a calibration device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, embodiments and advantages of the present application more apparent, an exemplary embodiment of the present application will be described more fully hereinafter with reference to the accompanying drawings in which exemplary embodiments of the application are shown, it being understood that the exemplary embodiments described are merely some, but not all, of the examples of the application.
Based on the exemplary embodiments described herein, all other embodiments that may be obtained by one of ordinary skill in the art without making any inventive effort are within the scope of the appended claims. Furthermore, while the present disclosure has been described in terms of an exemplary embodiment or embodiments, it should be understood that each aspect of the disclosure can be practiced separately from the other aspects.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this disclosure refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
In order to clearly describe the embodiments of the present application, explanation is given below for terms in the present application.
Camera intrinsic parameters: determined by the hardware of the camera itself and independent of the placement of the camera. They mainly comprise (f, sx, sy, u0, v0, k1, k2, k3, p1, p2), wherein f represents the focal length of the lens; (u0, v0) represents the projection position of the optical axis on the imaging chip, i.e., the coordinates of the optical axis in the pixel coordinate system; (sx, sy) represents the physical size of a single pixel of the camera chip, in pixels/mm; (k1, k2, k3) represents the radial distortion, i.e., the radial imaging errors of the lens caused by lens machining and mounting; and (p1, p2) represents the tangential distortion, i.e., the tangential imaging errors of the lens caused by lens machining and mounting. Calibrating the camera intrinsic parameters mainly completes the distortion correction of the camera, ensuring that the images captured by the camera are not deformed.
Camera extrinsic parameters: include a rotation matrix and a translation vector. The rotation matrix describes the orientation of the coordinate axes of the world coordinate system relative to the coordinate axes of the camera coordinate system, and the translation vector describes the position of the origin of the world coordinate system in the camera coordinate system. Together, the rotation matrix and translation vector describe the transformation relationship between the camera coordinate system and the world coordinate system.
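To make the two parameter sets concrete, the following minimal Python sketch (an illustration, not part of the patent) builds a pinhole intrinsic matrix from the parameters listed above and applies the extrinsic parameters to a world point; the assumption fx = f·sx, fy = f·sy and all function names are illustrative.

```python
import numpy as np

def intrinsic_matrix(f, sx, sy, u0, v0):
    """Pinhole intrinsic matrix built from the parameters listed above,
    assuming fx = f * sx and fy = f * sy (focal length times pixel scale)."""
    return np.array([[f * sx, 0.0,    u0],
                     [0.0,    f * sy, v0],
                     [0.0,    0.0,    1.0]])

def world_to_camera(R, t, p_world):
    """Apply the extrinsic parameters: rotate and translate a 3D world
    point into the camera coordinate system, p_cam = R @ p_world + t."""
    return R @ np.asarray(p_world) + np.asarray(t)
```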
The following is a summary of the concepts of embodiments of the application.
Current imaging devices can be divided into 2D cameras and 3D cameras according to the imaging dimension. A 2D camera uses the pinhole imaging principle to project objects in real space onto the imaging chip, thereby obtaining pixel data of the real object in an image. According to the imaging principle, 3D cameras can be classified into binocular cameras, structured light cameras and Time-of-Flight (TOF) cameras; all of them acquire three-dimensional data of real space with their own camera coordinate system as the coordinate origin.
In a system composed of a plurality of cameras, whether they are 2D cameras or 3D cameras, the intrinsic and extrinsic parameters are mostly determined with traditional calibration algorithms, such as the Zhang Zhengyou calibration method and the Tsai two-step method.
When a traditional calibration algorithm is used to calibrate multiple cameras, a high-precision calibration plate (a black-and-white checkerboard) is usually required, and the calibration plate is moved rigidly, manually or automatically, through the imaging areas of the multi-camera system so that it appears in the common field of view of every two cameras. The cameras capture calibration plate images simultaneously, the characteristic corner points are identified in the resulting images, the intrinsic parameters of each lens are calibrated from the identified corner points, and the extrinsic parameters of each camera coordinate system relative to the calibration plate coordinate system are calibrated. Intrinsic calibration mainly determines the focal length, distortion and other lens information, ensuring that the camera can restore the real scene well; extrinsic calibration mainly computes the conversion relation of each camera coordinate system relative to the calibration plate coordinate system. Because the calibration plate appears in the common field of view of two cameras during each rigid motion, the coordinate conversion relationship between the two cameras can be obtained via the shared calibration plate coordinate system, thereby completing the calibration of the multiple cameras.
This calibration-plate-based approach is mainly used when the intrinsic parameters of the multiple cameras are unknown, because without the intrinsic parameters the captured images are obviously deformed by distortion and the like, which in turn degrades the calibration precision of the multi-camera extrinsic parameters. Calibration with a calibration plate requires moving the plate among different cameras and changing view angles, making the calibration process cumbersome; moreover, the corner detection quality depends strongly on the illumination and the position of the calibration plate in the field of view, so corner detection may be imprecise or fail, which affects the calibration result.
In a three-dimensional reconstruction system with sparse view angles, unifying the coordinate systems of multiple cameras is an indispensable step: only after the coordinate systems have been unified can camera data from different view angles be converted into one coordinate system to complete the three-dimensional reconstruction. Obtaining the conversion relations between camera coordinate systems at different view angles, i.e., calibrating the camera extrinsic parameters, is what realizes this unification.
In the calibration-plate-based approach, cameras at different view angles capture the calibration plate in a common area, and the calibration plate in the common field of view serves as the reference coordinate system for multi-camera calibration. However, this approach requires a large common field of view between every two cameras, which greatly restricts the camera layout and makes the approach difficult to use in some scenes. Moreover, the included angle between two cameras cannot be too large, generally no more than 90 degrees; otherwise, the calibration plate is tilted steeply in the field of view, the characteristic corner points are extracted with low precision, and the calibration result is affected. Because of these demanding deployment requirements, multi-camera calibration is one of the more time-consuming tasks in multi-view three-dimensional reconstruction.
To overcome these limitations of calibration-plate-based calibration, the embodiment of the application provides a joint calibration method for multiple 3D cameras that replaces the calibration plate with a polyhedral calibration object and makes full use of two characteristics of 3D cameras (available depth information and known intrinsic parameters). Depth images of the calibration object are acquired at different view angles, a point cloud data set is extracted from each depth image, the point cloud data set extracted at each view angle is feature-matched with the point cloud data set of a pre-stored calibration object model, the 6D pose of each view angle relative to the reference model is calculated, and the conversion relation between each pair of cameras is then calibrated based on these 6D poses. This process realizes automatic calibration of multiple 3D cameras without controlling the calibration object to perform rigid motion. After the preliminary calibration, to ensure the precision of the pairwise calibration, the embodiment of the application registers the point cloud data sets of each pair of cameras with the Iterative Closest Point (ICP) algorithm, improving the local alignment precision; the global accumulated error of the data registration is then optimized, further improving the calibration precision between the 3D cameras.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
FIG. 1 schematically illustrates a system architecture diagram provided by an embodiment of the present application; as shown in fig. 1, a plurality of 3D cameras 100 (including but not limited to 100_1-100_3) form a multi-camera calibration system, the plurality of 3D cameras are respectively disposed at a plurality of viewing angles of the calibration object 200, and the calibration object 200 should occupy more than 1/5 of the field of view of each 3D camera, so as to ensure that the depth image of the calibration object 200 acquired by the 3D cameras at each viewing angle contains enough point cloud data.
Based on the system architecture shown in fig. 1, when the plurality of 3D cameras 100 are jointly calibrated, the coordinate system of the plurality of 3D cameras may be unified with the coordinate system of the calibration object 200 as a reference coordinate system. Specifically, according to the point cloud data collected by each 3D camera and the point cloud data of the calibration object stored in advance, the conversion relation between each camera coordinate system and the calibration object coordinate system is determined, so that each 3D camera coordinate system is integrated under the calibration object coordinate system, and the joint calibration of a plurality of 3D cameras is completed.
In some embodiments, the coordinate system of the plurality of 3D cameras may be further unified under any one 3D camera coordinate system of the plurality of 3D cameras, so when the plurality of 3D cameras are deployed, a common field of view needs to exist between two adjacent cameras, that is, in fig. 1, two adjacent cameras may acquire the same position of the calibration object 200.
It should be noted that fig. 1 is only an example, and the number and type of 3D cameras are not limited by the embodiment of the present application, and may be two binocular cameras, one structured light camera, and one TOF camera, for example.
The calibration object used in the embodiment of the application can be a prefabricated object, and the three-dimensional model of the calibration object is prestored in calibration software. Wherein, the calibration object has at least the following characteristics:
(1) The calibration object has a plurality of surfaces. To ensure that the point cloud data collected at each view angle matches the point cloud data of the calibration object well, the surfaces of the calibration object vary smoothly, and the shape characteristics of the different surfaces differ, so that the calibration object looks clearly different to the cameras at different view angles, which prevents mismatching;
(2) In order to ensure that the cameras at each view angle can acquire enough point cloud data, the area occupied by the calibration object in each depth image is larger than a set threshold value, that is, the area of the calibration object in each camera view is larger than the set threshold value;
(3) The material of the calibration object is a rigid material, and the surface of the calibration object cannot be deformed;
(4) The surface of the calibration object is subjected to diffuse reflection processing, so that cameras at all visual angles cannot reflect light or absorb light due to mirror surfaces, and an effective depth image cannot be obtained.
Based on the calibration object, fig. 2 illustrates a flowchart of a method for jointly calibrating a plurality of 3D cameras provided by the embodiment of the present application, and as shown in fig. 2, the flowchart is executed by a calibration device, where calibration software is installed in the calibration device, so as to implement the calibration method in the embodiment of the present application, and the method mainly includes the following steps:
S201: and acquiring depth images of the calibration objects respectively acquired by the 3D cameras under different visual angles, and respectively extracting point cloud data sets corresponding to the depth images.
In S201, since the angles of view at which the respective 3D cameras are located are different, the shapes of the photographed calibration objects are different, and thus, the point cloud data sets extracted in each depth image are different.
Because camera calibration computes the conversion relations between the coordinate systems of the 3D cameras, a common field of view must exist between each pair of 3D cameras so that the conversion can be performed with the coordinate system of the calibration object in the common field of view as a reference. Therefore, an overlapping area of the calibration object exists in the depth images acquired by each pair of cameras among the plurality of 3D cameras.
Optionally, to ensure that an overlapping area exists between the two cameras of a pair, each pair may consist of two adjacent cameras.
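As an illustration of S201, the following Python sketch back-projects a depth image into a point cloud in the camera's own coordinate system using the 3D camera's known intrinsics; the function name, the pinhole model and the depth scale are assumptions for illustration, not details fixed by the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a depth image (H x W, raw depth units) into a point
    cloud expressed in the 3D camera's own coordinate system.

    fx, fy, cx, cy are the camera intrinsics reported by the 3D camera;
    depth_scale converts raw depth units (e.g. millimetres) to metres.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) * depth_scale
    valid = z > 0                                   # drop pixels with no depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    return points                                   # (N, 3) array
```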
S202: for any one of the extracted point cloud data sets, feature vectors of all the point cloud pairs in one point cloud data set are respectively matched with feature vectors of all the point cloud pairs in the point cloud data set of the three-dimensional model of the calibration object stored in advance, a plurality of transformation matrixes are determined, and clustering is carried out on the plurality of transformation matrixes to obtain a target transformation matrix of the point cloud data set.
Commonly used three-dimensional point cloud matching methods currently include: matching based on Point Pair Features (PPF), matching based on the iterative closest point, and matching based on deep learning. The PPF-based matching method and the iterative-closest-point-based matching method can be adopted here.
In the embodiment of the application, aiming at the calibration system of the multi-3D camera, a special calibration object is designed, different surfaces of the calibration object have obvious shape characteristics, and the initial pose of each point cloud data set is unknown, so that a simpler PPF matching method can be adopted when S202 is executed.
It should be noted that the PPF matching algorithm is only an example, and a point feature histogram (Point Feature Histograms, PFH) algorithm, a fast point feature histogram (Fast Point Feature Histograms, FPFH) algorithm, and the like may also be employed.
For clarity of description, the point cloud pairs in the point cloud data set extracted from each depth image are referred to as first point cloud pairs, and the point cloud pairs in the point cloud data set of the pre-stored three-dimensional model are referred to as second point cloud pairs.
In the embodiment of the application, data points in the point cloud data set extracted from each depth image represent three-dimensional information of a calibration object in real space, and the three-dimensional information is referenced by a 3D camera coordinate system of a corresponding visual angle. When executing S202, feature matching is performed on each first point cloud pair and each second point cloud pair, so that three-dimensional information of the point cloud data sets under different camera coordinate systems can be unified into one coordinate system.
Specifically, using the PPF matching method, each point cloud data set extracted from a depth image is feature-matched with the point cloud data set of the three-dimensional model of the calibration object stored in the calibration software, yielding the conversion relation of the camera coordinate system at each view angle relative to the calibration object coordinate system, recorded as W = H × W', where W is the calibration object coordinate system, W' is the camera coordinate system of a 3D camera, and H is the conversion relation from that camera coordinate system to the calibration object coordinate system. Since the calibration object does not perform rigid motion in real space, W is the same for all 3D cameras. With this matching scheme, the conversion relation from each camera coordinate system to the calibration object coordinate system can be calculated. Through the conversion relations H, the conversion relation between any two camera coordinate systems can be obtained and the multiple camera coordinate systems can be aligned, thereby completing the calibration of the multiple 3D cameras.
When S202 is executed, for each point cloud data set, taking the feature vector of any first point cloud pair (m1, m2) in the point cloud data set Ri as an example, and as shown in fig. 3, the extraction specifically includes:
First, determining the distance d = |m1 - m2| between the first data point m1 and the second data point m2, the first included angle α1 between the normal vector n1 of the first data point and the direction vector from the first data point to the second data point, the second included angle α2 between the normal vector n2 of the second data point and the direction vector from the first data point to the second data point, and the third included angle α3 between the normal vector n1 of the first data point and the normal vector n2 of the second data point;
Then, determining the feature vector of the first point cloud pair (m1, m2) from the distance d, the first included angle α1, the second included angle α2 and the third included angle α3; the feature vector is expressed as:
f(m1, m2) = (d, α1, α2, α3)   (Formula 1)
For the three-dimensional model of the calibration object stored in advance in the calibration software, the feature vector of each second point cloud pair in the point cloud data set of the three-dimensional model is extracted in the same way, which is not repeated here.
As can be seen from the representation of the point cloud pair feature vector in Formula 1, the distance between the two data points and the angles involving their normal vectors do not change when the calibration object undergoes rigid transformation; these features are rotation- and translation-invariant and can therefore be used for feature matching.
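A minimal Python sketch of the Formula 1 feature for a single point cloud pair might look as follows; the function name and the assumption that the normal vectors are unit vectors are illustrative.

```python
import numpy as np

def ppf_feature(m1, n1, m2, n2):
    """Point pair feature of Formula 1: the distance between the two data
    points plus the three angles built from their normal vectors.

    m1, m2 are 3D points; n1, n2 their (unit) normal vectors.
    """
    d_vec = m2 - m1
    d = np.linalg.norm(d_vec)
    if d < 1e-9:
        return None                       # degenerate pair, skip it
    d_dir = d_vec / d
    alpha1 = np.arccos(np.clip(np.dot(n1, d_dir), -1.0, 1.0))
    alpha2 = np.arccos(np.clip(np.dot(n2, d_dir), -1.0, 1.0))
    alpha3 = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
    return np.array([d, alpha1, alpha2, alpha3])
```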
After extracting the feature vector of each first point cloud pair and the feature vector of each second point cloud pair in the point cloud data set Ri, matching the feature vector of each first point cloud pair with the feature vector of each second point cloud pair, wherein each first point cloud pair and the corresponding second point cloud pair can determine a transformation matrix, so that a plurality of transformation matrices can be determined after matching, and further, clustering the determined transformation matrices to obtain a target transformation matrix of the point cloud data set Ri, wherein the target transformation matrix is used for representing the conversion relation from a camera coordinate system under a corresponding view angle to a calibration object coordinate system.
The embodiment of the application does not have limiting requirements on the clustering algorithm, and comprises but is not limited to a k-means clustering algorithm, a mean shift clustering algorithm, a DBSCAN density clustering algorithm, a Gaussian mixture model clustering algorithm and a hierarchical clustering algorithm.
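As one possible illustration of the clustering step, the following Python sketch greedily groups candidate 4x4 rigid transforms by rotation and translation proximity and averages the largest group to obtain the target transformation matrix; the greedy strategy, the thresholds and the use of SciPy's Rotation utilities are assumptions, since the patent leaves the clustering algorithm open.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def cluster_transforms(transforms, rot_thresh_deg=10.0, trans_thresh=0.02):
    """Greedy clustering of candidate 4x4 rigid transforms: a candidate joins
    a cluster if its rotation and translation are close to the cluster's first
    member; the averaged transform of the largest cluster is returned."""
    clusters = []                                    # each cluster is a list of indices
    for i, T in enumerate(transforms):
        placed = False
        for cluster in clusters:
            T0 = transforms[cluster[0]]
            dR = Rotation.from_matrix(T0[:3, :3].T @ T[:3, :3])
            ang = np.degrees(dR.magnitude())         # rotation difference in degrees
            dt = np.linalg.norm(T0[:3, 3] - T[:3, 3])
            if ang < rot_thresh_deg and dt < trans_thresh:
                cluster.append(i)
                placed = True
                break
        if not placed:
            clusters.append([i])
    best = max(clusters, key=len)                    # largest cluster wins
    R_mean = Rotation.from_matrix([transforms[i][:3, :3] for i in best]).mean()
    t_mean = np.mean([transforms[i][:3, 3] for i in best], axis=0)
    T_target = np.eye(4)
    T_target[:3, :3] = R_mean.as_matrix()
    T_target[:3, 3] = t_mean
    return T_target
```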
S203: and determining a calibration matrix between two cameras in the plurality of 3D cameras according to the target transformation matrix corresponding to each point cloud data set so as to realize joint calibration of the plurality of 3D cameras.
In S203, the plurality of 3D camera coordinate systems are aligned based on the target transformation matrix representing the conversion relationship from the camera coordinate system under each view angle to the calibration target coordinate system, and calibration of the plurality of 3D cameras is completed.
Take a first camera, a second camera and a third camera among the plurality of 3D cameras as an example, where a common field of view exists among the three cameras and every two of them can capture the same position of the calibration object, i.e., an overlapping calibration object area exists in the depth images acquired by every two cameras.
For example, as shown in fig. 4a, the first camera and the second camera can capture the same position of the calibration object, the second camera and the third camera can capture the same position of the calibration object, and the first camera and the third camera can capture the same position of the calibration object; the camera pairs are then (first camera, second camera), (first camera, third camera) and (second camera, third camera).
Assume that the conversion relationship (target transformation matrix) of the first camera coordinate system relative to the calibration object coordinate system is W = H1 × W1', that of the second camera coordinate system is W = H2 × W2', and that of the third camera coordinate system is W = H3 × W3'. Since the calibration object does not move, the conversion relationship (calibration matrix) between the first and second camera coordinate systems is W1' = H1^(-1) × H2 × W2', that between the second and third camera coordinate systems is W2' = H2^(-1) × H3 × W3', and that between the first and third camera coordinate systems is W1' = H1^(-1) × H3 × W3'.
In some embodiments, when the plurality of 3D cameras are actually deployed, a common field of view may not be guaranteed between every two cameras. For example, as shown in fig. 4b, the first camera and the third camera cannot capture the same position of the calibration object. Therefore, to guarantee that each camera pair captures the same position of the calibration object and that the common field of view is large, the camera pairs may be pairs of adjacent cameras, specifically (first camera, second camera) and (second camera, third camera). In this case, since the third and second camera coordinate systems can be converted into each other, and the second and first camera coordinate systems can also be converted into each other, the conversion relationship between the third and first camera coordinate systems can still be determined, and the unification of the three coordinate systems can be realized.
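The composition of pairwise calibration matrices from the per-camera target transformation matrices can be sketched as follows (illustrative Python, homogeneous 4x4 matrices assumed):

```python
import numpy as np

def pairwise_calibration(H_a, H_b):
    """Given target transformation matrices H_a, H_b (camera -> calibration
    object coordinate system) of two cameras with a common field of view,
    return the calibration matrix that maps camera-b coordinates into
    camera-a coordinates: T_ab = inv(H_a) @ H_b."""
    return np.linalg.inv(H_a) @ H_b

# Chaining adjacent pairs when two cameras share no common field of view,
# e.g. camera1 <- camera2 <- camera3:
# T_13 = pairwise_calibration(H1, H2) @ pairwise_calibration(H2, H3)
```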
The result of the pairwise joint calibration is shown in fig. 5: part (a) shows the point cloud data set of the pre-stored three-dimensional model of the calibration object, and part (b) shows the point cloud data sets of the calibration object in the real scene acquired by the pairwise cameras after their coordinate systems have been aligned through PPF matching.
In some embodiments, because the imaging accuracy of a typical 3D camera is not high, the camera coordinate systems are only preliminarily aligned after PPF matching is completed, and the alignment precision is not sufficient. Therefore, the preliminary alignment result can be used as an initial pose, and the calibration matrix of each camera pair, i.e., the conversion relation between the two camera coordinate systems, can be further optimized. The conversion between the two camera coordinate systems is realized through rotation and translation, i.e., the calibration matrix comprises a rotation matrix R and a translation vector t.
In an alternative embodiment, according to the distances between the data points in the point cloud data sets corresponding to a pair of cameras, point cloud matching is performed with the ICP algorithm and the calibration matrix between the pair of cameras is updated, realizing local optimization of the pairwise calibration matrix.
The basic principle of the ICP algorithm is as follows: according to a set constraint condition, nearest point pairs (p_i, q_i) are searched between a first point cloud data set P to be matched and a second point cloud data set Q, and the optimal parameters R and t are calculated based on these nearest point pairs so as to minimize a pose error function, where the pose error function f(R, t) is as follows:
f(R, t) = (1/n) Σ_{i=1..n} || q_i - (R·p_i + t) ||²   (Formula 2)
where n is the number of nearest point pairs in the two point cloud data sets, p_i is the i-th data point (denoted as a source data point) in the first point cloud data set, q_i is the i-th data point (denoted as a target data point) in the second point cloud data set, R is the rotation matrix, and t is the translation vector.
In specific implementation, according to the first point cloud data set P corresponding to the first camera and the second point cloud data set Q corresponding to the second camera, the initial pose error function is adjusted in an iterative manner until a preset convergence condition is met, a target pose error function is obtained, and a target pose matrix determined by the target pose error function is used as an updated calibration matrix, and referring to fig. 6, each iteration process is as follows:
S2041: and selecting the nearest target data point from the second point cloud data set for each source data point in the first point cloud data set.
In S2041, for each source data point p_i (p_i ∈ P) in the first point cloud data set P, the target data point q_i (q_i ∈ Q) closest to it in the second point cloud data set Q is determined, resulting in a plurality of nearest neighbor point pairs (p_i, q_i), where the distance between the points of each nearest neighbor pair is less than a set threshold.
S2042: and determining a pose matrix according to each source data point and each corresponding target data point by adopting an initial pose error function.
In S2042, the rotation matrix R and the translation vector t determined by the PPF matching are taken as the initial pose matrix, and the initial pose matrix together with the selected nearest neighbor point pairs (p_i, q_i) is substituted into Formula 2 to determine an intermediate pose matrix.
S2043: and updating each source data point in the first point cloud data set according to the determined pose matrix, and determining the average distance between each updated source data point and each target data point.
In S2043, based on the determined intermediate pose matrix, each source data point p_i in the first point cloud data set P is rotated and translated to obtain an updated source data point p'_i, where the transformation formula is:
p'_i = R × p_i + t   (Formula 3)
After the updated source data points p'_i are obtained, the average distance between the updated source data points p'_i and the target data points q_i is calculated as follows:
d_avg = (1/n) Σ_{i=1..n} || p'_i - q_i ||
s2044: and determining a target pose error function according to the average distance.
In S2044, an average distance is determinedComparing with the set distance threshold, if/>And if the position and orientation error function is smaller than the set distance threshold, the preset convergence condition is met, the corresponding position and orientation error function is minimum at the moment and can be used as the target position and orientation error function, otherwise, the method returns to S2041, and the position and orientation matrix is redetermined.
In some embodiments, when the maximum number of iterations is reached, the minimum pose error function is taken as the target pose error function, and the rotation matrix R and the translation vector t corresponding to the minimum pose error function are taken as the updated calibration matrix.
In other embodiments, when the number of the selected nearest neighbor pairs (p_i, q_i) no longer increases, the minimum pose error function is used as the target pose error function, and the rotation matrix R and the translation vector t corresponding to the minimum pose error function are used as the updated calibration matrix.
After the calibration matrix is updated, the alignment result of the pairwise camera coordinate systems is shown in fig. 7: (a) is the preliminary alignment of the two camera coordinate systems after PPF matching, and (b) is the optimized alignment based on ICP matching. Comparing (a) and (b) shows that the calibration error between the two camera coordinate systems is smaller after optimization.
In summary, for each source data point in a point cloud data set, target data points meeting the distance requirement are searched to obtain the nearest neighbor point pairs, and the rotation matrix and translation vector are optimized based on these pairs. Through continuous iteration, the pose error function, i.e., the distance between nearest neighbor point pairs, is minimized, so that the relative position relation between the two point cloud data sets is solved and the optimization of the calibration matrix is completed.
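A compact Python sketch of this ICP refinement, following steps S2041-S2044, is given below; the SVD-based pose estimator, the KD-tree nearest-neighbour search and all thresholds are implementation assumptions rather than details fixed by the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) aligning src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp_refine(P, Q, R0, t0, max_iter=50, dist_thresh=0.05, tol=1e-5):
    """Refine an initial pose (R0, t0) that maps the source set P towards the
    target set Q, following steps S2041-S2044: nearest-neighbour pairing,
    pose estimation, point update and average-distance convergence test."""
    R, t = np.asarray(R0, dtype=np.float64), np.asarray(t0, dtype=np.float64)
    tree = cKDTree(Q)
    prev_err = np.inf
    for _ in range(max_iter):
        P_cur = P @ R.T + t                         # current source points
        d, idx = tree.query(P_cur)                  # S2041: nearest target points
        mask = d < dist_thresh                      # keep pairs below the threshold
        if mask.sum() < 3:
            break
        dR, dt = best_rigid_transform(P_cur[mask], Q[idx[mask]])   # S2042
        R, t = dR @ R, dR @ t + dt                  # compose the pose update
        P_new = P @ R.T + t                         # S2043: update source points
        err = np.mean(np.linalg.norm(P_new[mask] - Q[idx[mask]], axis=1))
        if abs(prev_err - err) < tol or err < tol:  # S2044: convergence check
            break
        prev_err = err
    return R, t
```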
In some embodiments, local accumulated errors may remain when the pairwise calibration matrices are determined with the ICP algorithm. In three-dimensional reconstruction, the layout of the 3D cameras usually forms a ring; therefore, after the pairwise calibration matrices have been optimized, loop detection can be performed on the pairwise calibration matrices based on the point cloud data sets in order to reduce the local accumulated errors and improve the overall calibration accuracy of the 3D cameras, reducing the global pose error of the pairwise calibration matrices and ensuring that the calibration error between every two cameras is smaller than a set error threshold and meets the calibration requirement.
In the embodiment of the application, the number of 3D cameras, the number of view angles and the number of extracted point cloud data sets are the same. Assume the number of view angles is K; every two of the K point cloud data sets have a certain overlapping area, and the K point cloud data sets have been unified into the same coordinate system through matching. Based on the K point cloud data sets, the pairwise calibration matrices are then globally optimized. The purpose of global optimization is to determine rotation matrices R_i (i = 1, 2, ..., K) and translation vectors t_i (i = 1, 2, ..., K) that minimize the global pose error e; at that point the Euclidean distances between homonymous point pairs in all point cloud data sets have globally converged, and the corresponding R_i and t_i are the optimal rigid body transformation parameters, i.e., the overall calibration precision of the 3D cameras is highest.
The global pose error e is formulated as follows:
e = Σ_{m=1..K} Σ_{n=1..K} Σ_{i=1..N_mn} || (R_m·p_mi + t_m) - (R_n·p_ni + t_n) ||²
wherein m and n denote the m-th and n-th view angles respectively, K denotes the number of view angles, N_mn denotes the number of homonymous point pairs in the overlapping area of the point cloud data sets corresponding to the m-th and n-th view angles, p_mi and p_ni denote the data points of the i-th homonymous point pair in the point cloud data sets of the m-th and n-th view angles respectively, R_m and t_m denote the rotation matrix and translation vector that convert the point cloud data set of the m-th view angle into a reference coordinate system, and R_n and t_n denote the rotation matrix and translation vector that convert the point cloud data set of the n-th view angle into the reference coordinate system, the reference coordinate system being the coordinate system of the calibration object. A homonymous point pair is a pair of data points from two point cloud data sets whose distance is smaller than a set threshold; intuitively, the two points correspond to the same position of the calibration object in three-dimensional space captured by two 3D cameras.
In the embodiment of the application, after the coordinate systems of the plurality of 3D cameras are integrated into one coordinate system, the calibration results of the 3D cameras are globally optimized based on the point cloud data sets collected under each view angle, so that the local accumulated error between two cameras is reduced, and the overall calibration precision of each 3D camera is improved.
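For illustration, the global pose error e of the formula above could be evaluated as in the following Python sketch; the data layout (a dictionary of homonymous point pairs per view-angle pair) and the function name are assumptions.

```python
import numpy as np

def global_pose_error(correspondences, Rs, ts):
    """Evaluate the global pose error e of the formula above.

    correspondences: dict mapping a view-angle pair (m, n) to two (N_mn, 3)
    arrays of homonymous points (p_m, p_n); Rs, ts: per-view rotation
    matrices and translation vectors into the reference (calibration object)
    coordinate system."""
    e = 0.0
    for (m, n), (p_m, p_n) in correspondences.items():
        q_m = p_m @ Rs[m].T + ts[m]      # m-th view points in the reference frame
        q_n = p_n @ Rs[n].T + ts[n]      # n-th view points in the reference frame
        e += np.sum(np.linalg.norm(q_m - q_n, axis=1) ** 2)
    return e
```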
Assume that there are four 3D cameras in the system, each located at a different view angle, and that two adjacent cameras can capture the same position of the calibration object. Based on the above embodiments, fig. 8 illustrates an overall framework diagram provided by an embodiment of the present application.
As shown in fig. 8, the framework comprises an acquisition module, a matching module, a local optimization module and a global optimization module. The acquisition module acquires depth images of the calibration object from all view angles and extracts a point cloud data set from each depth image. The matching module loads the pre-stored three-dimensional model of the calibration object from the calibration software, feature-matches each extracted point cloud data set with the point cloud data set of the three-dimensional model using the PPF algorithm, determines the target transformation matrix converting each camera coordinate system to the calibration object coordinate system, unifies the camera 2, camera 3 and camera 4 coordinate systems into the camera 1 coordinate system based on the pairwise target transformation matrices, and thereby provides initial pose matrices for the local optimization module. The local optimization module locally optimizes the pairwise calibration matrices with the ICP algorithm, using the overlapping parts of the point cloud data sets captured by each pair of cameras. The global optimization module performs loop detection on the pairwise calibration matrices based on the point cloud data sets acquired by the 3D cameras, optimizes the global pose error of the pairwise calibration matrices, reduces the accumulated error of the local optimization, and improves the overall calibration accuracy.
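Purely as an illustration of how the four modules could be chained, the sketch below reuses the hypothetical helper functions from the earlier sketches (depth_to_point_cloud, cluster_transforms, pairwise_calibration, icp_refine); the function ppf_match is likewise assumed and not defined here.

```python
import numpy as np

def calibrate(depth_images, intrinsics, model_cloud):
    # Acquisition module: one point cloud data set per view angle
    clouds = [depth_to_point_cloud(d, *k) for d, k in zip(depth_images, intrinsics)]
    # Matching module: PPF matching against the stored model gives candidate
    # transforms per view (ppf_match is assumed); clustering yields the
    # target transformation matrix for each camera coordinate system
    H = [cluster_transforms(ppf_match(c, model_cloud)) for c in clouds]
    # Pairwise calibration of adjacent cameras, then local optimization with ICP
    calib = {}
    for i in range(len(clouds) - 1):
        T0 = pairwise_calibration(H[i], H[i + 1])
        R, t = icp_refine(clouds[i + 1], clouds[i], T0[:3, :3], T0[:3, 3])
        calib[(i, i + 1)] = (R, t)
    # Global optimization module: loop detection / minimization of the global
    # pose error over all views would follow here (omitted in this sketch).
    return calib
```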
Based on the same technical idea, an embodiment of the present application provides a calibration device, see fig. 9, which comprises a processor 901, a memory 902 and at least one external communication interface 903; the processor 901, the memory 902, and the external communication interface 903 are all connected via a bus 904.
The memory 902 stores a computer program; when the processor 901 executes the computer program, the above method for jointly calibrating a plurality of 3D cameras is carried out, and the same technical effects can be achieved.
There may be one or more processors 901, and the processor 901 and the memory 902 may be coupled or arranged independently of each other.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by program instructions and associated hardware; the above computer program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes any medium that can store program code, such as a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Based on the same technical concept, an embodiment of the present application also provides a computer-readable storage medium storing computer instructions that, when executed on a computer, cause the computer to perform a method for jointly calibrating a plurality of 3D cameras as previously discussed.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (8)

1. A method for joint calibration of a plurality of 3D cameras, comprising:
acquiring depth images of a calibration object respectively acquired by a plurality of 3D cameras under different visual angles, and respectively extracting point cloud data sets respectively corresponding to the depth images;
for any one of the extracted point cloud data sets, matching the feature vectors of the point cloud pairs in that point cloud data set with the feature vectors of the point cloud pairs in the point cloud data set of the three-dimensional model of the calibration object, determining a plurality of transformation matrices, and clustering the plurality of transformation matrices to obtain a target transformation matrix of that point cloud data set;
Determining a calibration matrix between two cameras in the plurality of 3D cameras according to the target transformation matrix corresponding to each point cloud data set so as to realize joint calibration of the plurality of 3D cameras;
For a first camera and a second camera in any two cameras in the plurality of 3D cameras, respectively performing the following operations: according to the first point cloud data set corresponding to the first camera and the second point cloud data set corresponding to the second camera, adopting an iterative mode to adjust an initial pose error function until a preset convergence condition is met, obtaining a target pose error function, and taking a target pose matrix determined by the target pose error function as an updated calibration matrix; wherein, each round of iterative process includes:
selecting target data points closest to each source data point in the first point cloud data set from the second point cloud data set;
Determining a pose matrix according to the source data points and the corresponding target data points by adopting an initial pose error function;
Updating each source data point in the first point cloud data set according to the determined pose matrix, and determining the average distance between each updated source data point and each target data point;
And determining a target pose error function according to the average distance.
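The iterative refinement recited in claim 1 follows the classical ICP scheme. A minimal numpy/scipy sketch of those steps is given below; the function name, the convergence threshold and the SVD-based solver are illustrative assumptions, not part of the claim:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, max_iters=50, tol=1e-6):
    """Refine a pairwise calibration by iterating the steps of claim 1.

    source : (N, 3) point cloud of the first camera (coarsely aligned already).
    target : (M, 3) point cloud of the second camera.
    Returns the accumulated 4x4 pose matrix and the final average distance.
    """
    tree = cKDTree(target)
    pose = np.eye(4)
    prev_err = np.inf
    for _ in range(max_iters):
        # Step 1: closest target data point for every source data point.
        _, idx = tree.query(source)
        matched = target[idx]
        # Step 2: rigid transform minimizing the squared point-to-point
        # error (Kabsch/SVD solution of the pose error function).
        mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
        H = (source - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # Step 3: update the source cloud and accumulate the pose matrix.
        source = source @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        pose = step @ pose
        # Step 4: average distance between the updated source points and
        # their matched target points is the convergence criterion.
        err = np.linalg.norm(source - matched, axis=1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return pose, err
```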
2. The method of claim 1, wherein the method further comprises:
and performing loop detection on the calibration matrix between every two cameras in the plurality of 3D cameras based on each point cloud data set so as to reduce the global pose error of the calibration matrix between every two cameras.
3. The method of claim 2, wherein the global pose error is formulated as follows:
Wherein m and n respectively denote the point cloud data sets corresponding to the m-th and n-th viewing angles, K denotes the number of viewing angles, N_mn denotes the number of homonymous point pairs in the overlapping area of the point cloud data sets corresponding to the m-th and n-th viewing angles, P_mi and P_ni respectively denote the data points of the i-th homonymous point pair in the point cloud data sets of the m-th and n-th viewing angles, R_m and t_m respectively denote the rotation matrix and translation vector for converting the point cloud data set of the m-th viewing angle into a reference coordinate system, and R_n and t_n respectively denote the rotation matrix and translation vector for converting the point cloud data set of the n-th viewing angle into the reference coordinate system, wherein the reference coordinate system is the coordinate system of the calibration object.
4. The method of claim 1, wherein each point cloud pair comprises a first data point and a second data point, the feature vector of each point cloud pair in the point cloud data set being extracted by:
For any one of the point cloud pairs, the following operations are performed:
Determining a distance between the first data point and the second data point, a first angle between a normal vector of the first data point and a direction vector of the first data point and the second data point, a second angle between a normal vector of the second data point and a direction vector of the first data point and the second data point, and a third angle between a normal vector of the first data point and a normal vector of the second data point;
And determining the feature vector of the point cloud pair according to the distance, the first included angle, the second included angle and the third included angle.
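The four-component point-pair feature described in claim 4 can be sketched as follows; the angle convention and the assumption that the normals are unit vectors are illustrative, not dictated by the claim:

```python
import numpy as np

def ppf_feature(p1, n1, p2, n2):
    """Point-pair feature of claim 4: the distance between the two data
    points and the three angles involving their normals and the
    direction vector between them. Normals n1, n2 are assumed unit-length."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_unit = d / dist

    def angle(u, v):
        return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

    return np.array([dist,
                     angle(n1, d_unit),   # first included angle
                     angle(n2, d_unit),   # second included angle
                     angle(n1, n2)])      # third included angle
```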
5. The method of any of claims 2-4, wherein the depth images acquired by every two cameras contain an overlapping region of the calibration object.
6. The method according to any of claims 1-4, wherein the calibration object is provided with at least the following features:
the calibration object is provided with a plurality of surfaces, and the characteristics of different surfaces are different;
the area occupied by the calibration object in each depth image is larger than a set threshold value;
the material of the calibration object is a rigid material;
and each surface of the calibration object is subjected to diffuse reflection treatment.
7. The calibration device is characterized by comprising a processor, a memory and at least one external communication interface, wherein the processor, the memory and the external communication interface are all connected through a bus;
The external communication interface is configured to receive the depth images of the calibration object acquired by the plurality of 3D cameras at different viewing angles;
The memory has stored therein a computer program, the processor being configured to perform the following operations based on the computer program:
acquiring depth images of a calibration object respectively acquired by a plurality of 3D cameras under different visual angles, and respectively extracting point cloud data sets respectively corresponding to the depth images;
for any one of the extracted point cloud data sets, matching the feature vectors of the point cloud pairs in that point cloud data set with the feature vectors of the point cloud pairs in the point cloud data set of the three-dimensional model of the calibration object, determining a plurality of transformation matrices, and clustering the plurality of transformation matrices to obtain a target transformation matrix of that point cloud data set;
Determining a calibration matrix between two cameras in the plurality of 3D cameras according to the target transformation matrix corresponding to each point cloud data set so as to realize joint calibration of the plurality of 3D cameras;
For a first camera and a second camera in any two cameras in the plurality of 3D cameras, respectively performing the following operations: according to the first point cloud data set corresponding to the first camera and the second point cloud data set corresponding to the second camera, adopting an iterative mode to adjust an initial pose error function until a preset convergence condition is met, obtaining a target pose error function, and taking a target pose matrix determined by the target pose error function as an updated calibration matrix; wherein, each round of iterative process includes:
selecting target data points closest to each source data point in the first point cloud data set from the second point cloud data set;
Determining a pose matrix according to the source data points and the corresponding target data points by adopting an initial pose error function;
Updating each source data point in the first point cloud data set according to the determined pose matrix, and determining the average distance between each updated source data point and each target data point;
And determining a target pose error function according to the average distance.
8. The calibration device of claim 7, wherein the processor is further configured to:
and performing loop detection on the calibration matrix between every two cameras in the plurality of 3D cameras based on each point cloud data set so as to reduce the global pose error of the calibration matrix between every two cameras.
CN202111091893.1A 2021-09-17 2021-09-17 Method and equipment for jointly calibrating multiple 3D cameras Active CN113870358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111091893.1A CN113870358B (en) 2021-09-17 2021-09-17 Method and equipment for jointly calibrating multiple 3D cameras

Publications (2)

Publication Number Publication Date
CN113870358A CN113870358A (en) 2021-12-31
CN113870358B (en) 2024-05-24

Family

ID=78996412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111091893.1A Active CN113870358B (en) 2021-09-17 2021-09-17 Method and equipment for jointly calibrating multiple 3D cameras

Country Status (1)

Country Link
CN (1) CN113870358B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782438B (en) * 2022-06-20 2022-09-16 深圳市信润富联数字科技有限公司 Object point cloud correction method and device, electronic equipment and storage medium
CN116758160B (en) * 2023-06-20 2024-04-26 哈尔滨工业大学 Method for detecting pose of optical element assembly process based on orthogonal vision system and assembly method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800103A (en) * 2012-06-18 2012-11-28 清华大学 Unmarked motion capturing method and device based on multi-visual angle depth camera
CN108447097A (en) * 2018-03-05 2018-08-24 清华-伯克利深圳学院筹备办公室 Depth camera scaling method, device, electronic equipment and storage medium
CN110163968A (en) * 2019-05-28 2019-08-23 山东大学 RGBD camera large-scale three dimensional scenario building method and system
CN112001926A (en) * 2020-07-04 2020-11-27 西安电子科技大学 RGBD multi-camera calibration method and system based on multi-dimensional semantic mapping and application
CN112381887A (en) * 2020-11-17 2021-02-19 广东电科院能源技术有限责任公司 Multi-depth camera calibration method, device, equipment and medium
CN112396663A (en) * 2020-11-17 2021-02-23 广东电科院能源技术有限责任公司 Visual calibration method, device, equipment and medium for multi-depth camera
CN113205560A (en) * 2021-05-06 2021-08-03 Oppo广东移动通信有限公司 Calibration method, device and equipment of multi-depth camera and storage medium
CN113345010A (en) * 2021-06-01 2021-09-03 北京理工大学 Multi-Kinect system coordinate calibration and conversion method based on improved ICP
CN113362445A (en) * 2021-05-25 2021-09-07 上海奥视达智能科技有限公司 Method and device for reconstructing object based on point cloud data

Also Published As

Publication number Publication date
CN113870358A (en) 2021-12-31

Similar Documents

Publication Publication Date Title
CN112461230B (en) Robot repositioning method, apparatus, robot, and readable storage medium
CN113870358B (en) Method and equipment for jointly calibrating multiple 3D cameras
EP2236980B1 (en) A method for determining the relative position of a first and a second imaging device and devices therefore
CN102834845B (en) The method and apparatus calibrated for many camera heads
CN111553939B (en) Image registration algorithm of multi-view camera
CN109754459B (en) Method and system for constructing human body three-dimensional model
CN103945210A (en) Multi-camera photographing method for realizing shallow depth of field effect
CN111951201B (en) Unmanned aerial vehicle aerial image splicing method, device and storage medium
CN101853524A (en) Method for generating corn ear panoramic image by using image sequence
CN111107337B (en) Depth information complementing method and device, monitoring system and storage medium
CN107809610B (en) Camera parameter set calculation device, camera parameter set calculation method, and recording medium
JP2021520008A (en) Vehicle inspection system and its method
WO2021005977A1 (en) Three-dimensional model generation method and three-dimensional model generation device
KR101983586B1 (en) Method of stitching depth maps for stereo images
US9245375B2 (en) Active lighting for stereo reconstruction of edges
CN109949354B (en) Light field depth information estimation method based on full convolution neural network
CN115035235A (en) Three-dimensional reconstruction method and device
CN113793266A (en) Multi-view machine vision image splicing method, system and storage medium
CN114494462A (en) Binocular camera ranging method based on Yolov5 and improved tracking algorithm
CN115682981A (en) Three-dimensional scanning method, device and system applied to microgravity environment
CN111951158B (en) Unmanned aerial vehicle aerial image splicing interruption recovery method, device and storage medium
Zhao et al. Mvpsnet: Fast generalizable multi-view photometric stereo
Yoon et al. Targetless multiple camera-LiDAR extrinsic calibration using object pose estimation
CN116012227A (en) Image processing method, device, storage medium and processor
CN113160389B (en) Image reconstruction method, device and storage medium based on characteristic line matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant