CN116681827A - Defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion - Google Patents

Defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion

Info

Publication number
CN116681827A
CN116681827A (application number CN202310636189.2A)
Authority
CN
China
Prior art keywords
point cloud
camera
coordinate system
dimensional
defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310636189.2A
Other languages
Chinese (zh)
Inventor
屈玉福
王贯宇
杨明
刘亦辰
陈从嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202310636189.2A priority Critical patent/CN116681827A/en
Publication of CN116681827A publication Critical patent/CN116681827A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C1/00 Measuring angles
    • G01C1/02 Theodolites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a defect-free three-dimensional point cloud reconstruction method, and a device thereof, based on multi-monitoring-camera and point cloud fusion, wherein the method comprises the following steps: establishing a cube mirror coordinate system with a theodolite, and measuring targets simultaneously with the multiple monitoring cameras and the theodolite to indirectly obtain the transformation matrices between the monitoring camera coordinate systems and the cube mirror coordinate system, thereby realizing global calibration of the monitoring cameras; shooting pictures of the object with the calibrated monitoring cameras and reconstructing three-dimensional point clouds; and preprocessing the multiple point clouds, realizing coarse registration with the camera global calibration result, realizing fine registration with a traditional point cloud registration algorithm, and fusing the registered point clouds into a defect-free three-dimensional point cloud. The global calibration precision of the cameras is high, and both the coarse and the fine point cloud registration are fast and robust, so the three-dimensional reconstruction precision of the object is greatly improved and a defect-free object point cloud is obtained.

Description

Defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion
Technical Field
The application relates to the field of camera calibration and point cloud processing, in particular to a defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion.
Background
Object point cloud reconstruction is important for studying object surface morphology, topographic analysis, three-dimensional visualization, and similar fields. Traditional object point cloud reconstruction is performed with a single camera or one pair of binocular cameras; because the viewing angle is limited, the reconstructed point cloud has holes and defects. If instead the viewpoint is changed between shots, a point cloud registration algorithm must be used, but most existing coarse registration algorithms lose accuracy on objects with weak features, and fine registration largely depends on the coarse result, so a high-precision, defect-free three-dimensional point cloud of the object is hard to obtain stably.
Compared with moving a single camera to shoot complementary views, three-dimensional reconstruction with multiple monitoring cameras fixed at multi-view positions achieves higher precision. In industry, global calibration of cameras with theodolites and cube mirrors has the advantage of high precision, and is therefore often used to calibrate cameras in aerospace projects. The global calibration does not have to be repeated for every use: one calibration after production and processing, plus periodic recalibration to correct errors, is enough. The cost of realizing the defect-free three-dimensional point cloud reconstruction method with multiple monitoring cameras therefore comes mainly from the cameras themselves, and multi-camera systems are already widely used in many industrial fields.
Disclosure of Invention
The application provides a defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion, which aim to solve the problems of low three-dimensional reconstruction precision and defects of a single camera, have high global calibration precision of the camera, have high speed and high robustness in both coarse and fine point cloud registration, and can greatly improve the three-dimensional reconstruction precision of an object and obtain defect-free object point cloud.
According to some embodiments, the present application employs the following technical solutions:
a defect-free three-dimensional point cloud reconstruction method based on multi-monitoring camera and point cloud fusion comprises the following steps:
establishing a cube coordinate system by using a theodolite;
the target is measured by using a plurality of monitoring cameras and a theodolite at the same time, so that a conversion matrix of a coordinate system of the plurality of monitoring cameras and a coordinate system of a cube mirror is indirectly obtained, and the global calibration of the plurality of monitoring cameras is realized;
shooting an object picture by using the calibrated monitoring camera, and reconstructing a three-dimensional point cloud;
performing point cloud preprocessing on a plurality of point clouds;
the global calibration result of the camera is utilized to realize the rough registration of the point cloud;
the traditional point cloud registration algorithm is used for realizing point cloud fine registration; and
and fusing the registered point clouds into a defect-free three-dimensional point cloud.
Further, the angle of the two ends of a standard rod with known length is measured by using the theodolite, the theodolite and the cube mirror are subjected to auto-collimation, the angle is recorded, the angle of the cross hair on the corresponding surface of the cube mirror is measured, and the transformation matrix of the coordinate system of the theodolite and the coordinate system of the cube mirror is calculated.
Further, shooting targets by using multiple monitoring cameras to obtain internal and external parameters of the cameras, measuring the angle of the corner points of the target parts by using a theodolite, calculating a conversion matrix of a camera coordinate system and the theodolite coordinate system, and further calculating a conversion matrix of the camera coordinate system and a cube coordinate system, wherein the cube coordinate system is used as a world coordinate system.
Further, shooting an object by using a globally calibrated camera, reconstructing a three-dimensional point cloud in a binocular manner by using a parallax principle, and reconstructing the three-dimensional point cloud in a monocular manner by using an MVS method.
Further, the redundant part of the reconstructed point cloud is cut, the point cloud is filtered, and downsampling is performed.
Further, the preprocessed point cloud is subjected to coordinate transformation by utilizing a conversion matrix of a camera globally calibrated by the monitoring camera and a world coordinate system, so that the rough registration of the point cloud is realized.
Further, the AAICP algorithm with high speed and high precision is used for carrying out fine registration on the point cloud after coarse registration.
Further, the registered point clouds are combined into a large point cloud, repeated points are judged to be deleted, and the object three-dimensional point cloud without defects is obtained.
A defect-free three-dimensional point cloud reconstruction device based on multi-surveillance camera and point cloud fusion, comprising:
the theodolite three-dimensional coordinate measuring module is configured to measure any point angle of a space to calculate three-dimensional coordinates of the space;
the camera global calibration module is configured to obtain the internal and external parameters of each monitoring camera and a conversion matrix of a camera coordinate system and a world coordinate system;
the multi-monitoring camera three-dimensional point cloud fusion module is configured to perform point cloud preprocessing, realize point cloud coarse registration according to the result of global calibration of the cameras, perform point cloud fine registration by using a traditional algorithm, and finally fuse a plurality of point clouds into a complete and defect-free object three-dimensional point cloud.
Effects of the application
The method and device shoot from multiple view angles with multiple monitoring cameras, and the theodolite and cube mirror are used for global calibration of the cameras, which has the advantage of high calibration precision, so the three-dimensional point cloud reconstructed by the calibrated cameras is highly accurate. Meanwhile, the camera global calibration result is used for coarse registration of the point clouds, which offers higher precision, speed, and robustness than traditional point cloud registration algorithms. The method uses the Anderson-accelerated ICP fine registration algorithm, which has high registration accuracy, high registration speed, and good robustness.
In the application, the binocular cameras are fixed in position, and the monocular camera is moved by a high-precision mechanical arm. The object can therefore be moved at will: the cameras shoot simultaneously, the images are processed in parallel, and after rapid point cloud registration and fusion the system achieves high real-time performance. The user can move or replace the object at any time and observe its defect-free three-dimensional point cloud in real time on the user interface.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application.
FIG. 1 is a flow chart of a defect-free three-dimensional point cloud reconstruction method based on multi-monitor camera and point cloud fusion according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a three-dimensional coordinate system calculated by measuring angles of any points in space using two theodolites;
FIG. 3 is a schematic diagram of a transformation matrix for solving a theodolite coordinate system and a cube mirror coordinate system using two orthogonal planes of the theodolite and the cube mirror to auto-collimate and measure the corresponding face cross wire angles;
FIG. 4 is a graph of the relationship among the world coordinate system, the camera coordinate system, and the pixel coordinate system for targets when monocular camera parameters are calibrated;
FIG. 5 is a schematic diagram of distortion present when the camera is in use;
FIG. 6 is a flow chart of binocular camera stereo correction;
FIG. 7 is a comparison of (a) before (b) after stereo correction of the target image by the binocular camera;
FIG. 8 is a graph of epipolar geometry for a single-point cloud coordinate solution;
fig. 9 is a schematic diagram of a partitioning principle of KD-tree used in repeating point judgment in point cloud fusion;
fig. 10 is an effect diagram of defect-free three-dimensional point cloud reconstruction of an object using the method provided by the application, taking a stone-and-soil sample as the example.
Detailed Description
The conception, specific structure, and technical effects of the present application are described below clearly and completely with reference to the embodiments and the drawings, so that its objects, features, and effects can be fully understood. The described embodiments are obviously only some, not all, of the embodiments of the present application.
The existing method for reconstructing the three-dimensional point cloud of the object by using the camera has the defects of holes and the like due to the limitation of shooting view angles, or has poor effect on coarse registration of the reconstructed point cloud of the multiple view angles due to the fact that the characteristics of the object are not obvious, so that the high-precision defect-free three-dimensional point cloud of the object is difficult to obtain.
Aiming at the problems, the embodiment of the application measures the angles of two ends of a standard rod with known length by using the theodolite, performs auto-collimation on the theodolite and the cube mirror, records the angle at the moment, measures the angle of the cross hair on the corresponding surface of the cube mirror, and calculates the transformation matrix of the coordinate system of the theodolite and the coordinate system of the cube mirror; then shooting targets by utilizing a plurality of monitoring cameras to obtain internal and external parameters of the cameras, measuring the angle of the corner points of the target parts by utilizing a theodolite, calculating a conversion matrix of a camera coordinate system and the theodolite coordinate system, and further calculating the conversion matrix of the camera coordinate system and a world coordinate system; shooting an object by using a globally calibrated camera, reconstructing a three-dimensional point cloud in a binocular way by using a parallax principle, and reconstructing the three-dimensional point cloud in a monocular way by using an MVS method; cutting the redundant part of the reconstructed point cloud, filtering the point cloud, and downsampling; then, the preprocessed point cloud is subjected to coordinate transformation by utilizing a conversion matrix of a camera globally calibrated by the monitoring camera and a world coordinate system, so that coarse registration of the point cloud is realized; then, the AAICP algorithm with high speed and high precision is used for carrying out fine registration on the point cloud after coarse registration; and finally merging the registered point clouds into a large point cloud, judging repeated points, and deleting to obtain the defect-free object three-dimensional point cloud.
Example 1:
as shown in fig. 1, the defect-free three-dimensional point cloud reconstruction method based on multi-monitoring camera and point cloud fusion provided by the embodiment of the application comprises the following steps:
step 101, measuring angles of two ends of a standard rod with known length by using a theodolite, autocollimating the theodolite and a cube mirror, recording the angles, measuring angles of cross hairs on a corresponding surface of the cube mirror, and calculating a transformation matrix of a coordinate system of the theodolite and a coordinate system of the cube mirror, wherein the transformation matrix is specifically as follows:
(1) Auto-collimate the laser built into each theodolite with a reflecting face of the cube mirror and measure the angle at that moment; then aim the two theodolites at each other so that their emitted lasers are mutually collimated, and measure this angle as the initial angle;
(2) As shown in fig. 2, two theodolites can calculate the three-dimensional coordinates of any point in space from its measured angles, but the horizontal baseline length of the two theodolites must be known first. The theodolites measure the angles at the two ends of a standard rod of known length; substituting these into the three-dimensional coordinate equations and the distance equation $L^2=(x_{T2}-x_{T1})^2+(y_{T2}-y_{T1})^2+(z_{T2}-z_{T1})^2$ solves the theodolite baseline length, after which the three-dimensional coordinates of any point can be solved;
the theodolite baseline length b is obtained from the standard rod as follows: because every triangulated coordinate is proportional to b, the distance between the two rod endpoints equals b multiplied by a function of the measured angles alone, so b follows directly from the known rod length L. Here L is the known length of the standard rod, and $\alpha_{A1}$, $\beta_{A1}$, $\alpha_{A2}$, $\beta_{A2}$ are the horizontal and vertical angles measured to its two end points. (A numerical sketch of this intersection and baseline solve is given after item (4) below.)

After the theodolite baseline length is obtained, the coordinates of any point in space under the theodolite coordinate system follow from intersecting the two sight lines; with $T_1$ at the origin and the baseline along the x-axis, a point observed at horizontal angles $\alpha_1$, $\alpha_2$ and vertical angle $\beta_1$ has

$x = \frac{b\tan\alpha_2}{\tan\alpha_1+\tan\alpha_2},\qquad y = \frac{b\tan\alpha_1\tan\alpha_2}{\tan\alpha_1+\tan\alpha_2},\qquad z = \sqrt{x^2+y^2}\,\tan\beta_1$
(3) As shown in fig. 3, according to the auto-collimation angles of the theodolites on two faces of the cube mirror, the theodolite coordinate system is rotated: its x-axis is rotated about the y-axis by the horizontal angle and then about the z-axis by the vertical angle, after which the rotated x-axes of the two theodolites are parallel to the x-axis and the y-axis of the cube mirror coordinate system respectively. This establishes the cube mirror coordinate system, and the rotation matrix between the two coordinate systems follows as

$^{T}_{M}R = \begin{bmatrix} x_M & y_M & z_M \end{bmatrix}$

where $x_M$, $y_M$, $z_M$ are the unit vectors of the three axes of the cube mirror coordinate system expressed in the theodolite coordinate system, and $^{T}_{M}R$ is the coordinate rotation matrix from the cube mirror coordinate system to the theodolite coordinate system.
(4) And measuring the cross hair angle of the cube mirror corresponding to the self-alignment surface by using the theodolite, combining the angle with a coordinate conversion equation, solving a translation vector between the coordinate system of the theodolite and the coordinate system of the cube mirror, and finally combining the rotation matrix and the translation vector to obtain a coordinate conversion matrix between the coordinate system of the theodolite and the coordinate system of the cube mirror.
Since the cube mirror is a cube with a known side length, let half of the side length be a. Theodolite $T_1$ and theodolite $T_2$ auto-collimate on two faces of the cube mirror whose centre cross points are P and Q respectively; as shown in fig. 3, the homogeneous coordinates of P and Q in the cube mirror coordinate system are $P_M=[a\ 0\ 0\ 1]^T$ and $Q_M=[0\ a\ 0\ 1]^T$. The coordinates of these two points in the theodolite coordinate system satisfy

$P_T = {}^{T}_{M}H\,P_M,\qquad Q_T = {}^{T}_{M}H\,Q_M$

where $^{T}_{M}H$ is the homogeneous transformation combining the rotation matrix $^{T}_{M}R$ and the translation vector to be solved.
Assume the cross-hair angles measured by the two theodolites on the cube mirror are $(\alpha_{AP}, \beta_{AP})$ and $(\alpha_{BQ}, \beta_{BQ})$. Theodolite $T_1$, at the origin, satisfies the sight-line relationship with point P:

$y_{PT} = x_{PT}\tan\alpha_{AP},\qquad z_{PT} = \sqrt{x_{PT}^2+y_{PT}^2}\,\tan\beta_{AP}$

Similarly, theodolite $T_2$, displaced from $T_1$ by the baseline b and the height h, satisfies the corresponding sight-line relationship with point Q.
where h is the vertical distance between theodolite $T_2$ and theodolite $T_1$. If the vertical angle of theodolite $T_1$ during the mutual aiming of $T_1$ and $T_2$ is $\beta_{AB}$, h is solved as

$h = b\tan\beta_{AB}$
Combining the coordinate transformation equations for P and Q with the sight-line equations above yields an equation set that is linear in the translation components, from which the translation vector between the theodolite coordinate system and the cube mirror coordinate system is solved.

Here $x_{PT}$ and $x_{QT}$ are the x-coordinates, in the theodolite coordinate system, of the cross points on the auto-collimation faces of the cube mirror, and $t_1$, $t_2$, $t_3$ are the elements of the translation vector to be solved.
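The intersection and baseline computation of item (2) can be sketched in a few lines of numpy. The coordinate convention (T1 at the origin, baseline along x, horizontal angles measured from the baseline) and the function names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def triangulate_point(b, alpha1, alpha2, beta1):
    """Intersect the horizontal sight lines of two theodolites.

    Assumed convention: theodolite T1 sits at the origin, T2 at (b, 0, 0);
    alpha1/alpha2 are horizontal angles measured from the baseline, beta1 is
    T1's vertical angle.  Angles are in radians.
    """
    x = b * np.tan(alpha2) / (np.tan(alpha1) + np.tan(alpha2))
    y = x * np.tan(alpha1)
    z = np.hypot(x, y) * np.tan(beta1)        # elevation from T1's vertical angle
    return np.array([x, y, z])

def solve_baseline(L, angles_p1, angles_p2):
    """Recover the baseline length from a standard rod of known length L.

    Every triangulated coordinate is proportional to b, so the rod length is
    b times a function of the angles alone: triangulate with b = 1, rescale.
    """
    p1 = triangulate_point(1.0, *angles_p1)   # endpoint coordinates for b = 1
    p2 = triangulate_point(1.0, *angles_p2)
    return L / np.linalg.norm(p1 - p2)
```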
Step 102, shooting targets by using multiple monitoring cameras to obtain internal and external parameters of the cameras, measuring the angles of some target corner points by using a theodolite, calculating a conversion matrix of the camera coordinate system and the theodolite coordinate system, and further calculating the conversion matrix of the camera coordinate system and the world coordinate system, which comprises the following steps:
(1) Each monitoring camera shoots the target at continuously changing positions, the target corner points are extracted from the images, and for the binocular cameras the corner points of the left and right images are matched;
(2) Because the internal parameters of each camera stay unchanged while the external parameters change with the target position, the internal and external parameters of the cameras can be solved from an equation set established over several target pictures; the relationship among the target world coordinate system, the camera coordinate system, and the pixel coordinate system is shown in fig. 4;
the equation relating each corner point on the target to the internal parameters is established by

$z_c\begin{bmatrix}u\\v\\1\end{bmatrix} = \begin{bmatrix}f_x & s & u_0\\ 0 & f_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}R & T\end{bmatrix}\begin{bmatrix}X_W\\Y_W\\Z_W\\1\end{bmatrix}$

where dx and dy represent the physical dimensions of a pixel on the CCD in the x and y directions, f is the camera focal length, $f_x = f/dx$ and $f_y = f/dy$, s is the camera skew coefficient (typically 0), and $u_0$, $v_0$ are the pixel coordinates of the camera principal point; $f_x$, $f_y$, $u_0$, $v_0$ are the internal camera parameters to be calibrated.
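A minimal sketch of this pinhole projection in Python/numpy follows; the numeric intrinsic values are illustrative placeholders, not values from the patent:

```python
import numpy as np

def project(K, R, t, P_w):
    """Project world points (N,3) to pixels with the pinhole model
    z_c*[u v 1]^T = K (R P_w + t), K = [[fx, s, u0], [0, fy, v0], [0, 0, 1]]."""
    P_c = P_w @ R.T + t             # world -> camera coordinates
    uvw = P_c @ K.T                 # camera -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:]  # divide out the depth z_c

# Illustrative intrinsics: fx = f/dx, fy = f/dy, principal point (u0, v0).
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 512.0],
              [   0.0,    0.0,   1.0]])
uv = project(K, np.eye(3), np.zeros(3), np.array([[0.1, -0.05, 2.0]]))
```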
(3) For a binocular camera, after the internal and external parameters of each single camera are obtained, the coordinate transformation matrix between the left and right cameras must also be calculated from the matched corner positions;
because the left and right camera external parameter matrices corresponding to each target image are obtained during single-camera calibration, any point $P_W$ on the target has coordinates in the left and right camera coordinate systems given by

$P_L = {}^{L}_{W}R\,P_W + {}^{L}_{W}T,\qquad P_R = {}^{R}_{W}R\,P_W + {}^{R}_{W}T$

where $^{L}_{W}R$, $^{R}_{W}R$ and $^{L}_{W}T$, $^{R}_{W}T$ are the coordinate rotation matrices and coordinate translation vectors from the target world coordinate system to the left and right camera coordinate systems. Each image supplies these four matrices; eliminating $P_W$ and combining the equations of all images yields the coordinate transformation matrix between the left and right camera coordinate systems:

$R = {}^{L}_{W}R\,({}^{R}_{W}R)^{T},\qquad T = {}^{L}_{W}T - R\,{}^{R}_{W}T$
Because of errors in monocular calibration, the R, T matrices solved from the left and right camera external parameter matrices of each image are not necessarily identical; the error is therefore computed by back projection, and the optimal R, T minimizing this error is found by continuous iteration.
(4) In actual use the camera exhibits distortion as shown in fig. 5. The distortion coefficients are obtained while calibrating the internal and external parameters; the image must then be undistorted: the pixel coordinates are converted to imaging-plane coordinates with the internal parameters, the corrected coordinates are substituted into the distortion correction equation, new pixel coordinates are computed from the result, and non-integer coordinates are interpolated to obtain the distortion-corrected image;
distortion correction is achieved by the Brown model:

$\bar x = x_u - x_c,\quad \bar y = y_u - y_c,\quad r^2 = \bar x^2 + \bar y^2$

$x_d = x_c + \bar x(1+k_1r^2+k_2r^4) + 2p_1\bar x\bar y + p_2(r^2+2\bar x^2)$
$y_d = y_c + \bar y(1+k_1r^2+k_2r^4) + p_1(r^2+2\bar y^2) + 2p_2\bar x\bar y$

where $(x_u, y_u)$ are the image-plane coordinates of the undistorted image point of an ideal camera, $(x_d, y_d)$ are the coordinates of the actual image point, and $(x_c, y_c)$ are the coordinates of the camera principal point on the image plane; r is the distortion radius, $k_1$, $k_2$ are the radial distortion coefficients, and $p_1$, $p_2$ are the tangential distortion coefficients.
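The forward Brown model above, and the fixed-point inversion used when building the undistorted image, can be sketched as follows; this is a sketch under the stated model with coordinates taken relative to the principal point, and the function names are illustrative:

```python
import numpy as np

def distort(xy_u, k1, k2, p1, p2):
    """Map ideal normalized image coordinates (relative to the principal
    point) to distorted ones with radial (k1, k2) and tangential (p1, p2) terms."""
    x, y = xy_u[:, 0], xy_u[:, 1]
    r2 = x**2 + y**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    x_d = x * radial + 2*p1*x*y + p2*(r2 + 2*x**2)
    y_d = y * radial + p1*(r2 + 2*y**2) + 2*p2*x*y
    return np.stack([x_d, y_d], axis=1)

def undistort(xy_d, k1, k2, p1, p2, iters=10):
    """Invert the model by fixed-point iteration, as needed before the
    integer-coordinate interpolation step that builds the corrected image."""
    xy = xy_d.copy()
    for _ in range(iters):
        xy = xy + (xy_d - distort(xy, k1, k2, p1, p2))  # correct the residual
    return xy
```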
(5) For the binocular camera, the left and right camera coordinate systems are required to be rotated respectively until all coordinate axes of the two coordinate systems are parallel to each other, and the coordinate planes are coplanar, specifically as follows:
after the three-dimensional calibration of the binocular camera is completed, a coordinate rotation matrix R and a coordinate translation matrix T between the left camera and the right camera can be obtained, as shown in fig. 6 (a); as shown in fig. 6 (b), the left camera coordinate system is rotated by half of R in the positive direction of the rotation matrix R, which is denoted as R h1 The method comprises the steps of carrying out a first treatment on the surface of the As shown in FIG. 6 (c), the right camera coordinate system is rotated by half R in the opposite direction of the rotation matrix R, which is denoted as R h2 The method comprises the steps of carrying out a first treatment on the surface of the After half rotation of the left and right cameras, the left and right camera coordinate systems are parallel but not coplanar as shown in fig. 6 (d); as shown in FIG. 6 (e), the coordinate relationship between the left and right camera coordinate systems of the spatial point P is P L =P R +R h2 T is a T; rotating the parallel but non-coplanar left and right camera coordinate systems by R rec To and correct the coordinate system O rec x rec y rec z rec Parallel, the left and right camera coordinate systems are made parallel and coplanar, x is as shown in FIG. 6 (f) rec The axis is parallel to the line direction of origin of the coordinate system of the left and right cameras, and x is the same as the line direction of origin of the coordinate system of the left and right cameras rec O rec z rec Plane and Z L O L O R Z R The planes are parallel.
Let t=r for the correction coordinate system h2 T, the projection of the coordinate axes of the correction coordinate system in the camera coordinate system can be established as follows:y rec =[0 0 1] T ×x rec ,z rec =x rec ×y rec thus, the coordinate rotation matrix between the coordinate system and the camera coordinate system is corrected to be R rec =[x rec y rec z rec ] T . The final left and right camera rotation matrix is:
Substituting all image-point coordinates of the left and right cameras into the above formulas yields the stereo-corrected images of the binocular camera; the result is shown in fig. 7, where (a) is the target image before stereo correction and (b) after.
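A sketch of the rectification rotations described above, assuming R maps right-camera coordinates to left-camera coordinates as in the relation $P_L = RP_R + T$ (that direction is an assumption; conventions vary). scipy's Rotation is used only to halve the rotation via its rotation vector:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def stereo_rectify(R, T):
    """Split the inter-camera rotation in half and build the rectifying
    rotation R_rec whose x-axis follows the half-rotated baseline."""
    rotvec = Rotation.from_matrix(R).as_rotvec()
    R_h1 = Rotation.from_rotvec( rotvec / 2).as_matrix()  # half rotation, left
    R_h2 = Rotation.from_rotvec(-rotvec / 2).as_matrix()  # half rotation, right
    t = R_h2 @ T                                          # baseline after half-rotation
    x = t / np.linalg.norm(t)
    y = np.cross([0.0, 0.0, 1.0], x)
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    R_rec = np.stack([x, y, z])         # rows are x_rec, y_rec, z_rec
    return R_rec @ R_h1, R_rec @ R_h2   # final left / right rotations
```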
(6) Measuring the angles of some target corner points with the theodolite, computing their coordinates in the theodolite coordinate system, combining them with the corner coordinates in the camera coordinate system obtained during camera calibration, and solving the coordinate transformation matrix between the camera coordinate system and the theodolite coordinate system by the SVD (singular value decomposition) method;
for the m matched corner points the following matrix is constructed:

$H = \sum_{i=1}^{m}\left(P_i^{C}-\bar P^{C}\right)\left(P_i^{T}-\bar P^{T}\right)^{T}$

where $\bar P^{C}$ and $\bar P^{T}$ are the mean coordinates of the m space points of known coordinates in the camera coordinate system and the theodolite coordinate system. SVD decomposition of H gives two orthonormal matrices and one diagonal matrix:

$U\,\Delta\,V^{T} = \mathrm{SVD}(H)$

where the columns of U are the orthonormal eigenvectors of $HH^{T}$, the columns of V are the orthonormal eigenvectors of $H^{T}H$, and $\Delta$ is the diagonal matrix of the singular values of H.

The sign of the determinant $|VU^{T}|$ is computed; if it is greater than 0, the coordinate rotation matrix is $R = VU^{T}$; if it is less than 0, the sign of any one column of V is flipped and substituted back to solve the rotation matrix. The coordinate translation vector is then solved as $T = \bar P^{T} - R\,\bar P^{C}$.
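The SVD solution just described is the standard Kabsch alignment; a compact numpy sketch (function name illustrative):

```python
import numpy as np

def rigid_transform_svd(P_cam, P_theo):
    """Least-squares rigid transform P_theo ~ R @ P_cam + t via SVD.
    P_cam, P_theo: (m, 3) matched point coordinates in the two frames."""
    c_cam, c_theo = P_cam.mean(axis=0), P_theo.mean(axis=0)
    H = (P_cam - c_cam).T @ (P_theo - c_theo)   # 3x3 cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    V = Vt.T
    if np.linalg.det(V @ U.T) < 0:              # reflection guard from the text
        V[:, -1] *= -1
    R = V @ U.T
    t = c_theo - R @ c_cam
    return R, t
```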
(7) Multiplying the coordinate transformation matrix between the camera coordinate system and the theodolite coordinate system by the coordinate transformation matrix between the theodolite coordinate system and the cube mirror coordinate system gives the coordinate transformation matrix between the camera coordinate system and the cube mirror coordinate system (the world coordinate system), i.e. $^{M}_{C}H = {}^{M}_{T}H\,{}^{T}_{C}H$. This completes the global calibration of the monitoring cameras.
Step 103, shooting an object by using a globally calibrated camera, reconstructing a three-dimensional point cloud in a binocular manner by using a parallax principle, and reconstructing the three-dimensional point cloud in a monocular manner by using an MVS method, wherein the method comprises the following steps of:
(1) Shooting an object simultaneously by using all the globally calibrated monitoring cameras;
(2) For a binocular camera, solving a parallax map of left and right images by utilizing a parallax principle, and reconstructing a three-dimensional point cloud according to the parallax map;
(3) For a monocular camera, extracting image key points, performing key point matching, then reconstructing sparse point cloud by using the key points, and reconstructing dense point cloud by using an MVS method.
Step 104, cutting the redundant part of the reconstructed point cloud, filtering the point cloud, and performing downsampling, which is specifically as follows:
(1) Judging the main body part, setting a three-coordinate axis threshold or a spherical region, and cutting the redundant part;
(2) Removing outliers and noise points of the point cloud with filtering methods such as bilateral filtering, Gaussian filtering, conditional filtering, pass-through filtering, and random sample consensus (RANSAC) filtering;
(3) To increase the point cloud registration speed, the point cloud is downsampled: it is divided into voxel grids of equal width, in each voxel only the point closest to the voxel centre is kept and the rest are removed, so the number of points drops greatly while the overall shape is unchanged, as in the sketch below.
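A possible numpy sketch of the voxel-grid downsampling of item (3), keeping the point nearest each voxel centre (the voxel size is a user-chosen parameter):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep, in every occupied voxel, the point nearest the voxel centre,
    so the cloud shrinks while its overall shape is preserved."""
    ijk = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    centres = (ijk + 0.5) * voxel_size
    dist = np.linalg.norm(points - centres, axis=1)
    # Group points by voxel id, order by distance inside each group,
    # then keep the first (closest) point of every group.
    _, inverse = np.unique(ijk, axis=0, return_inverse=True)
    order = np.lexsort((dist, inverse))
    keep_first = np.ones(len(points), dtype=bool)
    keep_first[1:] = inverse[order][1:] != inverse[order][:-1]
    return points[order][keep_first]
```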
Step 105, performing coordinate transformation on the preprocessed point cloud by using a transformation matrix of a camera globally calibrated by the monitoring camera and a world coordinate system, so as to realize rough registration of the point cloud, which is specifically as follows:
(1) Solving the coordinates of the monocular camera reconstruction point cloud under a monocular camera coordinate system according to the monocular camera shooting image;
for images taken at two different positions, the coordinate transformation matrix between the two camera positions is solved first, using the epipolar geometry. As shown in fig. 8, C and C' are the camera optical centres of the two positions, and the image points of any spatial point M on the two camera image planes are m and m' respectively. Every point on the ray CM projects to m on image plane I, and its image point on plane I' lies on the epipolar line $l'_m$; likewise, every point on the ray C'M' projects to m' on plane I', and its image point on plane I lies on the epipolar line $l_m$. All epipolar lines on the two image planes pass through the intersections e and e' of the optical-centre line CC' with the corresponding image planes. Since m and m' are image points on the two planes, the epipolar geometry reduces the matching of corresponding feature points in the two images from a two-dimensional to a one-dimensional search, cutting the computation and time consumed by matching.
Let the coordinates of the spatial point M in the two camera coordinate systems be $X=[x\ y\ z]^T$ and $X'=[x'\ y'\ z']^T$, its pixel coordinates in the two images $p=[u\ v\ 1]^T$ and $p'=[u'\ v'\ 1]^T$, and its coordinates on the two normalized image planes $\tilde x = X/z$ and $\tilde x' = X'/z'$. The coordinate transformation matrix between the two camera coordinate systems satisfies $X = RX' + T$, hence

$z\tilde x = z'R\tilde x' + T$

Since z and z' are unknown parameters, to eliminate their effects both sides are cross-multiplied on the left by T:

$z\,(T\times\tilde x) = z'\,(T\times R\tilde x')$

Both sides are then multiplied on the left by $\tilde x^{T}$:

$z\,\tilde x^{T}(T\times\tilde x) = z'\,\tilde x^{T}(T\times R\tilde x')$

Because the vector $T\times\tilde x$ is perpendicular to both T and $\tilde x$, its dot product with $\tilde x$ is 0, giving

$\tilde x^{T}(T\times R)\,\tilde x' = 0$

where $E = T\times R$ is the essential matrix. Considering that the image plane coordinate system and the pixel coordinate system are related by the internal parameter matrix K, in pixel coordinates this becomes

$p^{T}K^{-T}(T\times R)K^{-1}p' = p^{T}Fp' = 0$

where $F = K^{-T}(T\times R)K^{-1}$ is the fundamental matrix. Since the internal parameter matrix K is known, only the six parameters of the coordinate transformation matrix need to be solved: matched feature points are found in the two images with a feature point matching algorithm, each pair contributes one equation of the above form, and the coordinate transformation matrix parameters are obtained by the least-squares method.
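The least-squares step can be illustrated with the classical eight-point estimate of F. This sketch writes the constraint in the form $q^TFp=0$ (signs and direction conventions vary) and omits the usual coordinate normalization for brevity:

```python
import numpy as np

def fundamental_eight_point(p, q):
    """Linear estimate of F with q^T F p = 0 from >= 8 pixel correspondences.
    p, q: (N, 2) matched points in the two images."""
    u, v = p[:, 0], p[:, 1]
    u2, v2 = q[:, 0], q[:, 1]
    # Each correspondence gives one row of the homogeneous linear system A f = 0.
    A = np.stack([u2*u, u2*v, u2, v2*u, v2*v, v2,
                  u, v, np.ones_like(u)], axis=1)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # least-squares null vector of A
    U, S, Vt = np.linalg.svd(F)     # enforce rank 2 (det F = 0)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```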
After the coordinate transformation matrix is obtained, the depth of the spatial point is estimated by triangulation, which gives its coordinates in the monocular camera coordinate system. Cross-multiplying both sides of $z\tilde x = z'R\tilde x' + T$ on the left by $\tilde x$ gives

$0 = z'\,\tilde x\times(R\tilde x') + \tilde x\times T$

Substituting pixel coordinates into the above equation yields

$z'\,K^{-1}p\times RK^{-1}p' + K^{-1}p\times T = 0$

Since z' is the only unknown in this equation, z' can be solved from it, and z can be solved in the same way. The coordinates in the camera coordinate system are then computed from the relation $zp = KX$ between pixel coordinates and camera coordinates. Thus the coordinates of a spatial point in the monocular camera coordinate system are calculated using epipolar geometry and triangulation.
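A sketch of this depth solve, treating the vector equation in z' as a scalar least-squares problem (function name illustrative):

```python
import numpy as np

def triangulate(K, R, T, p, p_prime):
    """Depth recovery following z' * (x cross R x') + (x cross T) = 0,
    then X = R X' + T, with x = K^-1 p the normalized ray directions."""
    x  = np.linalg.solve(K, np.array([p[0], p[1], 1.0]))            # K^-1 p
    xp = np.linalg.solve(K, np.array([p_prime[0], p_prime[1], 1.0]))
    a = np.cross(x, R @ xp)                  # coefficient vector of z'
    b = np.cross(x, T)
    z_prime = -(a @ b) / (a @ a)             # least-squares scalar solve
    X_prime = z_prime * xp                   # point in the second camera frame
    return R @ X_prime + T                   # point in the first camera frame
```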
(2) Solving the coordinates of the binocular camera reconstruction point cloud under a binocular camera coordinate system according to the photographed image of the binocular camera;
because the relative positions of the two cameras of the binocular camera are fixed, the step of solving the coordinate transformation matrix between the two camera positions is omitted compared with the monocular case. The left and right views each satisfy the internal parameter matrix equation, and the coordinate transformation matrix between the left and right cameras is already known from the matched corner points, so once the feature points of the left and right images are matched the following equation set is obtained:

$z_l p_l = K_l X_l,\qquad z_r p_r = K_r X_r,\qquad X_r = R X_l + T$

In this system the unknowns are $x_l$, $y_l$, $z_l$, $x_r$, $y_r$, $z_r$, six in total, while there are seven independent equations, so the coordinates of the spatial point in the binocular camera coordinate system are solved directly.
(3) Transforming the coordinates of the point clouds solved in (1) and (2) from their respective camera coordinate systems with the globally calibrated camera-to-world coordinate transformation matrices, unifying them in the world coordinate system and completing the coarse registration of the point clouds.
The precision of this coarse registration depends on the global calibration precision of the cameras; since the calibration uses the theodolite and cube mirror, its precision is very high, so the corresponding coarse registration precision is also high, providing good initial conditions for the fine registration.
Step 106, performing fine registration on the roughly registered point cloud by using an AAICP algorithm with high speed and high precision, wherein the method comprises the following steps of:
(1) Randomly select a point set $p_i\in P$ from the source point cloud P;

(2) Find the corresponding point set $q_i\in Q$ in the target point cloud Q such that $\lVert q_i - p_i\rVert$ is minimal;

(3) Compute the coordinate transformation matrix between the two point sets with the SVD decomposition method;

(4) Transform $p_i$ with the coordinate transformation matrix computed in (3) to obtain the new point set $p'_i = \{p'_i = R\,p_i + T,\ p_i\in P\}$;

(5) Compute the average distance between $p'_i$ and $q_i$: $d = \frac{1}{n}\sum_{i=1}^{n}\lVert p'_i - q_i\rVert$;

(6) Set a distance threshold; if d is smaller than the given threshold, or the number of iterations exceeds the maximum, stop iterating, the transformation computed at that moment being the optimal coordinate transformation matrix; otherwise return to (2) and continue iterating;
(7) The convergence process of ICP is accelerated according to Anderson acceleration thought, and the speed of point cloud registration is greatly improved under the condition of no loss of accuracy.
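For reference, a plain point-to-point ICP implementing steps (1)-(6) is sketched below; the Anderson acceleration of step (7) is omitted, so this is the unaccelerated baseline, not the AAICP variant itself:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iter=50, tol=1e-6):
    """Point-to-point ICP: alternate closest-point correspondences with a
    closed-form SVD (Kabsch) transform update until the error stabilizes."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)                     # nearest-neighbour structure
    prev_err = np.inf
    for _ in range(max_iter):
        moved = source @ R.T + t               # current transform of the source
        dist, idx = tree.query(moved)          # correspondences by closest point
        q = target[idx]
        c_p, c_q = moved.mean(axis=0), q.mean(axis=0)
        H = (moved - c_p).T @ (q - c_q)        # cross-covariance of matched pairs
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t_step = c_q - R_step @ c_p
        R, t = R_step @ R, R_step @ t + t_step # compose with the running estimate
        err = dist.mean()                      # average distance of step (5)
        if abs(prev_err - err) < tol:          # stop criterion of step (6)
            break
        prev_err = err
    return R, t
```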
Step 107, merging the registered point clouds into a large point cloud, and judging repeated points to delete to obtain a defect-free object three-dimensional point cloud, wherein the method comprises the following steps of:
(1) Directly merging the plurality of registered point clouds into one point cloud;
(2) Constructing a KD tree for the added point cloud, wherein the schematic diagram of the KD tree constructed by the three-dimensional points is shown in figure 9;
(3) Traverse all points in the point cloud one by one. For a given point, start from the root node and descend the KD-tree, comparing the point's value in the splitting dimension with the stored median of that dimension: if it is smaller, visit the left subtree, otherwise the right subtree, until a leaf node is reached; compute the distance between the point and the data stored in the leaf, and mark the leaf's point if the distance is below the radius threshold;

(4) Backtrack: ascend from the leaf node and check whether another branch can contain a node within the threshold distance; if so, enter it and descend again to the leaves to mark the points closer than the threshold. Points already marked are skipped directly in the subsequent traversal;
(5) Deleting marked points in the large point cloud, only retaining unmarked points, and completing point cloud fusion.
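A compact sketch of the fusion and duplicate-removal steps using scipy's KD-tree; the radius threshold is a user-chosen parameter, and the function name is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_clouds(clouds, radius):
    """Concatenate the registered clouds and delete duplicate points: any
    pair closer than `radius` marks one of its points for removal."""
    merged = np.concatenate(clouds, axis=0)   # step (1): add the clouds together
    tree = cKDTree(merged)                    # step (2): KD-tree on the result
    marked = np.zeros(len(merged), dtype=bool)
    for i, j in tree.query_pairs(radius):     # steps (3)-(4): radius search
        if not marked[i]:
            marked[j] = True                  # keep one point of each close pair
    return merged[~marked]                    # step (5): drop the marked points
```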
Example 2:
a defect-free three-dimensional point cloud reconstruction device based on multi-surveillance camera and point cloud fusion, comprising:
the theodolite three-dimensional coordinate measuring module is configured to measure any point angle of a space to calculate three-dimensional coordinates of the space;
the camera global calibration module is configured to obtain the internal and external parameters of each monitoring camera and a conversion matrix of a camera coordinate system and a world coordinate system;
the multi-monitoring camera three-dimensional point cloud fusion module is configured to perform point cloud preprocessing, realize point cloud coarse registration according to the result of global calibration of the cameras, perform point cloud fine registration by using a traditional algorithm, and finally fuse a plurality of point clouds into a complete and defect-free object three-dimensional point cloud.
The effect of the method provided by the application is shown in fig. 10, where (a) and (b) are the source and target point clouds, and (c) and (d) are the top and side views of the AAICP fine registration result. Compared with traditional methods, the proposed method offers higher three-dimensional point cloud reconstruction precision and speed and better robustness. Calibrating the multiple monitoring cameras with the theodolite and cube mirror yields more accurate internal and external camera parameters; using the calibration result for coarse registration is faster and more robust than traditional coarse registration algorithms; and the AAICP algorithm gives more accurate and faster fine registration, improving the real-time performance of the system. This plays an important role in industry for real-time defect-free reconstruction of object point clouds, study of object surface morphology, topographic analysis, three-dimensional visualization, and the like.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. A defect-free three-dimensional point cloud reconstruction method based on multi-monitoring camera and point cloud fusion is characterized by comprising the following steps:
establishing a cube coordinate system by using a theodolite;
the target is measured by using a plurality of monitoring cameras and a theodolite at the same time, so that a conversion matrix of a coordinate system of the plurality of monitoring cameras and a coordinate system of a cube mirror is indirectly obtained, and the global calibration of the plurality of monitoring cameras is realized;
shooting an object picture by using the calibrated monitoring camera, and reconstructing a three-dimensional point cloud;
performing point cloud preprocessing on a plurality of point clouds;
the global calibration result of the camera is utilized to realize the rough registration of the point cloud;
the traditional point cloud registration algorithm is used for realizing point cloud fine registration; and
and fusing the registered point clouds into a defect-free three-dimensional point cloud.
2. A non-defective three-dimensional point cloud reconstruction method based on multi-monitor camera and point cloud fusion as defined in claim 1,
and measuring angles at two ends of a standard rod with known length by using the theodolite, performing auto-collimation on the theodolite and the cube mirror, recording the angles at the moment, measuring angles of cross wires on the corresponding surface of the cube mirror, and calculating a transformation matrix of a coordinate system of the theodolite and a coordinate system of the cube mirror.
3. A non-defective three-dimensional point cloud reconstruction method based on multi-monitor camera and point cloud fusion as defined in claim 2,
the method comprises the steps of shooting a target by using a plurality of monitoring cameras to obtain internal and external parameters of the cameras, measuring the angle of a corner point of a target part by using a theodolite, calculating a conversion matrix of a camera coordinate system and the theodolite coordinate system, and further calculating a conversion matrix of the camera coordinate system and a cube coordinate system, wherein the cube coordinate system is used as a world coordinate system.
4. A non-defective three-dimensional point cloud reconstruction method based on multi-monitor camera and point cloud fusion as defined in claim 1,
and shooting an object by using a globally calibrated camera, reconstructing a three-dimensional point cloud in a binocular way by using a parallax principle, and reconstructing the three-dimensional point cloud in a monocular way by using an MVS method.
5. The method for reconstructing a defect-free three-dimensional point cloud based on a multi-monitor camera and point cloud fusion of claim 4,
and cutting the redundant part of the reconstructed point cloud, filtering the point cloud, and downsampling.
6. The method for reconstructing a defect-free three-dimensional point cloud based on a multi-monitor camera and point cloud fusion of claim 5,
and carrying out coordinate transformation on the preprocessed point cloud by utilizing a conversion matrix of a camera globally calibrated by the monitoring camera and a world coordinate system, so as to realize coarse registration of the point cloud.
7. The method for reconstructing a defect-free three-dimensional point cloud based on a multi-monitor camera and point cloud fusion of claim 6,
and the AAICP algorithm with high speed and high precision is used for carrying out fine registration on the point cloud after coarse registration.
8. The method for reconstructing a defect-free three-dimensional point cloud based on multi-monitor camera and point cloud fusion of claim 7,
and merging the registered point clouds into a large point cloud, judging repeated points, and deleting to obtain the defect-free object three-dimensional point cloud.
9. A defect-free three-dimensional point cloud reconstruction device based on the defect-free three-dimensional point cloud reconstruction method based on multi-monitoring camera and point cloud fusion according to any one of claims 1 to 8,
the defect-free three-dimensional point cloud reconstruction device comprises:
the theodolite three-dimensional coordinate measuring module is configured to measure any point angle of a space to calculate three-dimensional coordinates of the space;
the camera global calibration module is configured to obtain the internal and external parameters of each monitoring camera and a conversion matrix of a camera coordinate system and a world coordinate system;
the multi-monitoring camera three-dimensional point cloud fusion module is configured to perform point cloud preprocessing, realize point cloud coarse registration according to the result of global calibration of the cameras, perform point cloud fine registration by using a traditional algorithm, and finally fuse a plurality of point clouds into a complete and defect-free object three-dimensional point cloud.
CN202310636189.2A 2023-05-31 2023-05-31 Defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion Pending CN116681827A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310636189.2A CN116681827A (en) 2023-05-31 2023-05-31 Defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310636189.2A CN116681827A (en) 2023-05-31 2023-05-31 Defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion

Publications (1)

Publication Number Publication Date
CN116681827A (publication date 2023-09-01)

Family

ID=87780406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310636189.2A Pending CN116681827A (en) 2023-05-31 2023-05-31 Defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion

Country Status (1)

Country Link
CN (1) CN116681827A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116862999A (en) * 2023-09-04 2023-10-10 华东交通大学 Calibration method, system, equipment and medium for three-dimensional measurement of double cameras
CN116863086A (en) * 2023-09-04 2023-10-10 武汉国遥新天地信息技术有限公司 Rigid body stable reconstruction method for optical motion capture system
CN116863086B (en) * 2023-09-04 2023-11-24 武汉国遥新天地信息技术有限公司 Rigid body stable reconstruction method for optical motion capture system
CN116862999B (en) * 2023-09-04 2023-12-08 华东交通大学 Calibration method, system, equipment and medium for three-dimensional measurement of double cameras

Similar Documents

Publication Publication Date Title
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
JP6722323B2 (en) System and method for imaging device modeling and calibration
CN107945220B (en) Binocular vision-based reconstruction method
CN107977997B (en) Camera self-calibration method combined with laser radar three-dimensional point cloud data
US8121352B2 (en) Fast three dimensional recovery method and apparatus
CN116681827A (en) Defect-free three-dimensional point cloud reconstruction method and device based on multi-monitoring camera and point cloud fusion
WO2019100933A1 (en) Method, device and system for three-dimensional measurement
US20180051982A1 (en) Object-point three-dimensional measuring system using multi-camera array, and measuring method
CN114998499B (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
CN108416812B (en) Calibration method of single-camera mirror image binocular vision system
CN109727290B (en) Zoom camera dynamic calibration method based on monocular vision triangulation distance measurement method
CN110874854B (en) Camera binocular photogrammetry method based on small baseline condition
CN113205592B (en) Light field three-dimensional reconstruction method and system based on phase similarity
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN103278138A (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN112381847B (en) Pipeline end space pose measurement method and system
CN113205603A (en) Three-dimensional point cloud splicing reconstruction method based on rotating platform
CN104794718A (en) Single-image CT (computed tomography) machine room camera calibration method
CN111429571A (en) Rapid stereo matching method based on spatio-temporal image information joint correlation
Barazzetti et al. Fisheye lenses for 3D modeling: Evaluations and considerations
CN111649694B (en) Implicit phase-parallax mapping binocular measurement missing point cloud interpolation method
Liu et al. Dense stereo matching strategy for oblique images that considers the plane directions in urban areas
Hongsheng et al. Three-dimensional reconstruction of complex spatial surface based on line structured light
CN113029108B (en) Automatic relative orientation method and system based on sequence sea surface images
CN111091595B (en) Strabismus three-dimensional mapping method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination