CN116681827A - A defect-free 3D point cloud reconstruction method and device based on multi-surveillance cameras and point cloud fusion - Google Patents
- Publication number
- CN116681827A (application number CN202310636189.2A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- camera
- coordinate system
- defect-free
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C1/00—Measuring angles
- G01C1/02—Theodolites
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computer Graphics (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Geometry (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
Technical Field
The present application relates to the field of camera calibration and point cloud processing, and in particular to a defect-free three-dimensional (3D) point cloud reconstruction method and device based on multiple surveillance cameras and point cloud fusion.
Background Art
Object point cloud reconstruction is important for studying object surface morphology, terrain analysis, 3D visualization, and related fields. Traditional object point cloud reconstruction is performed with a single camera or a single binocular pair; because the viewing angle is limited, the reconstructed point cloud contains holes and other defects. If the viewpoint is instead changed between shots, a point cloud registration algorithm is required, but most existing coarse registration algorithms perform poorly on objects with indistinct features, and fine registration usually depends on the coarse result. It is therefore difficult to stably obtain a high-precision, defect-free 3D point cloud of an object.
Unlike reconstruction by moving a single camera to take supplementary shots, 3D reconstruction with multiple surveillance cameras at multiple viewpoints achieves higher accuracy. In industry, global camera calibration using theodolites and a cube mirror offers high precision and is therefore often used to calibrate cameras for aerospace projects. Because global calibration need not be repeated at every use (one calibration after manufacturing, with periodic recalibration to correct drift, suffices), the cost of a defect-free 3D reconstruction system built on multiple surveillance cameras comes mainly from the cameras themselves; this has made multi-camera systems widespread in many industrial fields.
Summary of the Invention
The present application provides a defect-free 3D point cloud reconstruction method and device based on multiple surveillance cameras and point cloud fusion. Its purpose is to solve the low accuracy and defects of single-camera 3D reconstruction. The method achieves high global camera calibration accuracy, and both coarse and fine point cloud registration are fast and robust, which greatly improves 3D reconstruction accuracy and yields a defect-free object point cloud.
According to some embodiments, the present application adopts the following technical solution:
A defect-free 3D point cloud reconstruction method based on multiple surveillance cameras and point cloud fusion, comprising:
establishing a cube-mirror coordinate system using theodolites;
measuring a target simultaneously with the surveillance cameras and the theodolites to indirectly obtain the transformation matrices between the camera coordinate systems and the cube-mirror coordinate system, thereby achieving global calibration of the surveillance cameras;
photographing the object with the calibrated surveillance cameras and reconstructing 3D point clouds;
preprocessing the multiple point clouds;
coarsely registering the point clouds using the global calibration results;
finely registering the point clouds with a conventional point cloud registration algorithm; and
fusing the registered point clouds into a single defect-free 3D point cloud.
Further, a theodolite measures the angles to the two ends of a standard rod of known length; the theodolites are autocollimated with the cube mirror and the angles recorded; the angles to the crosshairs on the corresponding cube-mirror faces are then measured; and the transformation matrix between the theodolite coordinate system and the cube-mirror coordinate system is computed.
Further, the surveillance cameras photograph the target to obtain their intrinsic and extrinsic parameters; a theodolite measures the angles to some of the target corner points; the transformation matrix between each camera coordinate system and the theodolite coordinate system is computed, and from it the transformation matrix between each camera coordinate system and the cube-mirror coordinate system, the cube-mirror coordinate system serving as the world coordinate system.
Further, the globally calibrated cameras photograph the object; 3D point clouds are reconstructed binocularly using the parallax principle and monocularly using the multi-view stereo (MVS) method.
Further, the redundant parts of the reconstructed point clouds are cropped, and the point clouds are filtered and downsampled.
Further, the preprocessed point clouds are transformed with the camera-to-world transformation matrices obtained from the global calibration, achieving coarse point cloud registration.
Further, the fast, high-precision AAICP (Anderson-accelerated ICP) algorithm performs fine registration on the coarsely registered point clouds.
Further, the registered point clouds are merged into one large point cloud, and duplicate points are identified and deleted, yielding a defect-free 3D point cloud of the object.
A defect-free 3D point cloud reconstruction device based on multiple surveillance cameras and point cloud fusion, comprising:
a theodolite 3D coordinate measurement module configured to measure the angles to any point in space and solve its 3D coordinates;
a camera global calibration module configured to obtain the intrinsic and extrinsic parameters of each surveillance camera and the transformation matrix between each camera coordinate system and the world coordinate system; and
a multi-camera 3D point cloud fusion module configured to preprocess the point clouds, coarsely register them according to the global calibration results, finely register them with a conventional algorithm, and finally fuse the multiple point clouds into one complete, defect-free 3D object point cloud.
Effects of the Invention
The present application photographs from multiple viewpoints with multiple surveillance cameras and calibrates them globally with theodolites and a cube mirror, which gives high calibration accuracy; the 3D point clouds reconstructed by the calibrated cameras are correspondingly accurate. Coarse registration driven by the global calibration results is more accurate, faster, and more robust than traditional point cloud registration algorithms. For fine registration the application uses the Anderson acceleration ICP algorithm, which offers high registration accuracy, high speed, and good robustness.
In the present application the binocular cameras are fixed in place, and the monocular camera is moved by a high-precision robotic arm. The object can therefore be moved freely: the cameras shoot simultaneously, the images are processed in parallel, and fast registration and fusion follow, giving good real-time performance. Users can move or replace the object at will and observe its defect-free 3D point cloud in real time on the user interface.
Brief Description of the Drawings
The accompanying drawings form a part of this application and provide a further understanding of it; the illustrative embodiments and their descriptions explain the application and do not unduly limit it.
Fig. 1 is a flowchart of the defect-free 3D point cloud reconstruction method based on multiple surveillance cameras and point cloud fusion provided by an embodiment of the present application;
Fig. 2 illustrates the principle of using two theodolites to measure the angles to an arbitrary point in space and solve its 3D coordinates;
Fig. 3 illustrates the principle of autocollimating two theodolites with two mutually perpendicular faces of the cube mirror and measuring the crosshair angles on the corresponding faces, in order to solve the transformation matrix between the theodolite coordinate system and the cube-mirror coordinate system;
Fig. 4 shows the relationship between the target world coordinate system, the camera coordinate system, and the pixel coordinate system during monocular camera parameter calibration;
Fig. 5 is a schematic diagram of the distortion present in practical camera use;
Fig. 6 is a flowchart of binocular camera stereo rectification;
Fig. 7 compares a target image before (a) and after (b) stereo rectification by the binocular camera;
Fig. 8 shows the epipolar geometry used when solving monocular point cloud coordinates;
Fig. 9 illustrates the partitioning principle of the KD-tree used for duplicate-point detection during point cloud fusion;
Fig. 10 shows the result of defect-free 3D point cloud reconstruction of objects using the method provided by this application, taking stones and soil as examples.
Detailed Description of Embodiments
The concept, specific structure, and technical effects of the present application are described clearly and completely below with reference to the embodiments and drawings, so that its purpose, features, and effects can be fully understood. Obviously, the described embodiments are only some embodiments of this application, not all of them.
Current camera-based methods for reconstructing an object's 3D point cloud suffer from holes and other defects caused by limited viewing angles, or, when the object's features are indistinct, from poor coarse registration of the point clouds reconstructed from multiple viewpoints; in either case a high-precision, defect-free 3D point cloud is hard to obtain.
To address these problems, an embodiment of the present application proceeds as follows. A theodolite measures the angles to the two ends of a standard rod of known length; the theodolites are autocollimated with the cube mirror and the angles recorded; the crosshair angles on the corresponding cube-mirror faces are measured; and the transformation matrix between the theodolite coordinate system and the cube-mirror coordinate system is computed. The surveillance cameras then photograph a target to obtain their intrinsic and extrinsic parameters; a theodolite measures the angles to some target corner points; the camera-to-theodolite transformation matrices are computed, and from them the camera-to-world transformation matrices. The globally calibrated cameras photograph the object; 3D point clouds are reconstructed binocularly using the parallax principle and monocularly using the MVS method. The redundant parts of the reconstructed clouds are cropped, and the clouds are filtered and downsampled. The preprocessed clouds are transformed with the globally calibrated camera-to-world matrices, achieving coarse registration. The fast, high-precision AAICP algorithm then performs fine registration. Finally, the registered clouds are merged into one large cloud and duplicate points are deleted, yielding a defect-free 3D point cloud of the object.
Embodiment 1:
As shown in Fig. 1, the defect-free 3D point cloud reconstruction method based on multiple surveillance cameras and point cloud fusion provided by this embodiment comprises the following steps.
Step 101: a theodolite measures the angles to the two ends of a standard rod of known length; the theodolites are autocollimated with the cube mirror and the angles recorded; the crosshair angles on the corresponding cube-mirror faces are measured; and the transformation matrix between the theodolite coordinate system and the cube-mirror coordinate system is computed, as follows.
(1) Each theodolite's built-in autocollimating laser is autocollimated with a reflective face of the cube mirror, and the angles are measured. The two theodolites are then aimed at each other so that their lasers are mutually collimated, and the angles measured at this point are taken as the initial angles.
(2) As shown in Fig. 2, two theodolites can solve the 3D coordinates of any point in space from its measured angles, but the horizontal baseline length between them must be known first. The theodolites measure the angles to the two ends of a standard rod of known length; substituting these into the 3D coordinate solution equations together with the distance equation $L^2 = (x_{T2}-x_{T1})^2 + (y_{T2}-y_{T1})^2 + (z_{T2}-z_{T1})^2$ yields the theodolite baseline length, after which the 3D coordinates of any point can be solved.
The theodolite baseline length b is calculated from these measurements, where L is the known length of the standard rod and $\alpha_{A1}$, $\beta_{A1}$, $\alpha_{A2}$, $\beta_{A2}$ are the horizontal and vertical angles measured to the two ends of the rod.
Once the baseline length is known, the coordinates of any point in space in the theodolite coordinate system can be computed from its two measured angle pairs.
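Since the closed-form expressions are given only by reference to Fig. 2, the following numpy sketch reconstructs the standard forward-intersection geometry rather than the patent's exact formulas. It assumes theodolite $T_1$ at the origin with $T_2$ on its x-axis, horizontal angles measured from the baseline, and vertical angles from the horizontal plane; all function and variable names are illustrative.

```python
import numpy as np

def triangulate_theodolites(b, alpha1, beta1, alpha2):
    """Forward intersection of two theodolite rays.

    Assumes T1 at the origin, T2 at (b, 0, 0), horizontal angles
    alpha1/alpha2 measured from the baseline, and the vertical angle
    beta1 measured at T1 from the horizontal plane.
    """
    # Horizontal distance from T1 to the intersection point (sine rule).
    d1 = b * np.sin(alpha2) / np.sin(alpha1 + alpha2)
    x = d1 * np.cos(alpha1)
    y = d1 * np.sin(alpha1)
    z = d1 * np.tan(beta1)  # elevation from T1's vertical angle
    return np.array([x, y, z])

def baseline_from_rod(rod_length, angles1, angles2):
    """Solve the baseline b from a rod of known length.

    angles1/angles2 are the (alpha1, beta1, alpha2) triples measured
    to the two rod endpoints. With fixed angles the reconstructed
    geometry scales linearly with b, so one trial evaluation at b = 1
    fixes the scale.
    """
    p1 = triangulate_theodolites(1.0, *angles1)
    p2 = triangulate_theodolites(1.0, *angles2)
    return rod_length / np.linalg.norm(p1 - p2)
```

Because the triangulated coordinates scale linearly with the assumed baseline, the rod-length constraint fixes the scale in a single evaluation, mirroring the substitution into the distance equation described above.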
(3) As shown in Fig. 3, using the angles obtained after autocollimating the theodolites with two faces of the cube mirror, the theodolite coordinate system is rotated: its x-axis is first rotated about the y-axis by the horizontal angle and then about the z-axis by the vertical angle, after which it is parallel to the x- and y-axes of the cube-mirror coordinate system. This both establishes the cube-mirror coordinate system and yields the rotation matrix between the theodolite and cube-mirror coordinate systems, where $x_M$, $y_M$, $z_M$ are the axes of the cube-mirror coordinate system expressed as vectors in the theodolite coordinate system, together forming the rotation matrix from the cube-mirror coordinate system to the theodolite coordinate system.
(4) The theodolites measure the crosshair angles on the cube-mirror faces used for autocollimation; combining these with the coordinate transformation equations yields the translation vector between the theodolite and cube-mirror coordinate systems. The rotation matrix and translation vector together give the full coordinate transformation matrix between the two systems.
Since the cube mirror is a cube of known side length, let half the side length be a, and let P and Q be the crosshair points at the centers of the two cube-mirror faces corresponding to theodolites $T_1$ and $T_2$, as shown in Fig. 3. Their homogeneous coordinates in the cube-mirror coordinate system are $P_M = [a\ 0\ 0\ 1]^T$ and $Q_M = [0\ a\ 0\ 1]^T$, and their coordinates in the theodolite coordinate system satisfy the coordinate transformation equation.
Let the angles measured by the two theodolites to the cube-mirror crosshairs be $(\alpha_{AP}, \beta_{AP})$ and $(\alpha_{BQ}, \beta_{BQ})$. Theodolite $T_1$ and point P then satisfy the corresponding angle relations, and likewise theodolite $T_2$ and point Q, where h is the vertical distance between theodolites $T_2$ and $T_1$. If $\beta_{AB}$ is the vertical angle of $T_1$ when the two theodolites are mutually collimated, then
$h = b \tan\beta_{AB}$
Combining the coordinate transformation equations with the distance equations gives a system of equations from which the translation vector between the theodolite and cube-mirror coordinate systems is solved, where $x_{PT}$ and $x_{QT}$ are the x-coordinates, in the theodolite coordinate system, of the crosshair points on the autocollimated faces, and $T_1$, $T_2$, $T_3$ are the elements of the translation vector to be determined.
Step 102: the surveillance cameras photograph the target to obtain the camera intrinsic and extrinsic parameters; a theodolite measures the angles to some target corner points; the camera-to-theodolite transformation matrix is computed, and from it the camera-to-world transformation matrix, as follows.
(1) Each surveillance camera photographs the target as its position is varied, and the target corner points are extracted from the images; for binocular cameras the corner points of the left and right images must also be matched.
(2) Since each camera's intrinsic parameters are fixed while the extrinsic parameters change with every target position, a system of equations relating the target corner points to the intrinsics can be built from multiple target images and solved for the intrinsic and extrinsic parameters. The relationship between the target world coordinate system, the camera coordinate system, and the pixel coordinate system is shown in Fig. 4. In these equations, dx and dy are the physical sizes of a CCD pixel in the x and y directions, f is the camera focal length, s is the skew coefficient (generally 0), and $u_0$ and $v_0$ are the pixel coordinates of the principal point; $f_x$, $f_y$, $u_0$, $v_0$ are the intrinsic parameters to be calibrated.
(3) For a binocular camera, after the intrinsic and extrinsic parameters of each individual camera are obtained, the coordinate transformation matrix between the left and right cameras is solved jointly from the matched corner positions.
Since monocular calibration already yields the left- and right-camera extrinsic matrices for every target image, any point $P_W$ on the target can be expressed in both the left and right camera coordinate systems, where $R_l, T_l$ and $R_r, T_r$ denote the rotation and translation from the target world coordinate system to the left and right camera coordinate systems respectively. Each image contributes one set of these four matrices; stacking the equations from multiple images gives the transformation between the left and right camera coordinate systems as $R = R_r R_l^{-1}$ and $T = T_r - R\,T_l$.
Because monocular calibration contains errors, the R and T solved from the extrinsic matrices of different images are not exactly equal; the reprojection error is therefore computed and iteratively minimized to find the optimal R and T.
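In practice, steps (1) through (3) correspond closely to OpenCV's checkerboard calibration pipeline. The sketch below is one hedged way to realize them, assuming a planar checkerboard target and a list `image_pairs` of (left, right) grayscale views; the board dimensions, square size, and flags are illustrative, not values from this application.

```python
import cv2
import numpy as np

pattern, square = (9, 6), 0.02  # inner-corner count and square size: illustrative
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_l, img_r = [], [], []
for left, right in image_pairs:  # assumed list of (left, right) grayscale images
    ok_l, c_l = cv2.findChessboardCorners(left, pattern)
    ok_r, c_r = cv2.findChessboardCorners(right, pattern)
    if ok_l and ok_r:            # keep only views where both cameras see the board
        obj_pts.append(objp)
        img_l.append(c_l)
        img_r.append(c_r)

size = image_pairs[0][0].shape[::-1]  # (width, height)
# Intrinsics per camera, then joint refinement of the stereo extrinsics R, T.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_l, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_r, size, None, None)
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, img_l, img_r, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```

The returned `ret` is the reprojection error that the iterative refinement described above minimizes.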
(4) In practice the camera exhibits the distortion illustrated in Fig. 5. Calibrating the intrinsic and extrinsic parameters also yields the distortion coefficients, after which the image is undistorted: the image pixel coordinates are converted to image-plane coordinates using the intrinsics, substituted into the distortion correction equations, and the result converted back into new pixel coordinates; integer interpolation of these non-integer coordinates gives the distortion-corrected image.
In the distortion model, $(x_u, y_u)$ are the image-plane coordinates of the undistorted point an ideal camera would record, $(x_d, y_d)$ are the image-plane coordinates of the actual point, and $(x_c, y_c)$ are the image-plane coordinates of the principal point; r is the distortion radius (the radial distance of the image point from the principal point), $k_1$, $k_2$ are the radial distortion coefficients, and $p_1$, $p_2$ are the tangential distortion coefficients.
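For concreteness, here is a minimal numpy sketch of the radial-plus-tangential (Brown) model with the coefficients $k_1$, $k_2$, $p_1$, $p_2$ named above, applied in normalized image coordinates. The direction of the mapping and the fixed-point inversion are common conventions, assumed rather than taken from this application.

```python
import numpy as np

def distort(xu, yu, k1, k2, p1, p2):
    """Map undistorted normalized coordinates to distorted ones."""
    r2 = xu**2 + yu**2
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = xu * radial + 2 * p1 * xu * yu + p2 * (r2 + 2 * xu**2)
    yd = yu * radial + p1 * (r2 + 2 * yu**2) + 2 * p2 * xu * yu
    return xd, yd

def undistort(xd, yd, k1, k2, p1, p2, iters=10):
    """Invert the model by fixed-point iteration, as done when
    correcting image pixels back to ideal coordinates."""
    xu, yu = xd, yd
    for _ in range(iters):
        x2, y2 = distort(xu, yu, k1, k2, p1, p2)
        xu, yu = xu - (x2 - xd), yu - (y2 - yd)
    return xu, yu
```

In an OpenCV-based pipeline, `cv2.undistort` performs the equivalent correction directly from the calibrated coefficients.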
(5) For a binocular camera, the left and right camera coordinate systems must additionally be rotated so that their axes are mutually parallel and their coordinate planes coplanar, as follows.
Stereo calibration of the binocular camera yields the rotation matrix R and translation matrix T between the left and right cameras, as in Fig. 6(a). As in Fig. 6(b), the left camera coordinate system is rotated by half of R in the positive direction; call this rotation $R_{h1}$. As in Fig. 6(c), the right camera coordinate system is rotated by half of R in the opposite direction; call this rotation $R_{h2}$. After these half-rotations the left and right camera coordinate systems are parallel but not coplanar, as in Fig. 6(d). As in Fig. 6(e), a spatial point P then satisfies $P_L = P_R + R_{h2}T$ between the two systems. The parallel but non-coplanar systems are then rotated by $R_{rec}$ until parallel to the rectification coordinate system $O_{rec}x_{rec}y_{rec}z_{rec}$, making them parallel and coplanar; as in Fig. 6(f), the $x_{rec}$ axis is parallel to the line joining the left and right camera origins, and the $x_{rec}O_{rec}z_{rec}$ plane is parallel to the $Z_L O_L O_R Z_R$ plane.
For the rectification coordinate system, let $t = R_{h2}T$; the axes of the rectification system projected into the camera coordinate system are $x_{rec} = t/\lVert t \rVert$, $y_{rec} = [0\ 0\ 1]^T \times x_{rec}$, and $z_{rec} = x_{rec} \times y_{rec}$, so the rotation between the rectification and camera coordinate systems is $R_{rec} = [x_{rec}\ y_{rec}\ z_{rec}]^T$. The final left and right camera rotation matrices follow by composing $R_{rec}$ with the half-rotations.
Substituting all left and right image point coordinates into these equations produces the stereo-rectified binocular images; the result is shown in Fig. 7, where (a) is the target image before stereo rectification and (b) after.
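The half-rotation construction above is essentially Bouguet-style rectification, which OpenCV implements directly. A hedged sketch, carrying over `K1, d1, K2, d2, R, T, size` from the calibration sketch earlier, with `left`/`right` one raw image pair:

```python
import cv2

# Rectifying rotations R1/R2, new projections P1/P2, and the disparity-to-
# depth matrix Q used later for reconstruction.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

# Per-camera remapping tables, then the rectified image pair.
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
rect_l = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
rect_r = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
```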
(6) A theodolite measures the angles to some of the target corner points, and their coordinates in the theodolite coordinate system are solved. Combining these with the same corners' camera-frame coordinates obtained during camera parameter calibration, the SVD decomposition method solves the coordinate transformation matrix between the camera and theodolite coordinate systems.
From the m corner correspondences the matrix $H = \sum_{i=1}^{m} (P_{C,i} - \bar{P}_C)(P_{T,i} - \bar{P}_T)^T$ is constructed, where $\bar{P}_C$ and $\bar{P}_T$ are the mean coordinates of the m known points in the camera and theodolite coordinate systems. SVD decomposition of H yields two orthonormal matrices and a diagonal matrix:
$U \cdot \Delta \cdot V^T = \mathrm{SVD}(H)$
where U holds the normalized eigenvectors of $HH^T$, V holds the normalized eigenvectors of $H^TH$, and $\Delta$ is the diagonal matrix of singular values (the square roots of the shared eigenvalues of $HH^T$ and $H^TH$).
The sign of the determinant $|V \cdot U^T|$ is then computed: if it is positive, the coordinate rotation matrix is $R = VU^T$; if it is negative, the sign of one column of V is flipped before re-evaluating. The coordinate translation is then obtained from the coordinates of a point P in the two coordinate systems, for example from the centroids as $T = \bar{P}_T - R\,\bar{P}_C$.
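This SVD step is the classical Kabsch solution for a rigid transform between corresponding point sets. A compact numpy sketch, assuming `P_cam` and `P_theo` are (m, 3) arrays of the same corners in the camera and theodolite frames:

```python
import numpy as np

def rigid_transform_svd(P_cam, P_theo):
    """Least-squares R, T with P_theo ≈ R @ P_cam + T (Kabsch method)."""
    cen_c, cen_t = P_cam.mean(axis=0), P_theo.mean(axis=0)
    H = (P_cam - cen_c).T @ (P_theo - cen_t)  # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # reflection guard (sign flip)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = cen_t - R @ cen_c                     # translation from the centroids
    return R, T
```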
(7) Multiplying the camera-to-theodolite coordinate transformation matrix by the theodolite-to-cube-mirror coordinate transformation matrix gives the transformation matrix between the camera coordinate system and the cube-mirror coordinate system (the world coordinate system); at this point the global calibration of the surveillance cameras is complete.
Step 103: the globally calibrated cameras photograph the object; 3D point clouds are reconstructed binocularly using the parallax principle and monocularly using the MVS method, as follows.
(1) All globally calibrated surveillance cameras photograph the object simultaneously.
(2) For the binocular camera, the disparity map of the left and right images is solved using the parallax principle, and the 3D point cloud is reconstructed from the disparity map.
(3) For the monocular camera, image keypoints are extracted and matched, a sparse point cloud is reconstructed from the keypoints, and a dense point cloud is then reconstructed with the MVS method.
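For the binocular branch, a hedged OpenCV sketch of disparity computation and reprojection, reusing the rectified pair `rect_l`, `rect_r` and the Q matrix from the rectification sketch; the SGBM parameters are illustrative and depend on the scene's depth range and image resolution:

```python
import cv2
import numpy as np

# Semi-global block matching on the rectified pair.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disp = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0  # 16.4 fixed point

# Reproject each valid disparity to a 3D point in the left camera frame.
points = cv2.reprojectImageTo3D(disp, Q)
mask = disp > disp.min()            # drop invalid/unmatched disparities
cloud = points[mask].reshape(-1, 3)
```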
Step 104: the redundant parts of the reconstructed point clouds are cropped, and the clouds are filtered and downsampled, as follows.
(1) The main body of the cloud is identified, and thresholds on the three coordinate axes or a spherical region are set to crop away the redundant parts.
(2) Filtering methods such as bilateral filtering, Gaussian filtering, conditional filtering, pass-through filtering, and random sample consensus (RANSAC) based filtering are applied to the cloud to remove outliers and noise points.
(3) To speed up point cloud registration, the cloud is downsampled: it is divided into voxel grids of equal width, and within each voxel only the point closest to the voxel center is kept. This greatly reduces the point count while preserving the overall shape.
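A sketch of this preprocessing with Open3D, under illustrative file names and thresholds. One small divergence to note: Open3D's voxel downsampling averages the points in each voxel rather than keeping the point nearest the voxel center, which usually serves the same purpose.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("view0.ply")  # illustrative file name

# Crop to an axis-aligned region of interest around the object body.
roi = o3d.geometry.AxisAlignedBoundingBox(np.array([-0.5, -0.5, 0.0]),
                                          np.array([0.5, 0.5, 1.0]))
pcd = pcd.crop(roi)

# Statistical outlier removal, then voxel-grid downsampling.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
pcd = pcd.voxel_down_sample(voxel_size=0.005)
```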
Step 105: the preprocessed point clouds are transformed with the camera-to-world transformation matrices from the global calibration, achieving coarse registration, as follows.
(1) The coordinates of the monocular camera's reconstructed point cloud in the monocular camera coordinate system are solved from the captured images.
For images taken at two different positions, the coordinate transformation matrix between the two camera poses must be solved first, which can be done with epipolar geometry. As shown in Fig. 8, C and C′ are the camera optical centers at the two positions, and the images of an arbitrary spatial point M on the two image planes are m and m′. Every point $M_1$ on the ray CM images to m on image plane I, while its image on plane I′ lies on the epipolar line $l'_m$; likewise every point $M'_1$ on C′M images onto the epipolar line $l_m$ on plane I. The epipolar lines on both planes pass through the intersections e and e′ of the line CC′ joining the optical centers with the corresponding image planes. Because m and m′ are corresponding image points, epipolar geometry reduces feature matching between the two images from a two-dimensional to a one-dimensional search, lowering the computation and time cost of matching.
Let the coordinates of the spatial point M in the two camera coordinate systems be $X = [x\ y\ z]^T$ and $X' = [x'\ y'\ z']^T$, and its pixel coordinates in the two images $p = [u\ v\ 1]^T$ and $p' = [u'\ v'\ 1]^T$; the corresponding normalized image-plane coordinates are $\hat{x} = K^{-1}p$ and $\hat{x}' = K^{-1}p'$. With the transformation between the two camera coordinate systems satisfying $X = RX' + T$, in normalized coordinates this reads $z\hat{x} = z'R\hat{x}' + T$.
Since z and z′ are unknown, both sides are cross-multiplied on the left by T to eliminate the translation term, then left-multiplied by $\hat{x}^T$. Because the vector $T \times \hat{x}$ is perpendicular to both T and $\hat{x}$, its dot product with $\hat{x}$ vanishes, giving the epipolar constraint $\hat{x}^T (T \times R)\,\hat{x}' = 0$.
Here $E = T \times R$ (T acting as its skew-symmetric cross-product matrix) is the essential matrix. Since the image-plane and pixel coordinate systems are related through the intrinsic matrix K, the pixel coordinates satisfy:
$p^T K^{-T} (T \times R) K^{-1} p' = p^T F p' = 0$
where $F = K^{-T}(T \times R)K^{-1}$ is the fundamental matrix. Since the intrinsic matrix K is known, only the six parameters of the coordinate transformation need to be solved. A feature point matching algorithm finds corresponding features in the two images, multiple correspondence pairs are substituted into the equation above, and least squares yields the transformation parameters.
Once the coordinate transformation matrix is obtained, triangulation estimates the depth of each spatial point and hence its coordinates in the monocular camera coordinate system. Cross-multiplying both sides of $z\hat{x} = z'R\hat{x}' + T$ on the left by $K^{-1}p$ and substituting the pixel coordinates gives:
$z'K^{-1}p \times RK^{-1}p' + K^{-1}p \times T = 0$
Only z′ is unknown here, so it can be solved directly, and z likewise. Pixel and camera coordinates satisfy $zp = KX$, from which the camera-frame coordinates follow. Epipolar geometry plus triangulation thus yields the coordinates of each spatial point in the monocular camera coordinate system.
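OpenCV packages this chain (essential matrix from matches, pose recovery, triangulation). A hedged sketch, assuming `pts1`, `pts2` are (N, 2) float arrays of matched pixel coordinates and K the calibrated intrinsic matrix; note the recovered translation, and hence the monocular cloud, is determined only up to scale:

```python
import cv2
import numpy as np

# Essential matrix with RANSAC outlier rejection, then R, t between views.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# Projection matrices [I|0] and [R|t], then linear triangulation.
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([R, t])
X_h = cv2.triangulatePoints(P0, P1, pts1.T, pts2.T)  # 4xN homogeneous
X = (X_h[:3] / X_h[3]).T                             # Nx3 points, up to scale
```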
(2) The coordinates of the binocular camera's reconstructed point cloud in the binocular camera coordinate system are solved from the captured image pair.
Because the two cameras of the binocular rig are fixed relative to each other, the step of solving the inter-camera transformation matrix is unnecessary, unlike the monocular case. Both the left and right cameras satisfy the intrinsic matrix equation, and the transformation between them is already known from the matched corner points. After feature matching between the left and right images, a system of equations is obtained with six unknowns $x_l, y_l, z_l, x_r, y_r, z_r$ and seven equations, so the coordinates of each spatial point in the binocular camera coordinate system can be solved directly.
(3) The coordinate transformation matrices between each camera coordinate system and the world coordinate system, obtained from the global calibration, transform the point clouds solved in steps (1) and (2) from their respective camera frames into the common world frame, achieving coarse point cloud registration.
The accuracy of this coarse registration depends on the global calibration accuracy. Because theodolites and a cube mirror are used for calibration, that accuracy is very high, so the coarse registration is correspondingly accurate and provides good initial conditions for fine registration.
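Coarse registration itself then reduces to applying each camera's calibrated pose. A minimal sketch, where `camera_poses` is an assumed list of (R, T) pairs mapping each camera frame into the world (cube-mirror) frame and `clouds` the per-view point arrays:

```python
import numpy as np

def to_world(points, R_cw, T_cw):
    """Map an (N, 3) cloud from a camera frame into the world frame
    using that camera's globally calibrated pose."""
    return points @ R_cw.T + T_cw

# Every view ends up in one common frame; no feature matching is needed.
world_clouds = [to_world(c, R, T) for c, (R, T) in zip(clouds, camera_poses)]
```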
Step 106: the fast, high-precision AAICP algorithm performs fine registration on the coarsely registered point clouds, as follows.
(1) Randomly select a point set $p_i \in P$ from the source point cloud P.
(2) Estimate the corresponding point set $q_i \in Q$ in the target point cloud Q such that $\lVert q_i - p_i \rVert = \min$.
(3) Apply the SVD decomposition method to these two point sets to solve the coordinate transformation matrix between the two clouds.
(4) Transform $p_i$ with the coordinate transformation matrix computed in (3), obtaining the new corresponding point set $p'_i = \{p'_i = R \cdot p_i + T,\ p_i \in P\}$.
(5) Compute the average distance d between $p'_i$ and $q_i$.
(6) Set a distance threshold: if d is below the threshold, or the iteration count exceeds the maximum, stop the iteration; the transformation matrix computed at this point is the optimal one. Otherwise return to (2) and continue iterating.
(7) Anderson acceleration is applied to speed up the convergence of the ICP iteration, greatly increasing registration speed without loss of accuracy.
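A plain point-to-point ICP loop following steps (1) through (6), using scipy's KD-tree for correspondence search and the `rigid_transform_svd` helper from the earlier calibration sketch; the Anderson-accelerated update of step (7) is omitted for brevity, so this is baseline ICP rather than AAICP:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(P, Q, max_iter=50, dist_thresh=1e-4):
    """Point-to-point ICP of source P onto target Q ((N, 3) arrays).
    Returns the accumulated rotation R and translation T."""
    R, T = np.eye(3), np.zeros(3)
    tree = cKDTree(Q)                    # target stays fixed
    src = P.copy()
    for _ in range(max_iter):            # step (6): iteration cap
        dist, idx = tree.query(src)      # step (2): nearest neighbours in Q
        dR, dT = rigid_transform_svd(src, Q[idx])  # step (3)
        src = src @ dR.T + dT            # step (4): apply the increment
        R, T = dR @ R, dR @ T + dT       # compose with the running transform
        if dist.mean() < dist_thresh:    # steps (5)+(6): mean-distance test
            break
    return R, T
```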
Step 107: the registered point clouds are merged into one large cloud, and duplicate points are identified and deleted, yielding a defect-free 3D object point cloud, as follows.
(1) The multiple registered point clouds are concatenated directly.
(2) A KD-tree is built over the merged cloud; the partitioning of 3D points into a KD-tree is illustrated in Fig. 9.
(3) All points in the cloud are traversed one by one. For a given point, the KD-tree is descended from the root, comparing the point's value in each node's split dimension against that node's median: if smaller, the left subtree is visited, otherwise the right, until a leaf is reached. The distance between the point and the data stored in the leaf is computed, and leaf points closer than the radius threshold are marked.
(4) A backtracking pass then ascends from the leaf, checking whether other nodes could contain points within the distance threshold; if so, those subtrees are descended and their sub-threshold points marked as well. During the subsequent traversal, any already-marked point is skipped.
(5) The marked points are deleted from the large cloud, keeping only the unmarked points; point cloud fusion is complete.
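A compact version of this fusion using scipy's KD-tree, which performs the build, radius query, and marking of steps (2) through (5) in one pass; the duplicate radius is an illustrative value:

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse(clouds, radius=0.002):
    """Concatenate registered clouds and drop near-duplicate points:
    of every pair closer than `radius`, one of the two is removed."""
    merged = np.vstack(clouds)
    tree = cKDTree(merged)
    keep = np.ones(len(merged), dtype=bool)
    for i, j in tree.query_pairs(radius):  # all index pairs within radius
        if keep[i] and keep[j]:            # skip points already marked
            keep[max(i, j)] = False        # mark one of the pair for deletion
    return merged[keep]
```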
Embodiment 2:
A defect-free 3D point cloud reconstruction device based on multiple surveillance cameras and point cloud fusion, comprising:
a theodolite 3D coordinate measurement module configured to measure the angles to any point in space and solve its 3D coordinates;
a camera global calibration module configured to obtain the intrinsic and extrinsic parameters of each surveillance camera and the transformation matrix between each camera coordinate system and the world coordinate system; and
a multi-camera 3D point cloud fusion module configured to preprocess the point clouds, coarsely register them according to the global calibration results, finely register them with a conventional algorithm, and finally fuse the multiple point clouds into one complete, defect-free 3D object point cloud.
The effect of the proposed method is shown in Fig. 10, where (a) and (b) are the source and target point clouds and (c) and (d) are top and side views of the AAICP fine registration result. Compared with traditional methods, the proposed method reconstructs 3D point clouds with higher accuracy and speed and better robustness. Calibrating the surveillance cameras with theodolites and a cube mirror yields more accurate intrinsic and extrinsic parameters; using the calibration results for coarse registration is faster and more robust than traditional coarse registration algorithms; and the AAICP algorithm provides more accurate and faster fine registration. The system therefore offers good real-time performance and can support defect-free real-time reconstruction of industrial object point clouds, studies of object surface morphology, terrain analysis, and 3D visualization.
The embodiments described above only illustrate the technical solution of the present application and do not limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the embodiments may still be modified, or some of their technical features replaced by equivalents; such modifications and replacements do not remove the essence of the corresponding technical solutions from the spirit and scope of the embodiments of this application, and all fall within its scope of protection.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202310636189.2A | 2023-05-31 | 2023-05-31 | A defect-free 3D point cloud reconstruction method and device based on multi-surveillance cameras and point cloud fusion
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202310636189.2A | 2023-05-31 | 2023-05-31 | A defect-free 3D point cloud reconstruction method and device based on multi-surveillance cameras and point cloud fusion
Publications (1)
Publication Number | Publication Date |
---|---|
CN116681827A (en) | 2023-09-01
Family
ID=87780406
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310636189.2A Pending CN116681827A (en) | 2023-05-31 | 2023-05-31 | A defect-free 3D point cloud reconstruction method and device based on multi-surveillance cameras and point cloud fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116681827A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116863086A (en) * | 2023-09-04 | 2023-10-10 | 武汉国遥新天地信息技术有限公司 | Rigid body stable reconstruction method for optical motion capture system |
CN116862999A (en) * | 2023-09-04 | 2023-10-10 | 华东交通大学 | A calibration method, system, equipment and medium for dual-camera three-dimensional measurement |
CN116863086B (en) * | 2023-09-04 | 2023-11-24 | 武汉国遥新天地信息技术有限公司 | Rigid body stable reconstruction method for optical motion capture system |
CN116862999B (en) * | 2023-09-04 | 2023-12-08 | 华东交通大学 | Calibration method, system, equipment and medium for three-dimensional measurement of double cameras |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |