CN109544599B - A 3D Point Cloud Registration Method Based on Camera Pose Estimation


Info

Publication number
CN109544599B
CN109544599B (application CN201811400144.0A; publication CN109544599A)
Authority
CN
China
Prior art keywords
dimensional
point cloud
pts
dimensional point
relative
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811400144.0A
Other languages
Chinese (zh)
Other versions
CN109544599A (en
Inventor
荆海龙
朱江平
周佩
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201811400144.0A priority Critical patent/CN109544599B/en
Publication of CN109544599A publication Critical patent/CN109544599A/en
Application granted granted Critical
Publication of CN109544599B publication Critical patent/CN109544599B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a three-dimensional point cloud registration method based on camera pose estimation. By organically combining the three-dimensional modeling process with the automatic registration of three-dimensional point clouds, the multi-view three-dimensional modeling workflow is optimized, so that the automatic registration of three-dimensional point clouds is transformed into the problem of estimating the relative position and attitude of the camera between adjacent views. Moreover, compared with traditional point cloud registration methods, the invention makes full use of the camera's intrinsic parameters to simplify and accelerate the search for corresponding point sets, and screens the corresponding point sets using the reliability information of the three-dimensional points obtained during modeling, thereby effectively suppressing the influence of outliers on the three-dimensional point cloud matching results, accelerating the convergence of the matching process, and improving the matching accuracy.

Description

A 3D Point Cloud Registration Method Based on Camera Pose Estimation

Technical Field

The invention relates to the field of reverse engineering, and in particular to a three-dimensional point cloud registration method based on camera pose estimation.

Background Art

With the advancement of Made in China 2025 and the "Industry 4.0" initiatives of countries around the world, three-dimensional modeling technology has developed rapidly and penetrated many manufacturing fields. Since the launch of Microsoft's Kinect depth sensor, Intel's RealSense, and Apple's iPhone X, three-dimensional modeling has swept the consumer electronics field, and it is now widely applied across many industries.

Acquiring three-dimensional information of an object by measurement is the most accurate way to obtain its spatial geometry in the field of three-dimensional modeling. The set of three-dimensional points obtained by a measurement is usually called a point cloud. Because of shadows and occlusions, most three-dimensional measuring instruments cannot capture the complete shape of an object in a single measurement; one measurement can only collect three-dimensional data of part of the object's surface from a certain viewing angle. To measure the entire surface, the measurement results from different views must be fused. Since the data collected in different measurements do not share a coordinate system, the point clouds obtained from the individual views must be unified into one common coordinate system; this process is called point cloud registration. The registration of multi-view point clouds is the basis for the subsequent stitching and fusion of the data, and its speed and accuracy directly affect the quality of three-dimensional modeling and the usefulness of the point cloud data in practical workflows.

In the field of point cloud registration, the most classic and widely used method is the iterative closest point (ICP) algorithm proposed by Besl et al. (Besl P J, McKay N D. A method for registration of 3-D shapes. IEEE). It uses Newton iteration or a search method to find the closest point pairs between two point clouds and iterates with the Euclidean distance as the objective function, thereby obtaining a three-dimensional rigid-body transformation. Owing to its high accuracy, ICP quickly became the mainstream algorithm for multi-view point cloud registration; with its widespread adoption, researchers have studied the algorithm in great detail, and most methods in use today are developments and improvements of it.
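For orientation, the point-to-point scheme described above can be condensed into a short sketch. This is an illustrative numpy implementation, not code from the cited paper; the brute-force nearest-neighbor search stands in for the KD-tree used in practice, and the helper best_rigid_transform reappears in the sketch for step S08 below.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst, via SVD."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, max_iters=50, tol=1e-6):
    """Classic ICP: alternate closest-point pairing and rigid alignment."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(max_iters):
        # brute-force closest points (a KD-tree would be used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        nn = d2.argmin(axis=1)
        err = np.sqrt(d2[np.arange(len(cur)), nn]).mean()
        if abs(prev_err - err) < tol:   # Euclidean objective has converged
            break
        prev_err = err
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
    return cur
```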

The normal distributions transform proposed by Martin Magnusson (The Three-Dimensional Normal Distributions Transform: an Efficient Representation for Registration, Surface Analysis and Loop Detection) applies a statistical model to the three-dimensional points and uses standard optimization techniques to determine the optimal match between two point clouds. Because it does not compute and match features of individual corresponding points during registration, its processing time is shorter than that of the iterative closest point method.

However, both of these existing classes of methods consider the registration problem only from the point cloud data themselves and do not effectively exploit the information available during single-view three-dimensional data acquisition. Moreover, no method has been reported that treats three-dimensional data acquisition and point cloud fusion together as one unified process for modeling the complete surface of an object.

Summary of the Invention

The purpose of the present invention is to provide a new three-dimensional point cloud registration method that can effectively use the information generated during single-view three-dimensional data acquisition, so as to improve both the accuracy and the speed of point cloud registration.

To achieve the above purpose, the present invention provides the following technical solution:

A three-dimensional point cloud registration method based on camera pose estimation comprises the following steps:

Step S01: read a new set of three-dimensional modeling images, from which three-dimensional modeling can be completed;

Step S02: extract the reliable region of the three-dimensional modeling images to obtain a reliability map;

Step S03: according to the reliability map and the calibrated parameters of the measurement system, calculate the three-dimensional coordinates Pts_{u,v} corresponding to each pixel (u, v) in the reliable region, and form the three-dimensional point cloud {Pts_cur} of the current view from all the calculated coordinates Pts_{u,v};

Step S04: extract depth-information features and color-information features from the current view's point cloud {Pts_cur} to obtain multivariate features {Features_cur} composed of spatial-domain and color-domain features;

Step S05: represent the relative motion between the measured target and the camera as a transformation of the camera's position and attitude, and initialize the relative pose RT_relative between the previous-view and current-view cameras, with the translation parameter T set to 0 and the attitude parameter R set to the identity matrix;

Step S06: transform the previous view to the current view; according to the camera's intrinsic parameters and the relative pose, compute the pixel coordinates (u', v') corresponding to each three-dimensional data point in the previous view's point cloud {Pts_last}, and find, within the reliable region of the current three-dimensional modeling image, the pixel coordinates (u, v)_nearest with the smallest Euclidean distance to (u', v'); the three-dimensional point of {Pts_last} corresponding to (u', v') and the three-dimensional point of {Pts_cur} corresponding to (u, v)_nearest form a corresponding point pair Pair(u, v), and all such pairs form the corresponding point set {Pairs};

Step S07: screen the corresponding point set {Pairs} based on a similarity measure to obtain the credible corresponding point set {Pairs_reliability};

Step S08: based on the credible corresponding point set {Pairs_reliability}, compute the rotation-translation matrix RT_accmu between corresponding points, apply RT_accmu by matrix multiplication to the relative pose RT_relative between the previous-view and current-view cameras, and update RT_relative with the result;

Step S09: determine whether the iteration termination condition is met; if so, proceed to step S10, otherwise return to step S06;

Step S10: according to the specific reason for which the iteration of step S09 terminated, determine whether the current view's point cloud {Pts_cur} is a key frame; if it is, proceed to step S11, otherwise proceed to step S16;

wherein whether the current view's point cloud {Pts_cur} is a key frame is judged as follows:

(1) if the current view's three-dimensional measurement is the first measurement, its point cloud is a key frame by default;

(2) if the current view's point cloud {Pts_cur} is successfully registered with the previous view's point cloud {Pts_last}, and either N_KeyFrameInterval three-dimensional measurements have elapsed since the last key frame, or the relative pose RT_relative between the current measurement and the point cloud of the last key frame exceeds a certain threshold, then {Pts_cur} is a key frame of the first type;

(3) if the registration of {Pts_cur} with {Pts_last} is unsuccessful, the current view's point cloud is a key frame of the second type;

Step S11: determine the type of the current key frame; if it is of the first type, proceed to step S12; if of the second type, proceed to step S13;

Step S12: take all registered point clouds {Pts_all} as the previous view's point cloud {Pts_last}, use the relative pose RT_relative updated in step S08 as the initial relative pose, execute steps S05 to S09, and after completing the point cloud registration substitute the updated RT_relative into step S16;

Step S13: using a best-corresponding-point-set search based on affine invariance theory, match the multivariate features {Features_cur} of the current view's point cloud {Pts_cur} against the multivariate features {Features_all} of all registered point clouds {Pts_all} to obtain the best corresponding point set {KeyPairs_feature};

Step S14: from the best corresponding point set {KeyPairs_feature}, compute the relative pose RT_relative between the current view's point cloud {Pts_cur} and all registered point clouds {Pts_all}, and take the computed RT_relative as the coarse registration result;

Step S15: determine whether the point cloud registration of steps S12 to S14 succeeded; if so, proceed to step S16, otherwise proceed to step S17;

Step S16: apply the relative pose RT_relative computed in step S14 to all registered point clouds {Pts_all}, then add the current view's point cloud {Pts_cur} to {Pts_all}, and update the relative pose of the last key frame with RT_relative;

Step S17: discard the data of the current view's point cloud {Pts_cur};

Step S18: determine whether the three-dimensional data acquisition is complete; if not, return to step S01 to register the three-dimensional point cloud data of the next view.

According to a specific embodiment of the three-dimensional point cloud registration method based on camera pose estimation, whether the registration is successful is judged by: judging whether the mean and variance of the distances between all corresponding points in the corresponding point set meet the requirements; or judging whether the feature-similarity statistics of the corresponding points in the corresponding point set meet the requirements.

According to a specific embodiment of the three-dimensional point cloud registration method based on camera pose estimation, the three-dimensional modeling images are single-frame or multi-frame randomly coded images, or fringe structured-light images.

According to a specific embodiment of the three-dimensional point cloud registration method based on camera pose estimation, when the three-dimensional modeling images are randomly coded images, the reliability map is obtained by taking the average sub-region grayscale fluctuation as the reliability evaluation index, where the average sub-region grayscale fluctuation is:

S̄(u,v) = (1/(H×V)) · Σ_{p∈N(u,v,H,V)} S_p(μ,ν)

where N(u,v,H,V) denotes the H×V neighborhood around pixel (u,v), and S_p(μ,ν) denotes the grayscale fluctuation of the p-th sub-neighborhood, centered at pixel (μ,ν); the 3-neighborhood sub-region grayscale fluctuation S(μ,ν) is:

S(μ,ν) = √( (1/N) · Σ_{i=1}^{N} [I_i(μ,ν) - Ī(μ,ν)]² )

where I_i(μ,ν) is the 3-neighborhood gray mean of pixel (μ,ν) in the i-th three-dimensional modeling image I_i, N is the number of three-dimensional modeling images captured by a single camera to complete one three-dimensional modeling, and Ī(μ,ν) is the gray mean of the 3-neighborhoods of all the three-dimensional modeling images corresponding to the pixel.

According to a specific embodiment of the three-dimensional point cloud registration method based on camera pose estimation, when the three-dimensional modeling images are fringe structured-light images, the reliability map is obtained by taking the fringe modulation as the reliability evaluation index, where the modulation is:

M(x,y) = √( [Σ_{n=0}^{N-1} I_n(x,y)·sin(2πn/N)]² + [Σ_{n=0}^{N-1} I_n(x,y)·cos(2πn/N)]² )

where I_0(x,y) is the background light intensity, C_0(x,y) is the fringe contrast, and N is the number of phase-shift steps of the fringe structured light.

According to a specific embodiment of the three-dimensional point cloud registration method based on camera pose estimation, the three-dimensional modeling result is:

Pts3D = Calc3D(ImgVec, R, SysInfo)

where Calc3D is the modeling function, ImgVec the set of modeling images, R the reliability map, and SysInfo the system parameters.

According to a specific embodiment of the three-dimensional point cloud registration method based on camera pose estimation, the multivariate features obtained in step S04 are:

Pts3D_feature = feature(Pts3D, ImgVec, Radius, FeaID)

where Radius is the size of the surrounding neighborhood used when computing the features of pixel (u, v), FeaID specifies the combination of features, Pts3D is the three-dimensional modeling result, and ImgVec is the set of modeling images.

According to a specific embodiment of the three-dimensional point cloud registration method based on camera pose estimation, in step S07 the corresponding point set {Pairs} is screened based on the similarity measure by keeping the corresponding point pairs whose similarity exceeds a certain threshold and deleting the rest, where the similarity measure is:

Similarity_Pair = w1·Similarity_Color(Pair) + w2·Similarity_Geometry(Pair)

where Similarity_Color(Pair) is the color-feature similarity measure of the corresponding point pair Pair(u, v), Similarity_Geometry(Pair) is its geometric-feature similarity measure, and w1 and w2 are the weights of the color and geometric features, respectively.

Further, the similarity measure of the color features is the covariance of the color features of the corresponding point pair Pair(u, v) in the color domain, and the similarity measure of the geometric features is the covariance of the geometric features of the pair in the spatial domain.

Compared with the prior art, the invention has the following beneficial effects. By organically combining the three-dimensional modeling process with the automatic registration of three-dimensional point clouds, the method optimizes the multi-view three-dimensional modeling workflow, transforming the automatic registration of point clouds into the problem of estimating the relative position and attitude of the camera between adjacent views. Moreover, compared with traditional point cloud registration methods, the invention makes full use of the camera's intrinsic parameters to simplify and accelerate the search for corresponding point sets, and screens the corresponding point sets using the reliability information of the three-dimensional points obtained during modeling, thereby effectively suppressing the influence of outliers on the matching result, accelerating convergence, and improving matching accuracy. In particular, the invention is well suited to applications that acquire multi-view point clouds through the relative motion of the target object and the three-dimensional data acquisition device, and it remains highly adaptable even when adjacent views differ so much that classic point cloud registration methods fail.

Brief Description of the Drawings

FIG. 1 is a schematic flow chart of the present invention.

Detailed Description of Embodiments

The present invention is further described in detail below with reference to test examples and specific embodiments. This should not be understood as limiting the scope of the above subject matter to the following examples; all technologies realized based on the content of the present invention fall within its scope.

As shown in FIG. 1, the three-dimensional point cloud registration method of the invention based on camera pose estimation comprises the following steps:

Step S01: read a new set of three-dimensional modeling images, from which three-dimensional modeling can be completed. Specifically, the images read may be single-frame or multi-frame randomly coded images, or fringe structured-light images. Of course, other forms of coded images may also be read, as long as three-dimensional modeling can be realized with the support of a suitable algorithm.

Step S02: extract the reliable region of the three-dimensional modeling images to obtain a reliability map.

Specifically, when the three-dimensional modeling images are randomly coded images, the reliability map is obtained by taking the average sub-region grayscale fluctuation as the reliability evaluation index. Let the average sub-region grayscale fluctuation be:

S̄(u,v) = (1/(H×V)) · Σ_{p∈N(u,v,H,V)} S_p(μ,ν)

where N(u,v,H,V) denotes the H×V neighborhood around pixel (u,v), and S_p(μ,ν) denotes the grayscale fluctuation of the p-th sub-neighborhood, centered at pixel (μ,ν); the 3-neighborhood sub-region grayscale fluctuation S(μ,ν) is:

S(μ,ν) = √( (1/N) · Σ_{i=1}^{N} [I_i(μ,ν) - Ī(μ,ν)]² )

where I_i(μ,ν) is the 3-neighborhood gray mean of pixel (μ,ν) in the i-th three-dimensional modeling image I_i, N is the number of three-dimensional modeling images captured by a single camera to complete one three-dimensional modeling, and Ī(μ,ν) is the gray mean of the 3-neighborhoods of all the modeling images corresponding to the pixel. Of course, neighborhoods of other scales may be used instead of the 3-neighborhood.

Specifically, when the three-dimensional modeling images are fringe structured-light images, the fringe modulation is taken as the reliability evaluation index to obtain the reliability map. Taking N-step phase shifting as an example, a group of phase-shifted fringes is projected onto the surface of the measured object, and the intensity distribution of the n-th fringe image on the surface can be written as

I_n(x,y) = I_0(x,y)·{1 + C_0(x,y)·cos[Φ(x,y) + 2πn/N]}

where I_0(x,y) is the background light intensity, C_0(x,y) the fringe contrast, and Φ(x,y) the fringe phase distribution. The fringe modulation is then

M(x,y) = √( [Σ_{n=0}^{N-1} I_n(x,y)·sin(2πn/N)]² + [Σ_{n=0}^{N-1} I_n(x,y)·cos(2πn/N)]² )

which simplifies to

M(x,y) = (N/2)·I_0(x,y)·C_0(x,y)

It follows that the modulation M(x,y) is proportional to I_0(x,y)·C_0(x,y), differing from it only by the scale factor N/2, and can therefore serve as the reliability evaluation criterion for the three-dimensional modeling images.

In a specific implementation, when the reliability index of pixel (u, v) exceeds a given threshold, the corresponding three-dimensional modeling result is judged credible; otherwise it is considered unreliable and set to 0. Setting a threshold thus excludes the unreliable regions of the modeling images, so that the credible pixels form an effective region in which three-dimensional modeling can be performed.

Step S03: according to the reliability map and the calibrated parameters of the measurement system, calculate the three-dimensional coordinates Pts_{u,v} corresponding to each pixel (u, v) in the reliable region, and form the three-dimensional point cloud {Pts_cur} of the current view from all the calculated coordinates Pts_{u,v}.

Specifically, to facilitate data-parallel processing in the subsequent registration and thereby raise the computation speed, the point cloud data can be organized analogously to a color image: a rectangular array with the same rows and columns as the input modeling images, in which each element is a three-dimensional spatial coordinate composed of x, y, and z.

The three-dimensional modeling result is:

Pts3D = Calc3D(ImgVec, R, SysInfo)

where Calc3D is the modeling function, ImgVec the set of modeling images, R the reliability map, and SysInfo the system parameters.
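The concrete Calc3D depends on the calibrated structured-light system (SysInfo). As a hedged stand-in, the sketch below back-projects a depth map through a pinhole model with intrinsics K to produce exactly such a matrix-organized cloud; depth, K, and the reliability mask are assumed inputs.

```python
import numpy as np

def organized_cloud(depth, K, reliable):
    """Organized (rows, cols, 3) point cloud from a depth map and intrinsics K.

    Pixels outside the reliable region are set to NaN so the cloud keeps the
    same rows/columns as the modeling images.
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    v, u = np.indices(depth.shape, dtype=np.float64)
    z = np.where(reliable, depth, np.nan)
    return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)
```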

Step S04: extract depth-information features and color-information features from the current view's point cloud {Pts_cur} to obtain multivariate features {Features_cur} composed of spatial-domain and color-domain features.

In implementation, to enable parallel acceleration, the features can be combined according to the actual measurement conditions. For example, in the spatial domain the depth-information features may consist of the normal, the statistical distribution of normals in a spatial neighborhood, the topology of the points in the neighborhood, the statistical distribution of point distances in the neighborhood, the statistical distribution of chromaticity in the neighborhood, and so on, or a weighting of several such geometric features. The points participating in the feature statistics may be all points in the spatial neighborhood or only a subset; the subset can be selected in various ways, for example in a cross-shaped or asterisk-shaped pattern, or at random.

When extracting color features, the image carrying the color information is first converted to the YUV or HSV color space; the color features ignore the luminance channel so as to exclude brightness variations caused by the viewing angle and the surface shape of the object.

Specifically, the multivariate features are:

Pts3D_feature = feature(Pts3D, ImgVec, Radius, FeaID)

where Radius is the size of the surrounding neighborhood used when computing the features of pixel (u, v), FeaID specifies the combination of features, Pts3D is the three-dimensional modeling result, and ImgVec is the set of modeling images. The feature combination may be the color-information features alone, the depth-information features alone, or both together.
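A minimal sketch of one such combination (FeaID fixed, for illustration, to "normal plus hue/saturation"): the normal is taken from neighbor differences on the organized cloud, and the luminance channel is dropped as described above. OpenCV is assumed only for the color-space conversion.

```python
import numpy as np
import cv2

def multivariate_features(cloud, bgr_img):
    """Per-pixel features: surface normal (spatial) + hue/saturation (color).

    cloud: organized (rows, cols, 3) point cloud; bgr_img: uint8 BGR image.
    """
    dx = np.diff(cloud, axis=1, append=cloud[:, -1:, :])   # horizontal neighbor
    dy = np.diff(cloud, axis=0, append=cloud[-1:, :, :])   # vertical neighbor
    n = np.cross(dx, dy)                                   # surface normal
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV).astype(np.float32)
    return np.concatenate([n, hsv[..., :2]], axis=-1)      # (rows, cols, 5)
```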

Step S05: represent the relative motion between the measured target and the camera as a transformation of the camera's position and attitude, and initialize the relative pose RT_relative between the previous-view and current-view cameras, with the translation parameter T set to 0 and the attitude parameter R set to the identity matrix.

Step S06: transform the previous view to the current view; according to the camera's intrinsic parameters and the relative pose, compute the pixel coordinates (u', v') corresponding to each three-dimensional data point in the previous view's point cloud {Pts_last}, and find, within the reliable region of the current three-dimensional modeling image, the pixel coordinates (u, v)_nearest with the smallest Euclidean distance to (u', v'); the three-dimensional point of {Pts_last} corresponding to (u', v') and the three-dimensional point of {Pts_cur} corresponding to (u, v)_nearest form a corresponding point pair Pair(u, v), and all such pairs form the corresponding point set {Pairs}.

Specifically, according to the camera's current extrinsic parameters and its pre-calibrated intrinsic parameters, each valid point of the previous view's point cloud {Pts_last} is projected, through the camera model, into the current view by imaging, yielding its imaging pixel coordinates (u', v') under the current view. The current view's point cloud from step S03 is organized in matrix form.

The camera model, and the perspective projection of three-dimensional points onto the pixel-plane coordinate system according to it, are basic knowledge in this field and are not repeated in this patent.
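Still, for concreteness, here is a numpy sketch of the projection and the nearest-reliable-pixel lookup of step S06, under the standard pinhole model; R and T are the current relative pose, K the pre-calibrated intrinsics, and a KD-tree over the reliable pixels performs the Euclidean search.

```python
import numpy as np
from scipy.spatial import cKDTree

def project_to_current_view(pts_last, K, R, T):
    """Pixel coordinates (u', v') of previous-view points, pinhole model."""
    cam = pts_last @ R.T + T                  # points in the current camera frame
    uv = cam[:, :2] / cam[:, 2:3]             # normalized image coordinates
    return uv * np.array([K[0, 0], K[1, 1]]) + np.array([K[0, 2], K[1, 2]])

def nearest_reliable_pixels(uv, reliable):
    """(u, v)_nearest: reliable pixel closest in Euclidean distance to each (u', v')."""
    ys, xs = np.nonzero(reliable)
    tree = cKDTree(np.column_stack([xs, ys]))
    _, idx = tree.query(uv)
    return np.column_stack([xs[idx], ys[idx]])
```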

Step S07: screen the corresponding point set {Pairs} based on a similarity measure to obtain the credible corresponding point set {Pairs_reliability}. Shadows and occlusions occur during measurement, and the corresponding point pairs formed in such regions are unreliable. Therefore, {Pairs} is screened by keeping the corresponding point pairs whose similarity measure exceeds a certain threshold and deleting the rest, where the similarity measure is:

Similarity_Pair = w1·Similarity_Color(Pair) + w2·Similarity_Geometry(Pair)

where Similarity_Color(Pair) is the color-feature similarity measure of the corresponding point pair Pair(u, v), Similarity_Geometry(Pair) is its geometric-feature similarity measure, and w1 and w2 are the weights of the color and geometric features, respectively.

Further, the similarity measure of the color features is the covariance of the color features of the corresponding point pair Pair(u, v) in the color domain, and the similarity measure of the geometric features is the covariance of the geometric features of the pair in the spatial domain.
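A sketch of this screening follows, with a normalized correlation standing in for the covariance-style measures; the weights and threshold are illustrative values, not prescribed by the patent.

```python
import numpy as np

def feature_similarity(f_last, f_cur):
    """Normalized correlation of paired feature vectors, one score per pair."""
    a = f_last - f_last.mean(axis=-1, keepdims=True)
    b = f_cur - f_cur.mean(axis=-1, keepdims=True)
    return (a * b).sum(-1) / (np.sqrt((a * a).sum(-1) * (b * b).sum(-1)) + 1e-12)

def screen_pairs(color_last, color_cur, geom_last, geom_cur,
                 w1=0.5, w2=0.5, thresh=0.8):
    """Indices of credible pairs under the weighted similarity measure."""
    sim = (w1 * feature_similarity(color_last, color_cur)
           + w2 * feature_similarity(geom_last, geom_cur))
    return np.nonzero(sim > thresh)[0]
```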

Step S08: based on the credible corresponding point set {Pairs_reliability}, compute the rotation-translation matrix RT_accmu between corresponding points, apply RT_accmu by matrix multiplication to the relative pose RT_relative between the previous-view and current-view cameras, and update RT_relative with the result.
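As a sketch, RT_accmu can be solved from the credible pairs by the SVD construction already shown in the ICP sketch above (best_rigid_transform) and composed onto RT_relative as 4×4 homogeneous matrices:

```python
import numpy as np

def update_relative_pose(pts_last_sel, pts_cur_sel, RT_relative):
    """One step-S08 update: solve RT_accmu from the pairs, compose onto RT_relative."""
    R, t = best_rigid_transform(pts_last_sel, pts_cur_sel)  # SVD helper above
    RT_accmu = np.eye(4)
    RT_accmu[:3, :3], RT_accmu[:3, 3] = R, t
    return RT_accmu @ RT_relative                           # matrix composition
```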

Step S09: determine whether the iteration termination condition is met; if so, proceed to step S10, otherwise return to step S06. Specifically, the termination conditions include the number of iterations and the convergence state of the iterative process, where the convergence state refers to the difference between the matrices obtained in two successive iterations; setting iteration termination conditions belongs to the prior art and is not repeated in this patent.
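One common realization of such a test, combining an iteration cap with the change between successive pose matrices (both bounds are illustrative):

```python
import numpy as np

def should_terminate(RT_new, RT_old, it, max_iters=30, eps=1e-6):
    """Stop when the iteration cap is hit or successive poses barely differ."""
    return it >= max_iters or np.abs(RT_new - RT_old).max() < eps
```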

Step S10: according to the specific reason for which the iteration of step S09 terminated, determine whether the current view's point cloud {Pts_cur} is a key frame; if it is, proceed to step S11, otherwise proceed to step S16;

wherein whether the current view's point cloud {Pts_cur} is a key frame is judged as follows:

(1) if the current view's three-dimensional measurement is the first measurement, its point cloud is a key frame by default;

(2) if the current view's point cloud {Pts_cur} is successfully registered with the previous view's point cloud {Pts_last}, and either N_KeyFrameInterval three-dimensional measurements have elapsed since the last key frame, or the relative pose RT_relative between the current measurement and the point cloud of the last key frame exceeds a certain threshold, then {Pts_cur} is a key frame of the first type;

(3) if the registration of {Pts_cur} with {Pts_last} is unsuccessful, the current view's point cloud is a key frame of the second type.

Step S11: determine the type of the current key frame; if it is of the first type, proceed to step S12; if of the second type, proceed to step S13.

The type of the current key frame is determined as follows: if the current view's point cloud is successfully registered with the previous view's point cloud, and the current key frame is at least N_KeyFrameInterval frames after the previous key frame, or the relative position T_KeyFrame and attitude N_KeyFrameRotate between the current view and the previous key frame's view exceed certain thresholds, the current key frame is of the first type; if the registration of the current view's point cloud with the previous view's point cloud is unsuccessful, the current key frame is of the second type. The decision logic is sketched below.
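A compact sketch of this classification; the interval and pose threshold are illustrative placeholders:

```python
def keyframe_type(first_frame, registered_ok, frames_since_key, pose_delta,
                  N_KeyFrameInterval=10, pose_thresh=0.2):
    """'first'/'second' key-frame type, or None for a non-key frame."""
    if first_frame:
        return "first"
    if not registered_ok:
        return "second"      # registration with the previous view failed
    if frames_since_key >= N_KeyFrameInterval or pose_delta > pose_thresh:
        return "first"
    return None              # ordinary frame: go directly to step S16
```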

Step S12: take all registered point clouds {Pts_all} as the previous view's point cloud {Pts_last}, use the relative pose RT_relative updated in step S08 as the initial relative pose, execute steps S05 to S09, and after completing the point cloud registration substitute the updated RT_relative into step S16.

Step S13: using a best-corresponding-point-set search based on affine invariance theory, match the multivariate features {Features_cur} of the current view's point cloud {Pts_cur} against the multivariate features {Features_all} of all registered point clouds {Pts_all} to obtain the best corresponding point set {KeyPairs_feature}.

Step S14: from the best corresponding point set {KeyPairs_feature}, compute the relative pose RT_relative between the current view's point cloud {Pts_cur} and all registered point clouds {Pts_all}, and take the computed RT_relative as the coarse registration result.

Step S15: determine whether the point cloud registration of steps S12 to S14 succeeded; if so, proceed to step S16, otherwise proceed to step S17.

Step S16: apply the relative pose RT_relative computed in step S14 to all registered point clouds {Pts_all}, then add the current view's point cloud {Pts_cur} to {Pts_all}, and update the relative pose of the last key frame with RT_relative; this relative pose is the pose of the last key frame relative to the newly added current-view point cloud {Pts_cur}.

Step S17: discard the data of the current view's point cloud {Pts_cur}.

Step S18: determine whether the three-dimensional data acquisition is complete; if not, return to step S01 to register the three-dimensional point cloud data of the next view.

In the present invention, whether the registration is successful is judged by: judging whether the mean and variance of the distances between the corresponding points of the corresponding point set meet the requirements; or judging whether the feature-similarity statistics of the corresponding points meet the requirements.
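A sketch of the first variant of this test; the tolerances depend on the measurement scale and are purely illustrative:

```python
import numpy as np

def registration_ok(pair_dists, mean_tol=1.0, var_tol=0.5):
    """Success if the pair distances have small enough mean and variance."""
    return pair_dists.mean() < mean_tol and pair_dists.var() < var_tol
```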

The three-dimensional point cloud registration method of the invention, based on camera pose estimation, organically combines the three-dimensional modeling process with the automatic registration of point clouds and optimizes the multi-view modeling workflow, thereby transforming the automatic registration of point clouds into the problem of estimating the relative position and attitude of the camera between adjacent views. Compared with traditional point cloud registration methods, the invention makes full use of the camera's intrinsic parameters to simplify and accelerate the search for corresponding point sets, and screens the corresponding point sets using the reliability information of the three-dimensional points obtained during modeling, thereby effectively suppressing the influence of outliers on the matching result, accelerating convergence, and improving matching accuracy. In particular, the invention is well suited to applications that acquire multi-view point clouds through the relative motion of the target object and the three-dimensional data acquisition device, and it remains highly adaptable even when adjacent views differ so much that classic point cloud registration methods fail.

Claims (9)

1. A three-dimensional point cloud registration method based on camera pose estimation, characterized by comprising the following steps:
step S01: reading a new set of three-dimensional modeling images, from which three-dimensional modeling can be completed;
step S02: extracting the reliable region of the three-dimensional modeling images to obtain a reliability map;
step S03: according to the reliability map and the calibrated parameters of the measurement system, calculating the three-dimensional coordinates Pts_{u,v} corresponding to each pixel (u, v) in the reliable region, and forming the three-dimensional point cloud {Pts_cur} of the current view from all the calculated coordinates Pts_{u,v};
step S04: extracting depth-information features and color-information features from the current view's point cloud {Pts_cur} to obtain multivariate features {Features_cur} composed of spatial-domain and color-domain features;
step S05: representing the relative motion between the measured target and the camera as a transformation of the camera's position and attitude, and initializing the relative pose RT_relative between the previous-view and current-view cameras, with the translation parameter T set to 0 and the attitude parameter R set to the identity matrix;
step S06: transforming the previous view to the current view; according to the camera's intrinsic parameters and the relative pose, computing the pixel coordinates (u', v') corresponding to each three-dimensional data point in the previous view's point cloud {Pts_last}, and finding, within the reliable region of the current three-dimensional modeling image, the pixel coordinates (u, v)_nearest with the smallest Euclidean distance to (u', v'); the three-dimensional point of {Pts_last} corresponding to (u', v') and the three-dimensional point of {Pts_cur} corresponding to (u, v)_nearest form a corresponding point pair Pair(u, v), and all such pairs form the corresponding point set {Pairs};
step S07: screening the corresponding point set {Pairs} based on a similarity measure to obtain the credible corresponding point set {Pairs_reliability};
step S08: based on the credible corresponding point set {Pairs_reliability}, computing the rotation-translation matrix RT_accmu between corresponding points, applying RT_accmu by matrix operation to the relative pose RT_relative between the previous-view and current-view cameras, and updating RT_relative with the result;
step S09: judging whether the iteration termination condition is met; if so, proceeding to step S10, otherwise returning to step S06;
step S10: according to the specific reason for which the iteration of step S09 terminated, judging whether the current view's point cloud {Pts_cur} is a key frame; if so, proceeding to step S11, otherwise proceeding to step S16;
wherein whether the current view's point cloud {Pts_cur} is a key frame is judged as follows:
(1) if the current view's three-dimensional measurement is the first measurement, its point cloud is a key frame by default;
(2) if the current view's point cloud {Pts_cur} is successfully registered with the previous view's point cloud {Pts_last}, and either N_KeyFrameInterval three-dimensional measurements have elapsed since the last key frame, or the relative pose RT_relative between the current measurement and the point cloud of the last key frame exceeds a certain threshold, then {Pts_cur} is a key frame of the first type;
(3) if the registration of {Pts_cur} with {Pts_last} is unsuccessful, the current view's point cloud is a key frame of the second type;
step S11: judging the type of the current key frame; if it is of the first type, proceeding to step S12; if of the second type, proceeding to step S13;
step S12: taking all registered point clouds {Pts_all} as the previous view's point cloud {Pts_last}, using the relative pose RT_relative updated in step S08 as the initial relative pose, executing steps S05 to S09, and after completing the point cloud registration substituting the updated RT_relative into step S16;
step S13: using a best-corresponding-point-set search based on affine invariance theory, matching the multivariate features {Features_cur} of the current view's point cloud {Pts_cur} against the multivariate features {Features_all} of all registered point clouds {Pts_all} to obtain the best corresponding point set {KeyPairs_feature};
step S14: from the best corresponding point set {KeyPairs_feature}, computing the relative pose RT_relative between the current view's point cloud {Pts_cur} and all registered point clouds {Pts_all}, and taking the computed RT_relative as the coarse registration result;
step S15: judging whether the point cloud registration of steps S12 to S14 succeeded; if so, proceeding to step S16, otherwise proceeding to step S17;
step S16: applying the relative pose RT_relative computed in step S14 to all registered point clouds {Pts_all}, then adding the current view's point cloud {Pts_cur} to {Pts_all}, and updating the relative pose of the last key frame with RT_relative;
step S17: discarding the data of the current view's point cloud {Pts_cur};
step S18: judging whether the three-dimensional data acquisition is complete; if not, returning to step S01 to register the three-dimensional point cloud data of the next view.
2. The three-dimensional point cloud registration method based on camera pose estimation according to claim 1, wherein whether the registration is successful is judged by: judging whether the mean and variance of the distances between all corresponding points in the corresponding point set meet the requirements; or judging whether the feature-similarity statistics of the corresponding points in the corresponding point set meet the requirements.
3. The camera-pose-estimation-based three-dimensional point cloud registration method according to claim 1, wherein the three-dimensional modeling image is a single-frame or multi-frame randomly coded image or a fringe structured-light image.
4. The three-dimensional point cloud registration method based on camera pose estimation according to claim 3, wherein when the three-dimensional modeling image is a random coding image, the reliability map is obtained by taking average sub-region gray level fluctuation as a reliability evaluation index; wherein,
let the average subregion grayscale fluctuation be:
Figure FDA0002432840140000041
where N(u, v, H, V) denotes the H × V neighborhood around pixel (u, v), and S_p(μ, ν) denotes the grayscale fluctuation of the p-th sub-neighborhood; the sub-region grayscale fluctuation S(μ, ν) over the 3 × 3 neighborhood of pixel (u, v) is:

S(\mu,\nu) = \frac{1}{N} \sum_{i=1}^{N} \left| I_i(\mu,\nu) - \bar{I}(\mu,\nu) \right|
wherein I_i is the i-th three-dimensional modeling image, N is the number of three-dimensional modeling images taken by a single camera to complete one three-dimensional modeling, and Ī(μ, ν) is the gray-level mean over the 3 × 3 neighborhood of pixel (u, v) across all three-dimensional modeling images.
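Assuming the reconstruction of the fluctuation formulas above (mean absolute deviation of each image from the 3 × 3 neighborhood mean taken over all images, averaged again over an H × V window — an assumption, since the original equation images are not recoverable), a reliability map for randomly coded images could be sketched as follows; uniform_filter supplies the box averages.

import numpy as np
from scipy.ndimage import uniform_filter

def reliability_map(imgs, H=5, V=5):
    # imgs: (N, rows, cols) stack of randomly coded modeling images.
    local_means = uniform_filter(imgs.astype(float), size=(1, 3, 3))
    I_bar = local_means.mean(axis=0)           # 3x3 mean over all N images
    S = np.abs(imgs - I_bar).mean(axis=0)      # grayscale fluctuation S
    return uniform_filter(S, size=(H, V))      # average over H x V window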
5. The three-dimensional point cloud registration method based on camera pose estimation according to claim 3, wherein when the three-dimensional modeling image is a fringe structured light image, the modulation degree of the fringes is used as the reliability evaluation index to obtain the reliability map; the modulation degree computed from the N phase-shifted fringe images is:

M(x,y) = \frac{2}{N} \sqrt{\left[\sum_{n=1}^{N} I_n(x,y)\sin\frac{2\pi n}{N}\right]^2 + \left[\sum_{n=1}^{N} I_n(x,y)\cos\frac{2\pi n}{N}\right]^2}

wherein the n-th fringe image is I_n(x,y) = I_0(x,y)[1 + C_0(x,y)\cos(\varphi(x,y) + 2\pi n/N)], I_0(x, y) is the background light intensity, C_0(x, y) is the contrast of the fringes, φ(x, y) is the fringe phase, and N represents the number of phase-shift steps of the fringe structured light.
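Under the ideal fringe model given in claim 5, the standard N-step modulation formula evaluates to I_0(x, y) · C_0(x, y); a minimal NumPy sketch (the array layout is an assumption) follows.

import numpy as np

def modulation_map(imgs):
    # imgs: (N, rows, cols) stack of the N phase-shifted fringe images I_n.
    N = imgs.shape[0]
    theta = 2.0 * np.pi * np.arange(N) / N
    s = np.tensordot(np.sin(theta), imgs, axes=1)   # sum_n I_n sin(2 pi n/N)
    c = np.tensordot(np.cos(theta), imgs, axes=1)   # sum_n I_n cos(2 pi n/N)
    return (2.0 / N) * np.sqrt(s ** 2 + c ** 2)     # ideally I0 * C0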
6. The camera pose estimation-based three-dimensional point cloud registration method according to claim 4 or 5, wherein the three-dimensional modeling result is:
Pts3D = Calc3D(ImgVec, R, SysInfo)

wherein Calc3D is the modeling function, ImgVec is the vector of modeling images, R is the reliability map, and SysInfo denotes the system parameters.
7. The three-dimensional point cloud registration method based on camera pose estimation according to claim 4 or 5, wherein in the step S04, the obtained multivariate features are:
Pts3D_feature = feature(Pts3D, ImgVec, Radius, FeaID)

wherein Radius is the size of the surrounding neighborhood selected when calculating the features of pixel (u, v), FeaID specifies the combination of features used, Pts3D is the three-dimensional modeling result, and ImgVec is the vector of modeling images.
8. The three-dimensional point cloud registration method based on camera pose estimation according to claim 1, wherein in step S07, the corresponding point set {Pairs} is screened based on the similarity measure as follows: corresponding point pairs whose similarity measure exceeds a certain threshold are retained, and the remaining corresponding point pairs are deleted; wherein the similarity measure is:

Similarity(Pair) = w_1 · Similarity_Color(Pair) + w_2 · Similarity_Geometry(Pair)

wherein Similarity_Color(Pair) is the color feature similarity measure of the corresponding point pair Pair(u, v), Similarity_Geometry(Pair) is the geometric feature similarity measure of the corresponding point pair Pair(u, v), and w_1, w_2 are the weights of the color feature and the geometric feature, respectively.
9. The camera pose estimation-based three-dimensional point cloud registration method according to claim 8, wherein the similarity measure of the color features is the covariance of the color features of the corresponding point pair Pair(u, v) in the color domain, and the similarity measure of the geometric features is the covariance of the geometric features of the corresponding point pair Pair(u, v) in the spatial domain.
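As a hedged sketch of the screening in claims 8 and 9, using the scalar covariance of paired feature vectors as the similarity measure; the weights and the threshold are illustrative assumptions.

import numpy as np

def feature_cov(a, b):
    # Scalar covariance of two feature vectors, following claim 9.
    return float(np.mean((a - a.mean()) * (b - b.mean())))

def screen_pairs(pairs, w1=0.6, w2=0.4, threshold=0.5):
    # pairs: list of (color_a, color_b, geom_a, geom_b) feature-vector tuples,
    # one tuple per corresponding point pair.
    kept = []
    for color_a, color_b, geom_a, geom_b in pairs:
        s = w1 * feature_cov(color_a, color_b) + w2 * feature_cov(geom_a, geom_b)
        if s > threshold:   # retain pairs above the similarity threshold
            kept.append((color_a, color_b, geom_a, geom_b))
    return kept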
CN201811400144.0A 2018-11-22 2018-11-22 A 3D Point Cloud Registration Method Based on Camera Pose Estimation Expired - Fee Related CN109544599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811400144.0A CN109544599B (en) 2018-11-22 2018-11-22 A 3D Point Cloud Registration Method Based on Camera Pose Estimation

Publications (2)

Publication Number Publication Date
CN109544599A CN109544599A (en) 2019-03-29
CN109544599B true CN109544599B (en) 2020-06-23

Family

ID=65850002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811400144.0A Expired - Fee Related CN109544599B (en) 2018-11-22 2018-11-22 A 3D Point Cloud Registration Method Based on Camera Pose Estimation

Country Status (1)

Country Link
CN (1) CN109544599B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188583B (en) * 2023-04-23 2023-07-14 禾多科技(北京)有限公司 Method, device, equipment and computer readable medium for generating camera pose information

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310331B (en) * 2019-06-18 2023-04-14 哈尔滨工程大学 A Pose Estimation Method Based on the Combination of Line Features and Point Cloud Features
CN110675440B (en) * 2019-09-27 2022-07-12 深圳市易尚展示股份有限公司 Confidence evaluation method and device for three-dimensional depth data and computer equipment
CN111476841B (en) * 2020-03-04 2020-12-29 哈尔滨工业大学 A method and system for recognition and positioning based on point cloud and image
CN114332214B (en) * 2020-09-29 2024-12-06 北京三星通信技术研究有限公司 Object posture estimation method, device, electronic device and storage medium
CN112710318B (en) * 2020-12-14 2024-05-17 深圳市商汤科技有限公司 Map generation method, path planning method, electronic device, and storage medium
CN112880562A (en) * 2021-01-19 2021-06-01 佛山职业技术学院 Method and system for measuring pose error of tail end of mechanical arm
CN113112532B (en) * 2021-04-13 2023-04-21 中山大学 Real-time registration method for multi-TOF camera system
CN113409367B (en) * 2021-07-08 2023-08-18 西安交通大学 Stripe projection measurement point cloud point-by-point weighting registration method, equipment and medium
CN117017493A (en) * 2021-09-14 2023-11-10 武汉联影智融医疗科技有限公司 Method and device for determining sleeve pose of surgical robot system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976353B (en) * 2016-04-14 2020-01-24 南京理工大学 Spatial non-cooperative target pose estimation method based on global matching of model and point cloud

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018063236A (en) * 2016-10-13 2018-04-19 バイドゥ ネットコム サイエンス アンド テクノロジー(ペキン) カンパニー リミテッド Method and apparatus for annotating point cloud data
CN106780576A (en) * 2016-11-23 2017-05-31 北京航空航天大学 A camera position and orientation estimation method for RGBD data streams
CN108648240A (en) * 2018-05-11 2018-10-12 东南大学 A non-overlapping field-of-view camera pose calibration method based on point cloud feature map registration

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A fast and robust local descriptor for 3D point cloud registration; Jiaqi Yang et al.; Journal of LATEX Templates; 2017-09-12; pp. 1-21 *
Sparse point cloud registration method for pose estimation of large workpieces; Jiang Detao et al.; Journal of Beijing Information Science and Technology University; 2017-02-28; Vol. 27, No. 1; pp. 89-94 *
Initial registration of point clouds using camera pose estimation; Guo Qingda et al.; Optics and Precision Engineering; 2017-06-30; Vol. 25, No. 6; pp. 1635-1644 *


Similar Documents

Publication Publication Date Title
CN109544599B (en) A 3D Point Cloud Registration Method Based on Camera Pose Estimation
CN111795704B (en) Method and device for constructing visual point cloud map
Zhang et al. Reference pose generation for long-term visual localization via learned features and view synthesis
CN106709947B (en) Three-dimensional human body rapid modeling system based on RGBD camera
EP3382644B1 (en) Method for 3d modelling based on structure from motion processing of sparse 2d images
CN108921895B (en) Sensor relative pose estimation method
Liu Improving ICP with easy implementation for free-form surface matching
JP6216508B2 (en) Method for recognition and pose determination of 3D objects in 3D scenes
CN109961506A (en) A local scene three-dimensional reconstruction method fusing an improved Census transform
CN113177977A (en) Non-contact three-dimensional human body size measuring method
CN102236794A (en) Recognition and pose determination of 3D objects in 3D scenes
Navarro et al. Robust and dense depth estimation for light field images
CN113298934A (en) Monocular visual image three-dimensional reconstruction method and system based on bidirectional matching
CN108921864A (en) A light stripe center extraction method and device
Pound et al. A patch-based approach to 3D plant shoot phenotyping
Kroemer et al. Point cloud completion using extrusions
WO2018133119A1 (en) Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
CN116935013B (en) Method and system for large-scale splicing of circuit board point clouds based on three-dimensional reconstruction
CN112146647B (en) Binocular vision positioning method and chip for ground texture
JP2018195070A (en) Information processing apparatus, information processing method, and program
Sølund et al. A large-scale 3D object recognition dataset
CN106683125A (en) RGB-D image registration method based on 2D/3D mode switching
CN113340201B (en) Three-dimensional measurement method based on RGBD camera
CN111415378A (en) Image registration method for automobile glass detection and automobile glass detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200623

Termination date: 20201122