WO2021114026A1 - A 3D shape matching method and device based on a local reference coordinate system - Google Patents
A 3D shape matching method and device based on a local reference coordinate system
- Publication number
- WO2021114026A1 (PCT/CN2019/124037)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point
- parameter
- axis
- reference coordinate
- feature
- Prior art date
Images
Classifications
- G06F18/22: Matching criteria, e.g. proximity measures (G06F: Electric digital data processing; G06F18/00: Pattern recognition; G06F18/20: Analysing)
- G06V20/653: Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces (G06V: Image or video recognition or understanding; G06V20/00: Scenes; G06V20/64: Three-dimensional objects)
- G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects (G06T: Image data processing or generation, in general)
- G06T17/30: Polynomial surface description
- G06T3/04
- G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume (G06T7/00: Image analysis; G06T7/60: Analysis of geometric attributes)
- G06V10/757: Matching configurations of points or features (G06V10/74: Image or video pattern matching; G06V10/75: Organisation of the matching processes)
- G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT] (G06V10/40: Extraction of image or video features; G06V10/46: Descriptors for shape, contour or point-related descriptors)
Definitions
- This application belongs to the field of 3D shape matching, and in particular relates to a 3D shape matching method and device based on a local reference coordinate system.
- As the most important part of 3D target recognition, 3D shape matching mainly includes methods based on global features and methods based on local features. Although global-feature-based 3D shape matching is fast, local-feature-based 3D shape matching is more robust to occlusion and clutter and makes subsequent pose estimation more accurate.
- the use of 3D local feature descriptors to describe the local features of the 3D point cloud is the core of the whole method, and also the key factor that determines the accuracy of 3D shape matching or 3D target recognition.
- the key is how to establish a repeatable and robust local reference coordinate system for the local features of the 3D point cloud.
- In order to maintain distinctiveness and robustness to occlusion and clutter, many 3D local feature descriptors have been proposed and extensively studied. These descriptors can be classified into two categories, namely descriptors based on a local reference axis (LRA, Local Reference Axis) and descriptors based on a local reference frame (LRF, Local Reference Frame).
- the 3D local feature descriptor with a local reference coordinate system can use three axes to fully encode the spatial distribution and/or geometric information of the 3D local surface; it is not only rotation invariant but also greatly enhances the distinctiveness of the descriptor.
- the local reference coordinate system can be divided into a local reference coordinate system based on Covariance Analysis (CA) and a local reference coordinate system based on Geometric Attribute (GA).
- a 3D shape matching method based on a local reference coordinate system includes:
- a local reference coordinate system is established for the first spherical neighborhood of the feature point p, wherein the origin of the first spherical neighborhood coincides with the feature point p and has a support radius of size R, and the origin of the local reference coordinate system coincides with the feature point p and has orthonormal x-, y-, and z-axes;
- establishing a local reference coordinate system for the first spherical neighborhood of the feature point includes:
- the parameter W_i in the feature transformation is determined by at least one of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i, wherein the first parameter w1_i is associated with the distance from the 3D point p_i to the feature point p, the second parameter w2_i is associated with the distance from the 3D point p_i to the projection point p′_i, and the third parameter w3_i is associated with the average distance from the 3D point p_i to its one-ring neighborhood points.
- the cross product of the z-axis and the x-axis is determined as the y-axis of the local reference coordinate system.
- the determination of the z-axis of the local reference coordinate system implemented by the processor includes:
- n_j is the normal vector of the 3D point q_j.
- the determination of the calculation radius R_z implemented by the processor includes:
- the parameter W_i in the feature transformation is jointly determined by the product of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i.
- the 3D point cloud of the real scene may be acquired in real time, and the 3D point cloud of the target object may be pre-stored. That is to say, in the above method, the 3D local surface information of the 3D point cloud obtained by real-time measurement of the real scene can be matched against the 3D local surface information computed from the pre-stored 3D point cloud of the target object, so as to recognize, from the 3D point cloud of the real scene, the shape that matches the target object model.
- a 3D shape matching method based on a local reference coordinate system is proposed that is similar to the above method, except that the 3D point cloud of the target object is pre-stored and the 3D point cloud of the scene may also be stored after it is obtained. That is, in this method, the 3D local surface information computed from the pre-stored 3D point cloud of the target object can be matched against the 3D local surface information computed from the 3D point cloud of the scene, so as to recognize, from the 3D point cloud of the scene, the shape that matches the target object model.
- a 3D shape matching device based on a local reference coordinate system
- the acquisition device is configured to acquire a 3D point cloud of a real scene
- the memory stores a computer program, and when the processor executes the computer program, the operations of the method described in the first aspect of the present application, other than acquiring the 3D point cloud of the real scene, are implemented.
- a 3D shape matching device based on a local reference coordinate system includes a memory and a processor, wherein the memory stores a computer program, and when the processor executes the computer program, the method described above is implemented.
- the 3D shape matching method and device based on a local reference coordinate system proposed in this application make the established local reference coordinate system repeatable, robust, and noise-resistant by performing a feature transformation on the neighborhood points within the neighborhood of a feature point of the 3D point cloud, and, by configuring the calculation radius used to compute the z-axis of the local reference coordinate system to be adaptively adjusted according to the mesh resolution, make the established local reference coordinate system almost unaffected by mesh resolution. Therefore, even with occlusion, clutter, and noise interference, or with mesh simplification of the 3D point cloud of the scene or the target object, the proposed method and device can still achieve good 3D shape matching or recognition results.
- FIG. 1 is a schematic flowchart of a 3D shape matching method based on a local reference coordinate system according to an embodiment of the present application.
- Fig. 2 is a schematic flowchart of establishing a local reference coordinate system according to an embodiment of the present application.
- Fig. 3 is a schematic diagram of projecting a 3D point set P in a spherical neighborhood to a plane L orthogonal to the z-axis according to an embodiment of the present application.
- Fig. 4 is a schematic diagram of the one-ring neighborhood points of a 3D point according to an embodiment of the present application.
- Fig. 5 is a schematic diagram of a process of determining the z-axis of the local reference coordinate system according to an embodiment of the present application.
- Fig. 6 is a schematic flowchart of determining the calculation radius R_z of the z-axis according to an embodiment of the present application.
- Fig. 7 is a schematic structural diagram of a 3D shape matching device based on a local reference coordinate system according to an embodiment of the present application.
- a 3D point cloud records the surface of a scene or object in the form of points after the scene or object is scanned, and each point contains three-dimensional coordinates.
- 3D shape matching matches a scene or object surface represented by 3D point data with one or more other scenes or object surfaces represented by 3D point data, in order to further achieve 3D target recognition.
- the present application proposes a 3D shape matching method based on a local reference coordinate system, and the method may include:
- a local reference coordinate system is established for the first spherical neighborhood of the feature point p, wherein the origin of the first spherical neighborhood coincides with the feature point p and has a support radius of size R, and the origin of the local reference coordinate system coincides with the feature point p and has orthonormal x-, y-, and z-axes;
- the 3D local surface information in the first spherical neighborhood is matched with the 3D local surface information of the target object to perform 3D shape matching.
- the real scene can be any scene in real life, especially in industrial applications.
- This application does not specifically limit the application scene, as long as it is a scene that requires 3D shape matching or 3D recognition methods.
- the 3D point cloud may be acquired in real time, and the 3D point cloud of the target object may be pre-stored; that is, the target object may be a model used to match the same object in a real scene. In this embodiment, the 3D local surface information of the 3D point cloud obtained by real-time measurement of the real scene can be matched against the 3D local surface information computed from the pre-stored 3D point cloud of the target object, so as to recognize, from the 3D point cloud of the real scene, the shape that matches the target object model.
- the feature points are also called key points or interest points, that is, prominent shape feature points.
- fixed-scale (Fixed-Scale) and adaptive-scale (Adaptive-Scale) methods can be used to obtain the feature points in the 3D point cloud, or any other existing technique can be used to obtain the feature points; this application does not limit this.
- the 3D local feature descriptor may be any local feature descriptor established based on the local reference coordinate system of the present application, for example, any existing local feature descriptor based on the GA method; this application does not limit this.
- the method includes the basic technical features of the above embodiment, and on that basis, as shown in FIG. 2, in order to make the established local reference coordinate system repeatable and robust, establishing a local reference coordinate system for the first spherical neighborhood of the feature point may include:
- the parameter W_i in the feature transformation is determined by at least one of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i, wherein the first parameter w1_i is associated with the distance from the 3D point p_i to the feature point p, the second parameter w2_i is associated with the distance from the 3D point p_i to the projection point p′_i, and the third parameter w3_i is associated with the average distance from the 3D point p_i to its one-ring neighborhood points.
- the cross product of the z-axis and the x-axis is determined as the y-axis of the local reference coordinate system.
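The final step above, completing the frame once the z-axis and x-axis are known, can be sketched as a plain cross product (a minimal illustration, not the patent's full pipeline):

```python
def build_frame(z, x):
    """Complete the local reference frame: the y-axis is the cross product
    z x x, so (x, y, z) is right-handed and orthonormal whenever z and x
    are unit-length and mutually orthogonal."""
    y = (z[1] * x[2] - z[2] * x[1],
         z[2] * x[0] - z[0] * x[2],
         z[0] * x[1] - z[1] * x[0])
    return x, y, z

# For z = (0, 0, 1) and x = (1, 0, 0), this yields y = (0, 1, 0).
```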
- the x-axis of the local reference coordinate system should be the axis along which the point set P′ as a whole is more stable; therefore, the local reference coordinate axes obtained by the above method are more robust.
- a point distribution T with a larger-variance direction than the projection point set P′ is obtained by performing planar projection and feature transformation on the neighborhood points within the feature-point neighborhood of the 3D point cloud.
- the local reference coordinate system established by analyzing the point distribution T along its large-variance direction is repeatable, robust, and noise-resistant.
- using the first parameter w1_i, associated with the distance from the 3D point p_i to the feature point p, can reduce the effect of occlusion and clutter on the projection point set P′; using the second parameter w2_i, associated with the distance from the 3D point p_i to the projection point p′_i, can make the point distribution of the projection point set P′ more distinctive; and using the third parameter w3_i, associated with the average distance from the 3D point p_i to its one-ring neighborhood points, can reduce the influence of outliers on the projection point set P′.
- the first parameter w1_i and the distance from the 3D point p_i to the feature point p need to satisfy the following relationship:
- the second parameter w2_i and the distance from the 3D point p_i to the projection point p′_i need to satisfy the following relationship:
- the third parameter w3_i and the average distance from the 3D point p_i to its one-ring neighborhood points need to satisfy the following relationship:
- a certain 3D point p_i has r neighborhood points p_i1, p_i2, …, p_ir in its one-ring neighborhood.
- the number r of one-ring neighborhood points may be 5; that is, a certain 3D point p_i has five neighborhood points p_i1, p_i2, p_i3, p_i4, p_i5 in its one-ring neighborhood.
- the constant s may be 4.
- the parameter W_i in the feature transformation may be jointly determined by the product of any two of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i.
- the parameter W_i in the feature transformation may be jointly determined by the product of all three of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i.
- the more factors used to determine the point distribution T with a larger-variance direction, the better the technical effect, and the more robust the obtained local reference coordinate system.
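The feature transformation T_i = W_i(p′_i - p) + p can be sketched as follows. This is a simplified illustration: the z-axis is assumed to be (0, 0, 1) with the feature point p lying in the plane z = 0, and only the first weight w1_i = R - ‖p_i - p‖ is applied (the exact formulas for w2_i and w3_i are given by equations not reproduced in this text):

```python
import math

def feature_transform(points, p, R):
    """Project each neighborhood point p_i onto the plane z = 0 (assuming
    the z-axis is (0, 0, 1) and p lies in that plane), then scale its
    in-plane offset from p by the weight w1_i = R - ||p_i - p||, so that
    points near the feature point are emphasized: T_i = w1_i (p'_i - p) + p."""
    T = []
    for pi in points:
        w1 = R - math.dist(pi, p)        # closer points get larger weight
        proj = (pi[0], pi[1], 0.0)       # projection point p'_i on plane L (z = 0)
        Ti = tuple(w1 * (proj[k] - p[k]) + p[k] for k in range(3))
        T.append(Ti)
    return T
```

With p at the origin, R = 5, and a single neighborhood point (1, 0, 0.5), the transformed point stays in the plane z = 0 and its in-plane offset is stretched by w1 ≈ 3.88.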
- the method includes the basic technical features of the foregoing embodiment, and on that basis, as shown in FIG. 5, determining the z-axis of the local reference coordinate system may include:
- n_j is the normal vector of the 3D point q_j.
- the calculation radius R_z may be unequal to the support radius R, so that the z-axis of the local reference coordinate system is more robust to occlusion and clutter.
- the calculation radius R_z is one third of the support radius R.
- this application proposes using an adaptive scale factor to determine the calculation radius R_z, so that the obtained z-axis is robust not only to occlusion but also to different mesh samplings.
- the method includes the basic technical features of the foregoing embodiment, and on that basis, as shown in FIG. 6, determining the calculation radius R_z may include:
- the calculation radius used to compute the z-axis of the local reference coordinate system is configured to be adaptively adjusted according to the mesh resolution, so that the established local reference coordinate system is almost unaffected by mesh resolution.
- the constant C may be 3.
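The adaptive radius can be sketched as below. The patent's exact formula for the scale factor δ is an equation not reproduced in this text; the form used here is only a plausible guess chosen to match the stated facts (δ depends on scene.mr and model.mr, uses a constant C = 3, and the preferred R_z is R/3, which this form yields when the two resolutions are equal):

```python
def calc_radius(R, scene_mr, model_mr, C=3.0):
    """Compute R_z = delta * R with an ASSUMED scale-factor form
    delta = max(scene_mr / model_mr, 1) / C: when scene and model mesh
    resolutions match, delta = 1/C = 1/3; when the scene mesh is coarser
    than the model's, delta grows so the z-axis neighborhood still
    captures enough points."""
    delta = max(scene_mr / model_mr, 1.0) / C
    return delta * R
```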
- the method includes the basic technical features of the foregoing embodiment, and on that basis, before determining the calculation radius R_z for the real scene, the method may further include:
- Pre-determining at least two radius scale factors and pre-determining a local reference coordinate system and a 3D local feature descriptor corresponding to the at least two radius scale factors;
- the predetermined at least two radius scale factors and the predetermined 3D local feature descriptor are stored in different positions of the hash table.
- the method includes the basic technical features of the foregoing embodiment, and on the basis of the foregoing embodiment, the method may further include:
- using the radius scale factor δ determined from the average mesh resolution scene.mr of the real scene and the average mesh resolution model.mr of the target object, the at least two radius scale factors are looked up in the hash table, and the 3D local feature descriptor corresponding to the radius scale factor closest to δ in the hash table is taken as the final 3D local feature descriptor.
- the present application proposes a 3D shape matching method based on a local reference coordinate system, which may include:
- a local reference coordinate system is established for the first spherical neighborhood of the feature point p, wherein the origin of the first spherical neighborhood overlaps with the feature point p and has a support radius of R, and the local reference coordinate system
- the origin of and the feature point p overlap and have orthogonal normalized x-axis, y-axis, and z-axis;
- establishing a local reference coordinate system for the first spherical neighborhood of the feature point includes:
- the parameter W_i in the feature transformation is determined by at least one of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i, wherein the first parameter w1_i is associated with the distance from the 3D point p_i to the feature point p, the second parameter w2_i is associated with the distance from the 3D point p_i to the projection point p′_i, and the third parameter w3_i is associated with the average distance from the 3D point p_i to its one-ring neighborhood points.
- the cross product of the z-axis and the x-axis is determined as the y-axis of the local reference coordinate system.
- the embodiment of the second aspect of the present application has steps similar to the embodiment of the first aspect, except that the 3D point cloud of the target object is pre-stored and the 3D point cloud of the scene may also be stored after it is obtained. That is, in this method, the 3D local surface information computed from the pre-stored 3D point cloud of the target object can be matched against the 3D local surface information computed from the 3D point cloud of the scene, so as to recognize, from the 3D point cloud of the scene, the shape that matches the target object model.
- for other technical features, refer to the specific embodiments of the first aspect of the present application; they are not repeated here.
- a 3D shape matching device based on a local reference coordinate system may include a collection device, a memory, and a processor, wherein the collection device is configured to acquire a 3D point cloud of a real scene, the memory stores a computer program, and when the processor executes the computer program, the operations of the method described in the first aspect of the application, other than acquiring the 3D point cloud of the real scene, are implemented.
- the collection device can be a 3D scanning device, a laser scanning device, a structured-light collection device, or any other device that can obtain a 3D point cloud of a real scene.
- the memory can be any storage device with a software storage function.
- the processor is any processor that can execute a computer program and instruct a certain execution subject to perform related operations.
- the 3D point cloud data acquired by the acquisition device may be directly or indirectly stored in the memory, or may be accessed by the memory or the processor.
- the processor may directly or indirectly control the acquisition device to obtain the 3D point cloud data.
- a 3D shape matching device based on a local reference coordinate system which includes a memory and a processor, wherein the memory stores a computer program, and the processor executes The computer program implements the method embodiment described in the first aspect or the second aspect of the present application.
Abstract
A 3D shape matching method and device based on a local reference coordinate system. In the method, after a 3D point cloud and its feature points are acquired, the point set in a feature point's neighborhood is projected onto a plane, and a feature transformation is applied to the projected points using at least one of the following factors: the distance from a 3D point to the feature point, the distance from a 3D point to its projection point, and the average distance from a 3D point to its one-ring neighborhood points. This yields a point distribution whose variance along some direction is larger than that of the projected point set, and the local reference coordinate system is determined from the transformed point distribution. A 3D local feature descriptor established on this local reference coordinate system can encode 3D local surface information more robustly, thereby achieving better 3D shape matching results.
Description
This application belongs to the field of 3D shape matching, and in particular relates to a 3D shape matching method and device based on a local reference coordinate system.
Background of the Invention
With the continuous development of 3D scanning, modeling, and 3D reconstruction technology, 3D target recognition has become a research hotspot in computer vision and is widely applied in intelligent surveillance, e-commerce, robotics, biomedicine, and other fields. 3D shape matching, as the most important part of 3D target recognition, mainly includes methods based on global features and methods based on local features. Although global-feature-based 3D shape matching is fast, local-feature-based 3D shape matching is more robust to occlusion and clutter and makes subsequent pose estimation more accurate. In local-feature-based 3D shape matching, using 3D local feature descriptors to describe the local features of the 3D point cloud is the core of the whole method and the key factor that determines the accuracy of 3D shape matching or 3D target recognition. To build accurate and robust 3D local feature descriptors, the key is how to establish a repeatable and robust local reference coordinate system for the local features of the 3D point cloud.
To maintain distinctiveness and robustness to occlusion and clutter, many 3D local feature descriptors have been proposed and extensively studied. These descriptors can be classified into two categories, namely descriptors based on a local reference axis (LRA, Local Reference Axis) and descriptors based on a local reference frame (LRF, Local Reference Frame). A local reference frame consists of three orthogonal axes, while a local reference axis contains only a single oriented axis. A local reference axis that defines only a single oriented axis can provide information only in the radial and elevation directions, which leaves the 3D local feature descriptor lacking sufficient detail. In contrast, a 3D local feature descriptor with a local reference frame can use three axes to fully encode the spatial distribution and/or geometric information of the 3D local surface; it is not only rotation invariant but also greatly enhances the distinctiveness of the descriptor.
At present, local reference coordinate systems can be divided into those based on covariance analysis (CA, Covariance Analysis) and those based on geometric attributes (GA, Geometric Attribute). However, because some noise interference is inevitable when 3D point cloud data are acquired by a collection device, because multiple targets in complex scenes suffer occlusion and clutter, and because changes in the distance between the 3D sensor and the target change the point cloud resolution, most current covariance-analysis-based local reference coordinate systems suffer from very low repeatability and sign ambiguity, while geometric-attribute-based local reference coordinate systems are easily affected by severe noise and mesh resolution. Establishing a local reference coordinate system that is repeatable, robust, noise-resistant, and unaffected by mesh simplification therefore remains a difficult problem.
Summary of the Invention
To solve the above technical problems, this application proposes the following technical solutions.
According to the first aspect of the present application, a 3D shape matching method based on a local reference coordinate system is proposed, the method including:
acquiring a 3D point cloud of a real scene;
acquiring a feature point p of the 3D point cloud of the real scene;
establishing a local reference coordinate system for a first spherical neighborhood of the feature point p, wherein the origin of the first spherical neighborhood coincides with the feature point p and has a support radius of size R, and the origin of the local reference coordinate system coincides with the feature point p and has orthonormal x-, y-, and z-axes;
based on the local reference coordinate system, establishing a 3D local feature descriptor and encoding the spatial information within the first spherical neighborhood to obtain the 3D local surface information within the first spherical neighborhood; and
matching the 3D local surface information within the first spherical neighborhood with the 3D local surface information of a target object to perform 3D shape matching;
wherein establishing the local reference coordinate system for the first spherical neighborhood of the feature point includes:
determining the z-axis of the local reference coordinate system;
projecting the 3D point set P within the first spherical neighborhood onto a plane L orthogonal to the z-axis to obtain a projection point set P′, where P = {p_1, p_2, p_3, …, p_n}, P′ = {p′_1, p′_2, p′_3, …, p′_n}, n is the number of 3D points within the first spherical neighborhood, and the plane L is the plane at z = 0;
applying the feature transformation T_i = W_i(p′_i - p) + p to the projection point set P′ to obtain a point distribution T that has a larger-variance direction than the projection point set P′, where the parameter W_i in the feature transformation is determined by at least one of a first parameter w1_i, a second parameter w2_i, and a third parameter w3_i; the first parameter w1_i is associated with the distance from the 3D point p_i to the feature point p, the second parameter w2_i is associated with the distance from the 3D point p_i to the projection point p′_i, and the third parameter w3_i is associated with the average distance from the 3D point p_i to its one-ring neighborhood points, the one-ring neighborhood points being the neighborhood points adjacent to the 3D point p_i, and T = {T_i}, i = 1, 2, 3, …, n;
performing eigenvalue decomposition on the covariance matrix cov(T) of the point distribution T to determine the eigenvector v′ corresponding to the largest eigenvalue of cov(T), and performing sign disambiguation on the eigenvector v′ to determine the x-axis of the local reference coordinate system; and
determining the cross product of the z-axis and the x-axis as the y-axis of the local reference coordinate system.
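The x-axis construction above (the largest-eigenvalue eigenvector of cov(T), followed by sign disambiguation) can be sketched in plain Python. Power iteration stands in for a full eigendecomposition, and since the patent's sign-disambiguation formula is not reproduced in this text, the commonly used rule of flipping v′ so that the summed projection of T_i - p onto v′ is non-negative is assumed here:

```python
def x_axis_from_T(T, p, iters=200):
    """Estimate the eigenvector of cov(T) with the largest eigenvalue via
    power iteration, then fix its sign with an ASSUMED disambiguation rule
    (flip v' when sum_i (T_i - p) . v' is negative)."""
    n = len(T)
    mean = [sum(t[k] for t in T) / n for k in range(3)]
    # 3x3 covariance matrix of the point distribution T
    cov = [[sum((t[a] - mean[a]) * (t[b] - mean[b]) for t in T) / n
            for b in range(3)] for a in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):                      # power iteration
        w = [sum(cov[a][b] * v[b] for b in range(3)) for a in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    if sum((t[k] - p[k]) * v[k] for t in T for k in range(3)) < 0:
        v = [-x for x in v]                     # sign disambiguation
    return v
```

For a transformed point set spread mainly along the x direction, the returned unit vector aligns (up to sign) with (1, 0, 0).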
In an embodiment, determining the z-axis of the local reference coordinate system, as implemented by the processor, includes:
obtaining the 3D point set P_z within a second spherical neighborhood, wherein the origin of the second spherical neighborhood coincides with the feature point p and has a calculation radius of size R_z, where P_z = {q_1, q_2, q_3, …, q_m} and m is the number of 3D points within the second spherical neighborhood;
performing eigenvalue decomposition on the covariance matrix cov(P_z) of the 3D point set P_z to determine the eigenvector v corresponding to the smallest eigenvalue of cov(P_z), and performing sign disambiguation on the eigenvector v to determine the z-axis of the local reference coordinate system,
where n_j is the normal vector of the 3D point q_j.
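The sign-disambiguation step for the z-axis can be sketched as follows. The patent's exact rule is a formula not reproduced in this text; the standard rule assumed here flips the candidate axis v when it points against the majority of the neighborhood normals n_j:

```python
def disambiguate_z(v, normals):
    """Fix the sign of the candidate z-axis v (the smallest-eigenvalue
    eigenvector of cov(P_z)) using an ASSUMED standard rule: keep v if the
    summed dot product with the normals n_j is non-negative, else flip it."""
    s = sum(n[0] * v[0] + n[1] * v[1] + n[2] * v[2] for n in normals)
    return v if s >= 0 else [-x for x in v]
```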
In an embodiment, determining the calculation radius R_z, as implemented by the processor, includes:
obtaining the average mesh resolution scene.mr of the real scene and the average mesh resolution model.mr of the target object;
determining a radius scale factor δ from the average mesh resolution scene.mr of the real scene and the average mesh resolution model.mr of the target object, where the radius scale factor is determined as follows:
where C is a constant; and
determining the calculation radius R_z as R_z = δR.
In an embodiment, the parameter W_i in the feature transformation is jointly determined by the product of any two of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i.
In an embodiment, the parameter W_i in the feature transformation is jointly determined by the product of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i.
In the above method, the 3D point cloud of the real scene may be acquired in real time, and the 3D point cloud of the target object may be pre-stored. That is, the 3D local surface information of the 3D point cloud obtained by real-time measurement of the real scene can be matched against the 3D local surface information computed from the pre-stored 3D point cloud of the target object, so as to recognize, from the 3D point cloud of the real scene, the shape that matches the target object model.
According to the second aspect of the present application, a 3D shape matching method based on a local reference coordinate system is proposed. Its steps are similar to those of the above method, except that the 3D point cloud of the target object is pre-stored and the 3D point cloud of the scene may also be stored after it is obtained. That is, in this method, the 3D local surface information computed from the pre-stored 3D point cloud of the target object can be matched against the 3D local surface information computed from the 3D point cloud of the scene, so as to recognize, from the 3D point cloud of the scene, the shape that matches the target object model.
According to the third aspect of the present application, a 3D shape matching device based on a local reference coordinate system is proposed, which includes a collection device, a memory, and a processor, wherein the collection device is configured to acquire a 3D point cloud of a real scene, the memory stores a computer program, and when the processor executes the computer program, the operations of the method described in the first aspect of the present application, other than acquiring the 3D point cloud of the real scene, are implemented.
According to the fourth aspect of the present application, a 3D shape matching device based on a local reference coordinate system is proposed, which includes a memory and a processor, wherein the memory stores a computer program, and when the processor executes the computer program, the method described in the first or second aspect of the present application is implemented.
The above is only an overview of the present application and cannot serve as a basis for measuring the contribution of the present application over the prior art; for details, refer to the description of the specific embodiments of the present application.
The 3D shape matching method and device based on a local reference coordinate system proposed in this application make the established local reference coordinate system repeatable, robust, and noise-resistant by performing a feature transformation on the neighborhood points within the neighborhood of a feature point of the 3D point cloud, and, by configuring the calculation radius used to compute the z-axis of the local reference coordinate system to be adaptively adjusted according to the mesh resolution, make the established local reference coordinate system almost unaffected by mesh resolution. Therefore, even with occlusion, clutter, and noise interference, or with mesh simplification of the 3D point cloud of the scene or the target object, the proposed method and device can still achieve good 3D shape matching or recognition results.
FIG. 1 is a schematic flowchart of a 3D shape matching method based on a local reference coordinate system according to an embodiment of the present application.
FIG. 2 is a schematic flowchart of establishing a local reference coordinate system according to an embodiment of the present application.
FIG. 3 is a schematic diagram of projecting a 3D point set P within a spherical neighborhood onto a plane L orthogonal to the z-axis according to an embodiment of the present application.
FIG. 4 is a schematic diagram of the one-ring neighborhood points of a 3D point according to an embodiment of the present application.
FIG. 5 is a schematic flowchart of determining the z-axis of the local reference coordinate system according to an embodiment of the present application.
FIG. 6 is a schematic flowchart of determining the calculation radius R_z of the z-axis according to an embodiment of the present application.
FIG. 7 is a schematic structural diagram of a 3D shape matching device based on a local reference coordinate system according to an embodiment of the present application.
To make the objectives, technical solutions, and advantages of the present invention clearer, the present application is further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification of the present application are only for describing specific embodiments and are not intended to limit the present invention. The term "and/or" used in this specification includes any and all combinations of one or more of the associated listed items.
In addition, the terms "first", "second", and the like are used only for descriptive purposes and cannot be understood as indicating or implying the quantity or relative importance of technical features. Specific embodiments of the present application are described below, and the technical features involved in the different embodiments can be combined with each other as long as they do not conflict.
As is well known, a 3D point cloud records the surface of a scene or object in the form of points after the scene or object is scanned, and each point contains three-dimensional coordinates. 3D shape matching matches a scene or object surface represented by 3D point data with one or more other scenes or object surfaces represented by 3D point data, in order to further achieve 3D target recognition.
According to the first aspect of the present application, in an embodiment, as shown in FIG. 1, the present application proposes a 3D shape matching method based on a local reference coordinate system, and the method may include:
acquiring a 3D point cloud of a real scene;
acquiring a feature point p of the 3D point cloud of the real scene;
establishing a local reference coordinate system for a first spherical neighborhood of the feature point p, wherein the origin of the first spherical neighborhood coincides with the feature point p and has a support radius of size R, and the origin of the local reference coordinate system coincides with the feature point p and has orthonormal x-, y-, and z-axes;
based on the local reference coordinate system, establishing a 3D local feature descriptor and encoding the spatial information within the first spherical neighborhood to obtain the 3D local surface information within the first spherical neighborhood; and
matching the 3D local surface information within the first spherical neighborhood with the 3D local surface information of a target object to perform 3D shape matching.
In this embodiment, the real scene may be any scene in real life, especially in industrial applications; this application does not specifically limit the application scene, as long as the scene requires 3D shape matching or 3D recognition. In this embodiment, the 3D point cloud may be acquired in real time, and the 3D point cloud of the target object may be pre-stored; that is, the target object may be a model used to match the same object in a real scene. In other words, in this embodiment, the 3D local surface information of the 3D point cloud obtained by real-time measurement of the real scene can be matched against the 3D local surface information computed from the pre-stored 3D point cloud of the target object, so as to recognize, from the 3D point cloud of the real scene, the shape that matches the target object model.
In this embodiment, the feature points, also called key points or interest points, are prominent shape feature points. Fixed-scale and adaptive-scale methods can be used to obtain the feature points in the 3D point cloud, or any other existing technique can be used; this application does not limit this.
In this embodiment, the 3D local feature descriptor may be any local feature descriptor established based on the local reference coordinate system of the present application, for example, any existing local feature descriptor based on the GA method; this application does not limit this.
In an embodiment, the method includes the basic technical features of the above embodiment. On that basis, as shown in FIG. 2, in order to make the established local reference coordinate system repeatable and robust, establishing the local reference coordinate system for the first spherical neighborhood of the feature point may include:
determining the z-axis of the local reference coordinate system;
projecting the 3D point set P within the first spherical neighborhood onto a plane L orthogonal to the z-axis, as shown in FIG. 3, to obtain a projection point set P′, where P = {p_1, p_2, p_3, …, p_n}, P′ = {p′_1, p′_2, p′_3, …, p′_n}, n is the number of 3D points within the first spherical neighborhood, and the plane L is the plane at z = 0;
applying the feature transformation T_i = W_i(p′_i - p) + p to the projection point set P′ to obtain a point distribution T that has a larger-variance direction than the projection point set P′, where the parameter W_i in the feature transformation is determined by at least one of a first parameter w1_i, a second parameter w2_i, and a third parameter w3_i; the first parameter w1_i is associated with the distance from the 3D point p_i to the feature point p, the second parameter w2_i is associated with the distance from the 3D point p_i to the projection point p′_i, and the third parameter w3_i is associated with the average distance from the 3D point p_i to its one-ring neighborhood points, the one-ring neighborhood points being the neighborhood points adjacent to the 3D point p_i, and T = {T_i}, i = 1, 2, 3, …, n;
performing eigenvalue decomposition on the covariance matrix cov(T) of the point distribution T to determine the eigenvector v′ corresponding to the largest eigenvalue of cov(T), and performing sign disambiguation on the eigenvector v′ to determine the x-axis of the local reference coordinate system; and
determining the cross product of the z-axis and the x-axis as the y-axis of the local reference coordinate system.
It is worth noting that the larger the variance of the point set P′ in a certain direction, the more stable the point set P′ as a whole is in that direction. The x-axis of the local reference coordinate system should be the axis along which the point set P′ as a whole is more stable, so the local reference coordinate axes obtained by the above method are more robust.
In this embodiment, a point distribution T with a larger-variance direction than the projection point set P′ is obtained by performing planar projection and feature transformation on the neighborhood points within the feature-point neighborhood of the 3D point cloud, and the local reference coordinate system established by analyzing the point distribution T along its large-variance direction is repeatable, robust, and noise-resistant.
In this embodiment, using the first parameter w1_i, associated with the distance from the 3D point p_i to the feature point p, can reduce the effect of occlusion and clutter on the projection point set P′; using the second parameter w2_i, associated with the distance from the 3D point p_i to the projection point p′_i, can make the point distribution of the projection point set P′ more distinctive; and using the third parameter w3_i, associated with the average distance from the 3D point p_i to its one-ring neighborhood points, can reduce the influence of outliers on the projection point set P′.
As a preferred embodiment, the first parameter w1_i and the distance from the 3D point p_i to the feature point p need to satisfy the relationship w1_i = R - ‖p_i - p‖.
As a preferred embodiment, the second parameter w2_i and the distance from the 3D point p_i to the projection point p′_i need to satisfy a relationship based on h_i = ‖p_i - p′_i‖ = |(p_i - p)·z|, where H = {h_i} and σ denotes the standard deviation of the Gaussian function used.
As a preferred embodiment, the standard deviation σ may be σ = max(H)/9.
In the relationship for the third parameter w3_i, r is the number of one-ring neighborhood points and s is a constant.
As an example, a certain 3D point p_i has r neighborhood points p_i1, p_i2, …, p_ir in its one-ring neighborhood. As shown in FIG. 4, as a preferred embodiment, the number r of one-ring neighborhood points may be 5; that is, a certain 3D point p_i has five neighborhood points p_i1, p_i2, p_i3, p_i4, p_i5 in its one-ring neighborhood.
As a preferred embodiment, the constant s may be 4.
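The three weighting parameters can be sketched as below. Only w1_i = R - ‖p_i - p‖ is stated explicitly in this text; the exact formulas for w2_i and w3_i are equations not reproduced here, so this sketch assumes a Gaussian of h_i for w2_i (the text mentions a Gaussian with standard deviation σ = max(H)/9) and an illustrative s-scaled Gaussian of the average one-ring distance for w3_i. Both assumed forms are hypothetical:

```python
import math

def weights(p_i, p, z, ring_pts, R, sigma, s=4.0):
    """Compute (w1_i, w2_i, w3_i) for one neighborhood point p_i.
    w1_i is the stated relation; w2_i and w3_i use ASSUMED forms."""
    w1 = R - math.dist(p_i, p)                                 # w1_i = R - ||p_i - p||
    h = abs(sum((a - b) * c for a, b, c in zip(p_i, p, z)))    # h_i = |(p_i - p) . z|
    w2 = math.exp(-h * h / (2.0 * sigma * sigma))              # assumed Gaussian form
    dbar = sum(math.dist(p_i, q) for q in ring_pts) / len(ring_pts)
    w3 = math.exp(-(dbar / s) ** 2)                            # assumed form; s = 4 by default
    return w1, w2, w3
```

A point lying in the projection plane (h_i = 0) gets w2_i = 1, and a point with a large average one-ring distance (a likely outlier) gets a small w3_i.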
As a preferred embodiment, the parameter W_i in the feature transformation may be jointly determined by the product of any two of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i. For example, the point distribution T with a larger-variance direction may take any of the following forms: T_i = w1_i w2_i (p′_i - p) + p, T_i = w1_i w3_i (p′_i - p) + p, or T_i = w2_i w3_i (p′_i - p) + p.
As a preferred embodiment, the parameter W_i in the feature transformation may be jointly determined by the product of the first parameter w1_i, the second parameter w2_i, and the third parameter w3_i. For example, the point distribution T with a larger-variance direction may be T_i = w1_i w2_i w3_i (p′_i - p) + p.
In the above preferred embodiments, the more factors used to determine the point distribution T with a larger-variance direction, the better the technical effect and the more robust the obtained local reference coordinate system.
In an embodiment, the method includes the basic technical features of the above embodiment. On that basis, as shown in FIG. 5, determining the z-axis of the local reference coordinate system may include:
obtaining the 3D point set P_z within a second spherical neighborhood, wherein the origin of the second spherical neighborhood coincides with the feature point p and has a calculation radius of size R_z, where P_z = {q_1, q_2, q_3, …, q_m} and m is the number of 3D points within the second spherical neighborhood;
performing eigenvalue decomposition on the covariance matrix cov(P_z) of the 3D point set P_z to determine the eigenvector v corresponding to the smallest eigenvalue of cov(P_z), and performing sign disambiguation on the eigenvector v to determine the z-axis of the local reference coordinate system,
where n_j is the normal vector of the 3D point q_j.
As a preferred embodiment, the calculation radius R_z may be unequal to the support radius R, so that the z-axis of the local reference coordinate system is more robust to occlusion and clutter.
As a preferred embodiment, the calculation radius R_z is one third of the support radius R.
When a 3D point cloud is actually acquired, different 3D mesh resolutions lead to point clouds of different densities: generally, the larger the mesh resolution, the larger the point cloud, and the more 3D points represent the scene or object surface within a space of the same size. Moreover, when the mesh resolution of the object model is lower than that of the scene, fewer neighborhood points are obtained from the real scene within the same radius than from the model, and when the points are very sparse, using a small neighborhood radius to compute the z-axis of the scene's local reference coordinate system greatly degrades the performance of 3D shape matching. Therefore, this application proposes using an adaptive scale factor to determine the calculation radius R_z, so that the obtained z-axis is robust not only to occlusion but also to different mesh samplings. In an embodiment, the method includes the basic technical features of the above embodiment, and on that basis, as shown in FIG. 6, determining the calculation radius R_z may include:
obtaining the average mesh resolution scene.mr of the real scene and the average mesh resolution model.mr of the target object;
determining a radius scale factor δ from the average mesh resolution scene.mr of the real scene and the average mesh resolution model.mr of the target object, where the radius scale factor is determined as follows:
where C is a constant; and
determining the calculation radius R_z as R_z = δR.
In this embodiment, by configuring the calculation radius used to compute the z-axis of the local reference coordinate system to be adaptively adjusted according to the mesh resolution, the established local reference coordinate system can be made almost unaffected by mesh resolution.
As a preferred embodiment, the constant C may be 3.
In an embodiment, the method includes the basic technical features of the above embodiment, and on that basis, before determining the calculation radius R_z for the real scene, the method may further include:
pre-determining at least two radius scale factors, and pre-determining the local reference coordinate systems and 3D local feature descriptors corresponding to the at least two radius scale factors;
storing the pre-determined at least two radius scale factors and the pre-determined 3D local feature descriptors at different positions of a hash table.
In an embodiment, the method includes the basic technical features of the above embodiment, and on that basis the method may further include:
using the radius scale factor δ determined from the average mesh resolution scene.mr of the real scene and the average mesh resolution model.mr of the target object to look up the at least two radius scale factors in the hash table, and taking the 3D local feature descriptor corresponding to the radius scale factor in the hash table closest to δ as the final 3D local feature descriptor.
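The nearest-scale-factor lookup can be sketched with an ordinary dictionary standing in for the hash table; the descriptor values are placeholder strings for illustration:

```python
def nearest_descriptor(table, delta):
    """table maps pre-determined radius scale factors to their pre-computed
    3D local feature descriptors; return the descriptor whose scale factor
    is closest to the query factor delta."""
    key = min(table, key=lambda k: abs(k - delta))
    return table[key]

# Example with three pre-determined scale factors (placeholder descriptors):
table = {0.25: "desc_025", 1 / 3: "desc_033", 0.5: "desc_050"}
# nearest_descriptor(table, 0.30) -> "desc_033"
```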
根据本申请的第二方面,在一实施例中,本申请提出了一种基于局部参考坐标系的3D形状匹配方法,其可以包括:
获取目标物体的3D点云;
获取所述目标物体的3D点云的特征点p;
为所述特征点p的第一球形邻域建立局部参考坐标系,其中所述第一球形邻域的原点与所述特征点p重合并具有大小为R的支撑半径,所述局部参考坐标系的原点与所述特征点p重合并具有正交归一化的x轴、y轴和z轴;
基于所述局部参考坐标系,建立3D局部特征描述符并对所述第一球形邻域内的空间信息进行编码,以获得所述第一球形邻域内的3D局部表面信息;以及
将所述第一球形邻域内的3D局部表面信息与场景的3D局部表面信息进行匹配,以进行3D形状匹配;
其中,所述为所述特征点的第一球形邻域建立局部参考坐标系包括:
确定所述局部参考坐标系的z轴;
将所述第一球形邻域内的3D点集P投影至与所述z轴正交的平面L,得到投影点集P′,其中P={p
1,p
2,p
3,……,p
n},P′={p′
1,p′
2,p′
3,……,p′
n},n为所述第一球形邻域内的3D点的数量,其中所述平面L为z=0处的平面;
applying the feature transform T_i = W_i(p′_i − p) + p to the projected point set P′, to obtain a point distribution T having a direction of larger variance than P′, wherein the parameter W_i of the feature transform is determined by at least one of a first parameter w1_i, a second parameter w2_i and a third parameter w3_i; the first parameter w1_i is associated with the distance from the 3D point p_i to the feature point p, the second parameter w2_i is associated with the distance from the 3D point p_i to the projected point p′_i, and the third parameter w3_i is associated with the average distance from the 3D point p_i to its one-ring neighborhood points, the one-ring neighborhood points being the neighborhood points adjacent to the 3D point p_i; and T = {T_i}, i = 1, 2, 3, …, n;
performing eigenvalue decomposition on the covariance matrix cov(T) of the point distribution T to determine the eigenvector v′ corresponding to the largest eigenvalue of cov(T), and performing sign disambiguation on the eigenvector v′ corresponding to the largest eigenvalue to determine the x-axis of the local reference frame; and
determining the cross product of the z-axis and the x-axis as the y-axis of the local reference frame.
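The x-axis construction above might look like the following sketch, under the assumption that the feature point p lies on the projection plane L. The weight W_i here uses the product w1_i · w2_i, which is only one of the combinations the method allows, and all names are illustrative:

```python
import numpy as np

def lrf_x_axis(points, p, z_axis, support_radius):
    """Sketch: x-axis of the local reference frame at feature point p.

    Projects the support neighborhood onto the plane through p that is
    orthogonal to z_axis, applies T_i = W_i (p'_i - p) + p, and takes
    the sign-disambiguated largest-eigenvalue eigenvector of cov(T).
    Illustrative, not the patented formula.
    """
    mask = np.linalg.norm(points - p, axis=1) <= support_radius
    nbr = points[mask]
    offsets = nbr - p
    # Projection p'_i onto the plane L (orthogonal to the z-axis, through p).
    proj = nbr - np.outer(offsets @ z_axis, z_axis)
    w1 = support_radius - np.linalg.norm(offsets, axis=1)  # distance to p
    w2 = np.linalg.norm(nbr - proj, axis=1)                # distance to p'_i
    w = w1 * w2                     # W_i: one admissible parameter choice
    t = w[:, None] * (proj - p) + p                        # T_i
    # Largest-eigenvalue eigenvector of cov(T) gives the x direction.
    cov = np.cov(t.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, -1]
    # Sign disambiguation: orient v toward the majority of the projections.
    if np.sum((proj - p) @ v) < 0:
        v = -v
    v -= (v @ z_axis) * z_axis      # enforce exact orthogonality to z
    return v / np.linalg.norm(v)
```

Because the transformed points T_i all lie in the plane L, the largest-variance direction of cov(T) is automatically orthogonal to the z-axis; the explicit re-orthogonalization is only a numerical safeguard.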
The embodiment of the second aspect of the present application is similar in its steps to that of the first aspect, except that the 3D point cloud of the target object is pre-stored, and the 3D point cloud of the scene may also be pre-stored after acquisition. That is, in this method, the 3D local surface information computed from the pre-stored 3D point cloud of the target object can be matched against the 3D local surface information computed from the 3D point cloud of the scene, so as to recognize, in the 3D point cloud of the scene, shapes that match the target object model. For the other technical features of the second aspect, reference may be made to the specific embodiments of the first aspect, which are not repeated here.
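Matching pre-stored model descriptors against scene descriptors can be sketched as a nearest-neighbour search in descriptor space. This is a simplification: a real pipeline would follow it with thresholding, correspondence grouping, and pose verification, and the names are illustrative:

```python
import numpy as np

def match_descriptors(model_desc, scene_desc):
    """Sketch: match 3D local feature descriptors of a pre-stored model
    against descriptors computed from the scene.

    model_desc, scene_desc: (N, D) and (M, D) descriptor arrays.
    Returns (model_index, scene_index) pairs for the nearest scene
    descriptor of each model descriptor.
    """
    matches = []
    for i, d in enumerate(model_desc):
        # Euclidean nearest neighbour in descriptor space.
        dists = np.linalg.norm(scene_desc - d, axis=1)
        matches.append((i, int(np.argmin(dists))))
    return matches
```

Because the descriptors are built in a repeatable local reference frame, similar local surfaces on the model and in the scene yield nearby descriptors, so nearest-neighbour pairs are candidate shape correspondences.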
According to a third aspect of the present application, in an embodiment, as shown in Fig. 7, a 3D shape matching device based on a local reference frame is proposed, which may include an acquisition device, a memory and a processor, wherein the acquisition device is configured to obtain a 3D point cloud of a real scene, the memory stores a computer program, and the processor, when executing the computer program, performs the operations of the method embodiments of the first aspect of the present application other than obtaining the 3D point cloud of the real scene. In this embodiment, the acquisition device may be a 3D scanning device, a laser scanning device, a structured-light acquisition device, or any other device capable of obtaining a 3D point cloud of a real scene; the memory may be any storage device with a software storage function; and the processor may be any processor capable of executing a computer program and instructing an execution subject to perform the relevant operations. In an embodiment, the 3D point cloud data obtained by the acquisition device may be stored directly or indirectly in the memory, or may be accessed by the memory or the processor. In an embodiment, the processor may directly or indirectly control the acquisition device to obtain the 3D point cloud data. For the other technical features of the third aspect, reference may be made to the specific embodiments of the first aspect, which are not repeated here.
According to a fourth aspect of the present application, in an embodiment, a 3D shape matching device based on a local reference frame is proposed, which includes a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the method embodiments of the first or second aspect of the present application. For the other technical features of the fourth aspect, reference may be made to the specific embodiments of the first, second or third aspect, which are not repeated here.
The specific embodiments of the present application described above do not limit its scope of protection. Any modification, equivalent replacement or improvement made within the principles of the present application shall fall within its scope of protection.
Claims (20)
- A 3D shape matching method based on a local reference frame, characterized in that the method comprises: obtaining a 3D point cloud of a real scene; obtaining a feature point p of the 3D point cloud of the real scene; establishing a local reference frame for a first spherical neighborhood of the feature point p, wherein the origin of the first spherical neighborhood coincides with the feature point p and has a support radius of size R, and the origin of the local reference frame coincides with the feature point p and has orthonormal x-, y- and z-axes; based on the local reference frame, establishing a 3D local feature descriptor and encoding the spatial information within the first spherical neighborhood, to obtain 3D local surface information of the first spherical neighborhood; and matching the 3D local surface information of the first spherical neighborhood against 3D local surface information of a target object, to perform 3D shape matching; wherein establishing the local reference frame for the first spherical neighborhood of the feature point comprises: determining the z-axis of the local reference frame; projecting a 3D point set P within the first spherical neighborhood onto a plane L orthogonal to the z-axis to obtain a projected point set P′, wherein P = {p_1, p_2, p_3, …, p_n}, P′ = {p′_1, p′_2, p′_3, …, p′_n}, n is the number of 3D points within the first spherical neighborhood, and the plane L is the plane at z = 0; applying a feature transform T_i = W_i(p′_i − p) + p to the projected point set P′ to obtain a point distribution T having a direction of larger variance than P′, wherein the parameter W_i of the feature transform is determined by at least one of a first parameter w1_i, a second parameter w2_i and a third parameter w3_i, the first parameter w1_i being associated with the distance from the 3D point p_i to the feature point p, the second parameter w2_i being associated with the distance from the 3D point p_i to the projected point p′_i, and the third parameter w3_i being associated with the average distance from the 3D point p_i to its one-ring neighborhood points, the one-ring neighborhood points being the neighborhood points adjacent to the 3D point p_i, wherein T = {T_i}, i = 1, 2, 3, …, n; performing eigenvalue decomposition on the covariance matrix cov(T) of the point distribution T to determine the eigenvector v′ corresponding to the largest eigenvalue of cov(T), and performing sign disambiguation on the eigenvector v′ corresponding to the largest eigenvalue to determine the x-axis of the local reference frame; and determining the cross product of the z-axis and the x-axis as the y-axis of the local reference frame.
- The 3D shape matching method according to claim 1, characterized in that determining the z-axis of the local reference frame comprises: obtaining a 3D point set P_z within a second spherical neighborhood, wherein the origin of the second spherical neighborhood coincides with the feature point p and has a calculation radius of size R_z, wherein P_z = {q_1, q_2, q_3, …, q_m} and m is the number of 3D points within the second spherical neighborhood; performing eigenvalue decomposition on the covariance matrix cov(P_z) of the 3D point set P_z to determine the eigenvector v corresponding to the smallest eigenvalue of cov(P_z); and performing sign disambiguation on the eigenvector v corresponding to the smallest eigenvalue, according to the normal vectors n_j of the 3D points q_j, to determine the z-axis of the local reference frame.
- The 3D shape matching method according to claim 2, characterized in that the calculation radius R_z is not equal to the support radius R.
- The 3D shape matching method according to claim 4, characterized in that, before determining the calculation radius R_z for the real scene, the method further comprises: pre-determining at least two radius scale factors, together with the local reference frames and 3D local feature descriptors corresponding to the at least two radius scale factors; and storing the pre-determined at least two radius scale factors and the pre-determined 3D local feature descriptors at different positions of a hash table.
- The 3D shape matching method according to claim 5, characterized in that the method further comprises: looking up the at least two radius scale factors in the hash table using the radius scale factor δ determined from the average mesh resolution scene.mr of the real scene and the average mesh resolution model.mr of the target object, and taking the 3D local feature descriptor corresponding to the radius scale factor in the hash table closest to δ as the final 3D local feature descriptor.
- The 3D shape matching method according to claim 1, characterized in that the parameter W_i of the feature transform is jointly determined by the product of any two of the first parameter w1_i, the second parameter w2_i and the third parameter w3_i.
- The 3D shape matching method according to claim 1, characterized in that the parameter W_i of the feature transform is jointly determined by the product of the first parameter w1_i, the second parameter w2_i and the third parameter w3_i.
- The 3D shape matching method according to claim 1, characterized in that the first parameter w1_i and the distance from the 3D point p_i to the feature point p satisfy the relation w1_i = R − ‖p_i − p‖.
- A 3D shape matching method based on a local reference frame, characterized in that the method comprises: obtaining a 3D point cloud of a target object; obtaining a feature point p of the 3D point cloud of the target object; establishing a local reference frame for a first spherical neighborhood of the feature point p, wherein the origin of the first spherical neighborhood coincides with the feature point p and has a support radius of size R, and the origin of the local reference frame coincides with the feature point p and has orthonormal x-, y- and z-axes; based on the local reference frame, establishing a 3D local feature descriptor and encoding the spatial information within the first spherical neighborhood, to obtain 3D local surface information of the first spherical neighborhood; and matching the 3D local surface information of the first spherical neighborhood against 3D local surface information of a scene, to perform 3D shape matching; wherein establishing the local reference frame for the first spherical neighborhood of the feature point comprises: determining the z-axis of the local reference frame; projecting a 3D point set P within the first spherical neighborhood onto a plane L orthogonal to the z-axis to obtain a projected point set P′, wherein P = {p_1, p_2, p_3, …, p_n}, P′ = {p′_1, p′_2, p′_3, …, p′_n}, n is the number of 3D points within the first spherical neighborhood, and the plane L is the plane at z = 0; applying a feature transform T_i = W_i(p′_i − p) + p to the projected point set P′ to obtain a point distribution T having a direction of larger variance than P′, wherein the parameter W_i of the feature transform is determined by at least one of a first parameter w1_i, a second parameter w2_i and a third parameter w3_i, the first parameter w1_i being associated with the distance from the 3D point p_i to the feature point p, the second parameter w2_i being associated with the distance from the 3D point p_i to the projected point p′_i, and the third parameter w3_i being associated with the average distance from the 3D point p_i to its one-ring neighborhood points, the one-ring neighborhood points being the neighborhood points adjacent to the 3D point p_i, wherein T = {T_i}, i = 1, 2, 3, …, n; performing eigenvalue decomposition on the covariance matrix cov(T) of the point distribution T to determine the eigenvector v′ corresponding to the largest eigenvalue of cov(T), and performing sign disambiguation on the eigenvector v′ corresponding to the largest eigenvalue to determine the x-axis of the local reference frame; and determining the cross product of the z-axis and the x-axis as the y-axis of the local reference frame.
- The 3D shape matching method according to claim 12, characterized in that determining the z-axis of the local reference frame comprises: obtaining a 3D point set P_z within a second spherical neighborhood, wherein the origin of the second spherical neighborhood coincides with the feature point p and has a calculation radius of size R_z, wherein P_z = {q_1, q_2, q_3, …, q_m} and m is the number of 3D points within the second spherical neighborhood; performing eigenvalue decomposition on the covariance matrix cov(P_z) of the 3D point set P_z to determine the eigenvector v corresponding to the smallest eigenvalue of cov(P_z); and performing sign disambiguation on the eigenvector v corresponding to the smallest eigenvalue, according to the normal vectors n_j of the 3D points q_j, to determine the z-axis of the local reference frame.
- The 3D shape matching method according to claim 12, characterized in that the parameter W_i of the feature transform is jointly determined by the product of any two of the first parameter w1_i, the second parameter w2_i and the third parameter w3_i.
- The 3D shape matching method according to claim 12, characterized in that the parameter W_i of the feature transform is jointly determined by the product of the first parameter w1_i, the second parameter w2_i and the third parameter w3_i.
- A 3D shape matching device based on a local reference frame, characterized in that the device comprises an acquisition device, a memory and a processor, wherein the acquisition device is configured to obtain a 3D point cloud of a real scene, the memory stores a computer program, and the processor, when executing the computer program, performs the following operations: obtaining a feature point p of the 3D point cloud of the real scene; establishing a local reference frame for a first spherical neighborhood of the feature point p, wherein the origin of the first spherical neighborhood coincides with the feature point p and has a support radius of size R, and the origin of the local reference frame coincides with the feature point p and has orthonormal x-, y- and z-axes; based on the local reference frame, establishing a 3D local feature descriptor and encoding the spatial information within the first spherical neighborhood, to obtain 3D local surface information of the first spherical neighborhood; and matching the 3D local surface information of the first spherical neighborhood against 3D local surface information of a target object, to perform 3D shape matching; wherein establishing the local reference frame for the first spherical neighborhood of the feature point comprises: determining the z-axis of the local reference frame; projecting a 3D point set P within the first spherical neighborhood onto a plane L orthogonal to the z-axis to obtain a projected point set P′, wherein P = {p_1, p_2, p_3, …, p_n}, P′ = {p′_1, p′_2, p′_3, …, p′_n}, n is the number of 3D points within the first spherical neighborhood, and the plane L is the plane at z = 0; applying a feature transform T_i = W_i(p′_i − p) + p to the projected point set P′ to obtain a point distribution T having a direction of larger variance than P′, wherein the parameter W_i of the feature transform is determined by at least one of a first parameter w1_i, a second parameter w2_i and a third parameter w3_i, the first parameter w1_i being associated with the distance from the 3D point p_i to the feature point p, the second parameter w2_i being associated with the distance from the 3D point p_i to the projected point p′_i, and the third parameter w3_i being associated with the average distance from the 3D point p_i to its one-ring neighborhood points, the one-ring neighborhood points being the neighborhood points adjacent to the 3D point p_i, wherein T = {T_i}, i = 1, 2, 3, …, n; performing eigenvalue decomposition on the covariance matrix cov(T) of the point distribution T to determine the eigenvector v′ corresponding to the largest eigenvalue of cov(T), and performing sign disambiguation on the eigenvector v′ corresponding to the largest eigenvalue to determine the x-axis of the local reference frame; and determining the cross product of the z-axis and the x-axis as the y-axis of the local reference frame.
- The 3D shape matching device according to claim 17, characterized in that determining the z-axis of the local reference frame, as performed by the processor, comprises: obtaining a 3D point set P_z within a second spherical neighborhood, wherein the origin of the second spherical neighborhood coincides with the feature point p and has a calculation radius of size R_z, wherein P_z = {q_1, q_2, q_3, …, q_m} and m is the number of 3D points within the second spherical neighborhood; performing eigenvalue decomposition on the covariance matrix cov(P_z) of the 3D point set P_z to determine the eigenvector v corresponding to the smallest eigenvalue of cov(P_z); and performing sign disambiguation on the eigenvector v corresponding to the smallest eigenvalue, according to the normal vectors n_j of the 3D points q_j, to determine the z-axis of the local reference frame.
- The 3D shape matching device according to claim 17, characterized in that the parameter W_i of the feature transform is jointly determined by the product of the first parameter w1_i, the second parameter w2_i and the third parameter w3_i.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980002893.4A CN113168729B (zh) | 2019-12-09 | 2019-12-09 | Method and device for 3D shape matching based on a local reference frame |
PCT/CN2019/124037 WO2021114026A1 (zh) | 2019-12-09 | 2019-12-09 | Method and device for 3D shape matching based on a local reference frame |
US17/042,417 US11625454B2 (en) | 2019-12-09 | 2019-12-09 | Method and device for 3D shape matching based on local reference frame |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/124037 WO2021114026A1 (zh) | 2019-12-09 | 2019-12-09 | Method and device for 3D shape matching based on a local reference frame |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021114026A1 true WO2021114026A1 (zh) | 2021-06-17 |
Family
ID=76329287
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/124037 WO2021114026A1 (zh) | 2019-12-09 | 2019-12-09 | 一种基于局部参考坐标系的3d形状匹配方法及装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11625454B2 (zh) |
CN (1) | CN113168729B (zh) |
WO (1) | WO2021114026A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113723917B (zh) * | 2021-08-24 | 2024-03-29 | 中国人民解放军32382部队 | Method and equipment for constructing associations between instrument management standards and instrument technical standards |
US11810249B2 (en) * | 2022-01-03 | 2023-11-07 | Signetron Inc. | 3D point cloud processing |
CN115984803B (zh) * | 2023-03-10 | 2023-12-12 | 安徽蔚来智驾科技有限公司 | Data processing method, device, driving device and medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090092486A (ko) * | 2008-02-27 | 2009-09-01 | 성균관대학교산학협력단 | Self-modeling method and device for a 3D rotationally symmetric object |
CN105160344A (zh) * | 2015-06-18 | 2015-12-16 | 北京大学深圳研究生院 | Local feature extraction method and device for 3D point clouds |
CN106780459A (zh) * | 2016-12-12 | 2017-05-31 | 华中科技大学 | Automatic registration method for 3D point cloud data |
CN107274423A (zh) * | 2017-05-26 | 2017-10-20 | 中北大学 | Point cloud feature curve extraction method based on covariance matrix and projection mapping |
CN109215129A (zh) * | 2017-07-05 | 2019-01-15 | 中国科学院沈阳自动化研究所 | Local feature description method based on 3D point clouds |
CN110211163A (zh) * | 2019-05-29 | 2019-09-06 | 西安财经学院 | Point cloud matching algorithm based on EPFH features |
CN110335297A (zh) * | 2019-06-21 | 2019-10-15 | 华中科技大学 | Point cloud registration method based on feature extraction |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101794461B (zh) * | 2010-03-09 | 2011-12-14 | 深圳大学 | 3D modeling method and system |
US8274508B2 (en) * | 2011-02-14 | 2012-09-25 | Mitsubishi Electric Research Laboratories, Inc. | Method for representing objects with concentric ring signature descriptors for detecting 3D objects in range images |
CN103268631B (zh) * | 2013-05-23 | 2015-09-30 | 中国科学院深圳先进技术研究院 | Point cloud skeleton extraction method and device |
US10169676B2 (en) * | 2016-02-24 | 2019-01-01 | Vangogh Imaging, Inc. | Shape-based registration for non-rigid objects with large holes |
US9922443B2 (en) * | 2016-04-29 | 2018-03-20 | Adobe Systems Incorporated | Texturing a three-dimensional scanned model with localized patch colors |
CN106096503A (zh) * | 2016-05-30 | 2016-11-09 | 东南大学 | 3D face recognition method based on key points and local features |
US10186049B1 (en) * | 2017-03-06 | 2019-01-22 | URC Ventures, Inc. | Determining changes in object structure over time using mobile device images |
EP3457357B1 (en) * | 2017-09-13 | 2021-07-07 | Tata Consultancy Services Limited | Methods and systems for surface fitting based change detection in 3d point-cloud |
- 2019-12-09 US US17/042,417 patent/US11625454B2/en active Active
- 2019-12-09 WO PCT/CN2019/124037 patent/WO2021114026A1/zh active Application Filing
- 2019-12-09 CN CN201980002893.4A patent/CN113168729B/zh active Active
Also Published As
Publication number | Publication date |
---|---|
US11625454B2 (en) | 2023-04-11 |
CN113168729A (zh) | 2021-07-23 |
US20220343105A1 (en) | 2022-10-27 |
CN113168729B (zh) | 2023-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6681729B2 (ja) | Method for determining the 3D pose of an object and the 3D locations of the object's landmark points, and system for the same | |
WO2019042232A1 (zh) | A fast and robust multimodal remote sensing image matching method and system | |
WO2021114026A1 (zh) | Method and device for 3D shape matching based on a local reference frame | |
CN108052942B (zh) | A visual image recognition method for aircraft flight attitude | |
Campbell et al. | Globally-optimal inlier set maximisation for camera pose and correspondence estimation | |
Tang et al. | Camera self-calibration from tracking of moving persons | |
Hu et al. | An automatic 3D registration method for rock mass point clouds based on plane detection and polygon matching | |
WO2021082380A1 (zh) | Lidar-based pallet recognition method, system and electronic device | |
Cheng et al. | An automatic and robust point cloud registration framework based on view-invariant local feature descriptors and transformation consistency verification | |
Pan et al. | Robust partial-to-partial point cloud registration in a full range | |
Zhou et al. | A new algorithm for the establishing data association between a camera and a 2-D LIDAR | |
Zhong et al. | Triple screening point cloud registration method based on image and geometric features | |
CN115239899A (zh) | Pose graph generation method, and high-definition map generation method and device | |
CN113033270B (zh) | Method, device and storage medium for describing the local surface of a 3D object using an auxiliary axis | |
Lin et al. | 6D object pose estimation with pairwise compatible geometric features | |
Zhou et al. | Neighbor feature variance (NFV) based feature point selection method for three dimensional (3D) registration of space target | |
Jiao et al. | A smart post-rectification algorithm based on an ANN considering reflectivity and distance for indoor scenario reconstruction | |
Wang et al. | Support-plane estimation for floor detection to understand regions' spatial organization | |
Recker et al. | Depth data assisted structure-from-motion parameter optimization and feature track correction | |
WO2021114027A1 (zh) | 3D shape matching method and device based on SGH description of 3D local features | |
Kaushik et al. | Polygon-based 3D scan registration with dual-robots in structured indoor environments | |
Fu | 3D point cloud data splicing algorithm based on feature corner model | |
Liu et al. | Point cloud registration leveraging structural regularity in Manhattan world | |
Xie et al. | DXICP: A Fast Registration Algorithm for Point Cloud | |
Dong et al. | Optimization algorithm of RGB-D SLAM visual odometry based on triangulation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19955902 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.10.2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19955902 Country of ref document: EP Kind code of ref document: A1 |