CN111553409A - Point cloud identification method based on voxel shape descriptor - Google Patents

Info

Publication number
CN111553409A
Authority
CN
China
Prior art keywords
point cloud
point
normal
points
neighborhood
Prior art date
Legal status
Granted
Application number
CN202010340995.1A
Other languages
Chinese (zh)
Other versions
CN111553409B (en)
Inventor
陆军
朱波
华博文
陈坤
韦攀毅
王茁
Current Assignee
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University
Priority to CN202010340995.1A
Publication of CN111553409A
Application granted
Publication of CN111553409B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of computer vision and three-dimensional measurement, and particularly relates to a point cloud identification method based on a voxel shape descriptor. The normal estimation method based on improved PCA reduces the influence of noise on the normal; the local neighborhood information of points is effectively extracted, non-maximum suppression is performed on the local normal variance, and the extracted key points have high distinctiveness and low overlap. The method takes the variance of the neighborhood normals of a key point as its significant value, and intersects the significant-value intervals of the key points of the source point cloud and the target point cloud to preliminarily screen the key points and accelerate feature matching. The invention provides a voxel shape descriptor: by mapping the neighborhood points of a key point into the key point's local coordinate system and counting their three-dimensional spatial distribution, the feature descriptor is computed quickly and is suitable for describing large neighborhoods of dense point clouds.

Description

Point cloud identification method based on voxel shape descriptor
Technical Field
The invention belongs to the technical field of computer vision and three-dimensional measurement, and particularly relates to a point cloud identification method based on a voxel shape descriptor.
Background
Target recognition in a scene is basic research in the field of computer vision and has important application value in numerous fields, such as intelligent monitoring, automatic assembly, mobile operation, robotics, and medical analysis. Compared with a traditional two-dimensional image, a three-dimensional point cloud provides more geometric information and is insensitive to rotation and illumination. Three-dimensional point clouds therefore have potential advantages for the object recognition problem. In addition, with the rapid development of three-dimensional scanning technology, the acquisition of point cloud data has become convenient and fast, which gives three-dimensional object recognition irreplaceable value in the field of computer vision. The object recognition methods proposed so far often need to extract local features of an object from a large amount of point cloud data. Because point cloud scene data are large in range and scale, each object contains a large number of local features, and each local feature corresponds to a high-dimensional description vector, causing problems such as a large calculation amount and low calculation efficiency. On the other hand, actually measured three-dimensional point cloud scene data are susceptible to noise and unevenly distributed in density, so the accuracy of differential geometric features such as normal vectors and curvatures cannot be guaranteed. Aiming at these problems, the invention provides a point cloud identification method based on a voxel shape descriptor: the influence of noise on the normal is reduced by a point cloud normal calculation method using multi-scale neighborhoods and Euclidean distance weighting, and by improving the selection of key points and the construction of descriptors in several aspects, the identification speed and accuracy are effectively improved.
Disclosure of Invention
The invention aims to provide a point cloud identification method based on a voxel shape descriptor, which reduces the influence of noise on the normal and improves the speed and accuracy of identification.
The purpose of the invention is realized by the following technical scheme: the method comprises the following steps:
step 1: carrying out normal estimation on points in the point cloud by a Euclidean distance weighted principal component analysis method;
step 2: correcting the obtained normals by using a normal-direction ambiguity judgment algorithm based on region growing;
step 2.1: calculating a minimum bounding box of the point cloud to be detected, and determining coordinate ranges of the point cloud in x, y and z dimensions;
step 2.2: dividing the minimum bounding box into a plurality of small cubes of the same size, the size of each small cube being set according to the resolution of the point cloud; establishing index numbers for the small cubes according to their spatial order, namely the order of x, y and z from small to large, and marking small cubes containing no points as invalid;
step 2.3: calculating the straight line formed by the centroid $\bar{o}_i$ of each valid small cube $v_i$ and the centroid $\bar{o}$ of the entire point cloud:

$$\frac{x-\bar{x}}{\bar{x}_i-\bar{x}}=\frac{y-\bar{y}}{\bar{y}_i-\bar{y}}=\frac{z-\bar{z}}{\bar{z}_i-\bar{z}}$$

wherein $(\bar{x}_i,\bar{y}_i,\bar{z}_i)$ are the coordinates of the centroid $\bar{o}_i$, and $(\bar{x},\bar{y},\bar{z})$ are the coordinates of the centroid $\bar{o}$;
step 2.4: setting an error threshold e according to the point cloud resolution, and constructing the intersection point set J;
if a point (x, y, z) in small cube $v_i$ satisfies the following formula, the point is judged to intersect the straight line, the intersection is denoted $p_j$, and all intersections satisfying the formula form the set J:

$$|k_1-k_2|<e,\quad |k_2-k_3|<e,\quad |k_1-k_3|<e$$

wherein

$$k_1=\frac{x-\bar{x}}{\bar{x}_i-\bar{x}},\quad k_2=\frac{y-\bar{y}}{\bar{y}_i-\bar{y}},\quad k_3=\frac{z-\bar{z}}{\bar{z}_i-\bar{z}}$$
step 2.5: when the distance from every point $p_j$ in the intersection set J to the point cloud centroid $\bar{o}$ is less than the distance between the centroid $\bar{o}_i$ and the centroid $\bar{o}$, the small cube $v_i$ is marked "known normal direction"; with the centroid $\bar{o}$ as the viewpoint, the angle between the line from the viewpoint to each point in $v_i$ and that point's normal is calculated: if the angle is obtuse the normal direction is kept, and if it is acute the normal is reversed;
when the distance from every point $p_j$ in the intersection set J to the point cloud centroid $\bar{o}$ is greater than the distance between the centroid $\bar{o}_i$ and the centroid $\bar{o}$, the small cube $v_i$ is marked "known normal direction"; with the centroid $\bar{o}$ as the viewpoint, the angle between the line from the viewpoint to each point in $v_i$ and that point's normal is calculated: if the angle is acute the normal direction is kept, and if it is obtuse the normal is reversed;
when some points in the intersection set J are closer to the point cloud centroid $\bar{o}$ than the centroid $\bar{o}_i$ is while others are farther, the small cube $v_i$ is marked "unknown normal direction"; it is judged whether a neighboring small cube is marked "known normal direction", and if so, the normals of all points in $v_i$ are made consistent with the normals of the neighboring small cube, namely the included angle between the two is kept acute, the normal being reversed if the angle is obtuse; if the neighboring small cube is marked "unknown normal direction", the search continues to its neighbors until a small cube marked "known normal direction" is found, and all previously found small cubes marked "unknown normal direction" are corrected according to its normal direction;
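As a minimal illustration (not the patent's implementation), the viewpoint-based flipping rule of step 2.5 can be sketched in Python as follows; the function and parameter names are hypothetical:

```python
import numpy as np

def orient_normals(points, normals, viewpoint, keep_obtuse):
    """Flip normals so the angle between the viewpoint-to-point line and
    each normal stays obtuse (keep_obtuse=True) or acute (False)."""
    rays = points - viewpoint                       # viewpoint -> point vectors
    cos_angle = np.einsum('ij,ij->i', rays, normals)
    flip = cos_angle > 0 if keep_obtuse else cos_angle < 0
    normals[flip] *= -1.0                           # reverse the flagged normals
    return normals
```

For a cube marked "known normal direction" in the first case of step 2.5, this would be called with the point cloud centroid as the viewpoint and `keep_obtuse=True`.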
step 3: calculating a final normal by using a multi-scale normal fusion algorithm;
step 4: selecting pre-key points through a curvature threshold, and screening out the key points of the model point cloud and the target point cloud by taking the neighborhood normal variance as the significant value;
step 4.1: setting a curvature threshold, taking points with curvature larger than the curvature threshold as pre-key points $p_e$, and discarding points with curvature smaller than the curvature threshold as plane points;
step 4.2: calculating the significant value l of each pre-key point:

$$l=\frac{1}{m}\sum_{i=1}^{m}\left\|n_i^e-\bar{n}^e\right\|^2$$

wherein $n_i^e$ is the normal of the i-th neighborhood point of the pre-key point $p_e$, m is the number of neighborhood points of $p_e$, and $\bar{n}^e$ is the mean normal of all points in the neighborhood;
step 4.3: judging whether the significant value of the pre-key point is the maximum value in the neighborhood, if not, changing the point into a non-pre-key point;
step 4.4: sorting the significant values from small to large to obtain the minimum significant value $l_{\min}^M$ and the maximum significant value $l_{\max}^M$ of the pre-key points in the model point cloud, and the minimum significant value $l_{\min}^S$ and the maximum significant value $l_{\max}^S$ of the pre-key points in the scene point cloud;
step 4.5: taking the intersection of the significant-value intervals of the scene point cloud and the model point cloud, $[\max(l_{\min}^M,l_{\min}^S),\ \min(l_{\max}^M,l_{\max}^S)]$, and taking the pre-key points of the model point cloud and the scene point cloud whose significant values lie in this interval as key points, denoted $p_k^M$ and $p_k^S$ respectively;
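The following sketch, assuming per-point normals are already available, illustrates steps 4.2 to 4.5 (the non-maximum suppression of step 4.3 is omitted for brevity); names such as `saliency_values` are hypothetical:

```python
import numpy as np
from scipy.spatial import cKDTree

def saliency_values(points, normals, pre_idx, radius):
    """Significant value l = variance of the neighborhood normals (step 4.2)."""
    tree = cKDTree(points)
    l = np.zeros(len(points))
    for i in pre_idx:
        nbrs = normals[tree.query_ball_point(points[i], radius)]
        l[i] = np.mean(np.sum((nbrs - nbrs.mean(axis=0)) ** 2, axis=1))
    return l

def select_keypoints(l_model, pre_model, l_scene, pre_scene):
    """Keep pre-key points whose significant value lies in the intersection
    of the model and scene significant-value intervals (steps 4.4-4.5)."""
    lo = max(l_model[pre_model].min(), l_scene[pre_scene].min())
    hi = min(l_model[pre_model].max(), l_scene[pre_scene].max())
    keys_m = [i for i in pre_model if lo <= l_model[i] <= hi]
    keys_s = [i for i in pre_scene if lo <= l_scene[i] <= hi]
    return keys_m, keys_s
```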
step 5: constructing the voxel shape descriptor by mapping the neighborhood points of each key point into the local coordinate system of the key point and counting the three-dimensional spatial distribution of the neighborhood points;
step 6: searching the correspondences between the model point cloud and the scene point cloud through distance-weighted Hough voting to complete point cloud target identification.
The present invention may further comprise:
the method for finding the corresponding relationship between the model point cloud and the scene point cloud through distance weighted hough voting in the step 6 specifically comprises the following steps:
step 6.1: searching, for each scene point cloud key point $p_k^{s_i}$, the nearest point $p_k^{m_i}$ in the model point cloud in terms of the Euclidean distance between feature descriptors; setting a distance threshold, and if the Euclidean distance between the two feature descriptors is less than the distance threshold, taking $(p_k^{s_i}, p_k^{m_i})$ as a group of valid correspondences; otherwise the key point has no corresponding point, and the next key point of the scene point cloud is calculated;
step 6.2: continuing to search possible corresponding points of the key point $p_k^{s_i}$: if the Euclidean distance of the second-nearest feature descriptor is larger than the distance threshold or more than twice the Euclidean distance of the nearest feature descriptor, the calculation stops; otherwise $(p_k^{s_i}, p_k^{m_j})$ is taken as another group of initial correspondences, the third neighbor ordered by feature-descriptor Euclidean distance is calculated, and the iteration repeats until all correspondences satisfying the distance threshold are found as the initial correspondences;
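A sketch of the initial correspondence search of steps 6.1 and 6.2, assuming the descriptors are stored as rows of NumPy arrays (the twice-the-nearest-distance stopping rule follows the reading above):

```python
import numpy as np

def initial_correspondences(scene_desc, model_desc, dist_thresh):
    """Collect one-to-many initial correspondences by feature-descriptor
    Euclidean distance (steps 6.1-6.2)."""
    corrs = []
    for i, d_s in enumerate(scene_desc):
        dists = np.linalg.norm(model_desc - d_s, axis=1)
        order = np.argsort(dists)
        nearest = dists[order[0]]
        if nearest >= dist_thresh:
            continue                        # no corresponding point (step 6.1)
        for j in order:
            if dists[j] >= dist_thresh or dists[j] > 2.0 * nearest:
                break                       # stopping rule of step 6.2
            corrs.append((i, int(j)))       # another group of correspondences
    return corrs
```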
step 6.3: calculating, for each key point $p_k^{m_i}$ in the model point cloud, the global reference vector $v_g^{m_i}$ between it and the model center of gravity $c_m$:

$$v_g^{m_i}=c_m-p_k^{m_i}$$
Step 6.4: the obtained global reference vector
Figure BDA00024684834300000318
Converting the local coordinate system of the key point to obtain a model local reference vector
Figure BDA00024684834300000319
Figure BDA0002468483430000041
Wherein,
Figure BDA0002468483430000042
is a rotation transformation matrix;
Figure BDA0002468483430000043
Figure BDA0002468483430000044
are respectively key points
Figure BDA0002468483430000045
And unit feature vectors under a local coordinate system constructed by the points in the neighborhood;
step 6.5: each key point $p_k^{m_i}$ in the model point cloud passes its local reference vector to its corresponding points $p_k^{s_i}$ in the scene point cloud, forming a plurality of corresponding local reference vectors; because the local coordinate system has rotation invariance, the local vectors of corresponding key points in the scene point cloud and the model point cloud are equal: $v_l^{s_i}=v_l^{m_i}$;
step 6.6: transforming the local reference vector obtained in the scene into the global coordinate system of the scene point cloud to obtain the global vector $v_g^{s_i}$:

$$v_g^{s_i}=R_{lg}^{s_i}\,v_l^{s_i}$$

wherein $R_{lg}^{s_i}=\left[e_x^{s_i},\ e_y^{s_i},\ e_z^{s_i}\right]$ is the rotation matrix, and $e_x^{s_i}$, $e_y^{s_i}$, $e_z^{s_i}$ are the unit feature vectors of the local coordinate system constructed from the key point $p_k^{s_i}$ and the points in its neighborhood;
step 6.7: dividing the scene point cloud into voxels of equal size; if the end point of a global vector $v_g^{s_i}$ falls in a voxel, that voxel's vote value is increased; the added vote value (score) is calculated from the distance so that the longer the vector from the key point to the voted position, the smaller the vote it contributes;
step 6.8: searching the voxel with the most votes, keeping the corresponding relation related to the voxel as a final corresponding relation, and eliminating other corresponding relations;
step 6.9: setting a ratio threshold; when the ratio of final correspondences to initial correspondences is larger than the ratio threshold, the model point cloud is identified in the scene point cloud; a transformation is then solved from the final correspondences by a random sample consensus (RANSAC) registration algorithm, the model point cloud is transformed into the scene point cloud, and the points in the scene point cloud closest to the transformed points are marked as the identified result.
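A compact sketch of the distance-weighted voting of steps 6.7 and 6.8; since the patent's weighting formula is not reproduced above, the decay `1 / (1 + length)` used here is only an assumed placeholder:

```python
import numpy as np

def hough_vote(scene_keypoints, frames_lg, local_vectors, voxel_size):
    """Vote for the scene position pointed to by each global vector
    v_g = R_lg @ v_l anchored at its key point (steps 6.7-6.8)."""
    votes = {}
    for p, R_lg, v_l in zip(scene_keypoints, frames_lg, local_vectors):
        end_point = p + R_lg @ v_l
        cell = tuple(np.floor(end_point / voxel_size).astype(int))
        weight = 1.0 / (1.0 + np.linalg.norm(v_l))    # assumed distance decay
        votes[cell] = votes.get(cell, 0.0) + weight
    best = max(votes, key=votes.get)                  # voxel with most votes
    return best, votes[best]
```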
The method for constructing the voxel shape descriptor in the step 5 specifically comprises the following steps:
step 5.1: after unit orthogonalization of the feature vectors of the key point, establishing the local coordinate system of the key point with the key point as the coordinate origin, the feature vector $v_1$ corresponding to the largest eigenvalue $\lambda_1$ as the x-axis, the feature vector $v_2$ corresponding to the second largest eigenvalue $\lambda_2$ as the y-axis, and the feature vector $v_3$ corresponding to the smallest eigenvalue $\lambda_3$ as the z-axis;
step 5.2: in the local coordinate system of the key point, establishing a cube with side length 25 × s centered on the key point, where s is the point cloud resolution;
step 5.3: dividing the length, width and height of the cube into 5 equal parts each, forming 125 small cubic subspaces;
step 5.4: counting the number of cloud points in each small cube and flattening the counts into a 125-dimensional column vector in x, y, z order; traversing all key points of the source point cloud and the target point cloud to obtain the feature descriptor sets of the source point cloud and the target point cloud.
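A sketch of the 125-dimensional voxel shape descriptor of steps 5.1 to 5.4, assuming the rows of `axes` hold the unit x, y, z axes of the key point's local coordinate system:

```python
import numpy as np

def voxel_shape_descriptor(keypoint, axes, neighbors, s, grid=5):
    """Count neighborhood points in a 5 x 5 x 5 grid spanning a cube of
    side 25*s (s = point cloud resolution) centered on the key point."""
    local = (neighbors - keypoint) @ axes.T        # map into the local frame
    cell = 25.0 * s / grid                         # side length of a subspace
    idx = np.floor(local / cell).astype(int) + grid // 2
    hist = np.zeros((grid, grid, grid))
    inside = np.all((idx >= 0) & (idx < grid), axis=1)
    for i, j, k in idx[inside]:
        hist[i, j, k] += 1                         # one count per point
    return hist.ravel()                            # flatten to 125 dimensions
```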
The method for estimating the normals of the points in the point cloud by the Euclidean distance weighted principal component analysis method in step 1 is specifically: the covariance matrix is improved by adding a distance weight coefficient w, as shown in the following formula:

$$E_{3\times3}=\frac{1}{m}\sum_{j=1}^{m}w_j\,(p_i^j-\bar{p}_i)(p_i^j-\bar{p}_i)^T$$

wherein the distance weight coefficient $w_j=\frac{r-d(p_i^j,p_i)}{r}$, r is the neighborhood radius, $p_i^j$ is a neighborhood point of $p_i$, m is the number of neighborhood points, $\bar{p}_i$ represents the center of gravity of the neighborhood of point $p_i$, and $d(p_i^j,p_i)$ is the distance between point $p_i^j$ and point $p_i$; $E_{3\times3}$ is decomposed, and the eigenvector corresponding to the minimum eigenvalue is taken as the normal vector of point $p_i$.
The method for calculating the final normal by the multi-scale normal fusion algorithm in step 3 is specifically: take 3 equally spaced neighborhood radii $r_1=3s$, $r_2=5s$, $r_3=7s$, where s is the resolution of the point cloud; the weighted average of the normals $n_1$, $n_2$, $n_3$ obtained in step 1 and step 2 for radii $r_1$, $r_2$, $r_3$ is calculated, and the average n is used as the final normal.
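Building on the sketch above, the multi-scale fusion of step 3 can be illustrated as follows; equal weights are assumed here because the patent's weighting is not reproduced:

```python
import numpy as np
from scipy.spatial import cKDTree

def fused_normal(p_i, points, tree, s):
    """Average the normals estimated at radii 3s, 5s and 7s (step 3)."""
    n = np.zeros(3)
    for r in (3 * s, 5 * s, 7 * s):
        nbrs = points[tree.query_ball_point(p_i, r)]
        n += weighted_pca_normal(p_i, nbrs, r)     # sketch from step 1 above
    return n / np.linalg.norm(n)                   # re-normalized mean normal

# usage sketch: tree = cKDTree(points); n = fused_normal(points[0], points, tree, s)
```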
The invention has the beneficial effects that:
the normal estimation method based on the improved PCA reduces the influence degree of noise on the normal, effectively extracts the local neighborhood information of points, performs non-maximum suppression on the variance of the local normal, and the extracted key points have the characteristics of high identification degree and low overlap. The method takes the variance of the neighborhood normal of the key points as a significant value, and calculates the intersection of the significant values of the key points of the source point cloud and the target point cloud to preliminarily extract the intersection of the key points and accelerate the feature matching. The invention provides a voxel shape descriptor, which is suitable for describing a large-range neighborhood of dense point cloud by mapping neighborhood points of key points to a local coordinate system of the key points, counting the three-dimensional spatial distribution of the neighborhood points and quickly calculating the feature descriptor. The nearest neighbor of the feature descriptors of the key points is used for selecting an initial matching relation, and then the model point cloud is searched in the scene point cloud through improved Hough voting for identification, so that the identification rate is increased, and the identification accuracy is improved.
Drawings
FIG. 1 is a diagram of a point cloud identification model according to an embodiment of the invention.
FIG. 2 is a graph comparing results before and after normal redirection in an embodiment of the present invention.
FIG. 3 is a distribution diagram of key points under Gaussian noise according to an embodiment of the present invention.
Fig. 4 is a diagram of the initial correspondence and the final correspondence in the embodiment of the present invention.
FIG. 5 is a diagram illustrating a multi-target identification mapping according to an embodiment of the present invention.
Fig. 6 is a pose estimation result diagram in the embodiment of the present invention.
FIG. 7 is a table of experimental data in an example of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention aims to disclose a point cloud identification method based on a voxel shape descriptor. First, the point cloud normals are calculated by a Euclidean distance weighted principal component analysis (PCA) method, the obtained normals are corrected using a normal-direction ambiguity judgment algorithm based on region growing, and the final normals are calculated using a multi-scale normal fusion algorithm. Then, pre-key points are selected through a curvature threshold, and the key points of the model point cloud and the target point cloud are screened out using the neighborhood normal variance as the significant value. Next, the three-dimensional spatial distribution of the neighborhood points is counted by mapping the neighborhood points of each key point into the key point's local coordinate system, and the voxel shape descriptor is constructed. Finally, the correspondences between the model point cloud and the scene point cloud are found through distance-weighted Hough voting, completing point cloud target identification. The method effectively reduces the influence of noise on the normal, improves the recognition speed and accuracy, and maintains good recognition performance even for complex scenes.
The method comprises the following steps:
Step 1: normal estimation.
Step 1.1: firstly, the covariance matrix is improved by adding a distance weight coefficient w, as shown in formula (1):

$$E_{3\times3}=\frac{1}{m}\sum_{j=1}^{m}w_j\,(p_i^j-\bar{p}_i)(p_i^j-\bar{p}_i)^T \qquad (1)$$

wherein $w_j=\frac{r-d(p_i^j,p_i)}{r}$, r is the neighborhood radius, $p_i^j$ is a neighborhood point of $p_i$, m is the number of neighborhood points, $\bar{p}_i$ represents the center of gravity of the neighborhood of point $p_i$, and $d(p_i^j,p_i)$ is the distance between point $p_i^j$ and point $p_i$. $E_{3\times3}$ is then decomposed, and the eigenvector corresponding to the minimum eigenvalue is taken as the normal vector of point $p_i$.
Step 1.2: in order to solve the ambiguity problem of normal estimation (namely that the calculated normal direction may be opposite to the real normal direction, so the normal directions of the whole point cloud cannot be made consistent), a normal-direction ambiguity judgment algorithm based on region growing is proposed to redirect the normals.
Firstly, the minimum bounding box of the point cloud is calculated, determining its coordinate ranges $x_{\min}$, $x_{\max}$, $y_{\min}$, $y_{\max}$, $z_{\min}$, $z_{\max}$ in the three dimensions x, y and z, and the minimum bounding box is divided into a plurality of small cubes of the same size, the size of each small cube being set according to the resolution of the point cloud. Index numbers are established for the small cubes according to their spatial order (the order of x, y and z from small to large), and small cubes containing no points are marked as invalid.
Then, for the point to be calculated, let the small cube containing the point be $v_i$, the centroid of the point cloud inside it be $\bar{o}_i$, and the centroid of the whole point cloud be $\bar{o}$. The equation of the straight line through $\bar{o}_i$ and $\bar{o}$ is:

$$\frac{x-\bar{x}}{\bar{x}_i-\bar{x}}=\frac{y-\bar{y}}{\bar{y}_i-\bar{y}}=\frac{z-\bar{z}}{\bar{z}_i-\bar{z}}$$
wherein $(\bar{x}_i,\bar{y}_i,\bar{z}_i)$ are the coordinates of the centroid $\bar{o}_i$, and $(\bar{x},\bar{y},\bar{z})$ are the coordinates of the centroid $\bar{o}$. The three parts of the straight-line equation are denoted $k_1$, $k_2$, $k_3$, as shown in formulas (2), (3) and (4):

$$k_1=\frac{x-\bar{x}}{\bar{x}_i-\bar{x}} \qquad (2)$$

$$k_2=\frac{y-\bar{y}}{\bar{y}_i-\bar{y}} \qquad (3)$$

$$k_3=\frac{z-\bar{z}}{\bar{z}_i-\bar{z}} \qquad (4)$$

The coordinates of a point in small cube $v_i$ are substituted into formulas (2), (3) and (4) to obtain $k_1$, $k_2$, $k_3$. An error threshold e is set (according to the resolution of the point cloud); if $k_1$, $k_2$, $k_3$ satisfy formula (5), the point (x, y, z) in small cube $v_i$ is considered to intersect the straight line, the intersection is denoted $p_j$, and all intersections satisfying formula (5) form the set J:

$$|k_1-k_2|<e,\quad |k_2-k_3|<e,\quad |k_1-k_3|<e \qquad (5)$$
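Formula (5) amounts to checking that the three ratios agree within e; a small sketch (assuming the two centroids differ in every coordinate, so no division by zero occurs):

```python
import numpy as np

def intersects_line(point, o_cloud, o_cube, e):
    """True when a point satisfies formula (5) for the line through the
    point cloud centroid o_cloud and the cube centroid o_cube."""
    k = (np.asarray(point) - o_cloud) / (o_cube - o_cloud)   # (k1, k2, k3)
    return (abs(k[0] - k[1]) < e and
            abs(k[1] - k[2]) < e and
            abs(k[0] - k[2]) < e)
```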
Finally, the normal direction is determined in three cases. When the distance from every point $p_j$ in the intersection set J to the point cloud centroid $\bar{o}$ is less than the distance between the centroid $\bar{o}_i$ and the centroid $\bar{o}$, the small cube $v_i$ is marked "known normal direction", and the normal direction is determined as follows: with the centroid $\bar{o}$ as the viewpoint, the angle between the line from the viewpoint to each point in $v_i$ and that point's normal is calculated; the normal direction is kept if the angle is obtuse, and the normal is reversed if the angle is acute. When the distance from every point $p_j$ in the intersection set J to the point cloud centroid $\bar{o}$ is greater than the distance between the centroid $\bar{o}_i$ and the centroid $\bar{o}$, the small cube $v_i$ is marked "known normal direction", and the normal direction is determined as follows: with the centroid $\bar{o}$ as the viewpoint, the angle between the line from the viewpoint to each point in $v_i$ and that point's normal is calculated; the normal direction is kept if the angle is acute, and the normal is reversed if the angle is obtuse. When some points in the intersection set J are closer to the point cloud centroid $\bar{o}$ than the centroid $\bar{o}_i$ is while others are farther, the small cube $v_i$ is marked "unknown normal direction", and the normal direction is determined as follows: it is judged whether a neighboring small cube is marked "known normal direction"; if so, the normals of all points in $v_i$ should be consistent with the normals of the neighboring small cube, namely the included angle between the two is kept acute, and the normal is reversed if the angle is obtuse. If the neighboring small cube is marked "unknown normal direction", the search continues to its neighbors until a small cube marked "known normal direction" is found, and all previously found small cubes marked "unknown normal direction" are corrected according to its normal direction.
Step 1.3: take 3 equally spaced neighborhood radii $r_1=3s$, $r_2=5s$, $r_3=7s$, where s is the point cloud resolution. The weighted average of the normals $n_1$, $n_2$, $n_3$ determined in step 1.1 and step 1.2 for radii $r_1$, $r_2$, $r_3$ is calculated, and the average n is taken as the final normal.
Step 2: key point search.
Step 2.1: a threshold is set; points with curvature larger than the threshold are taken as pre-key points $p_e$, and points with curvature smaller than the threshold are discarded as plane points.
Step 2.2: the normal variance in the point's neighborhood is taken as the significant value l of the pre-key point, as shown in formula (7). The significant value of non-pre-key points is set to 0, and the significant value of each pre-key point is calculated:

$$l=\frac{1}{m}\sum_{i=1}^{m}\left\|n_i^e-\bar{n}^e\right\|^2 \qquad (7)$$

wherein $n_i^e$ is the normal of the i-th neighborhood point of pre-key point $p_e$, m is the number of neighborhood points, and $\bar{n}^e$ is the mean normal of all points in the neighborhood.
Step 2.3: it is judged whether the significant value of each pre-key point is the maximum in its neighborhood; if not, the point is discarded and becomes a non-pre-key point.
Step 2.4: the significant values are sorted from small to large; $l_{\min}^M$ is the minimum and $l_{\max}^M$ the maximum significant value of the model point cloud pre-key points. The significant values of the scene point cloud pre-key points are calculated likewise; $l_{\min}^S$ is the minimum and $l_{\max}^S$ the maximum significant value of the scene point cloud pre-key points. The intersection of the significant-value intervals of the scene point cloud and the model point cloud, $[\max(l_{\min}^M,l_{\min}^S),\ \min(l_{\max}^M,l_{\max}^S)]$, is taken; the pre-key points of the model point cloud and the scene point cloud whose significant values lie in this interval are taken as key points, denoted $p_k^M$ and $p_k^S$ respectively.
Step 3: feature descriptor calculation.
Step 3.1: a local coordinate system is established with the key point as origin. The feature vectors of the key point are unit-orthogonalized; the feature vector $v_1$ corresponding to the largest eigenvalue $\lambda_1$ is the x-axis, the feature vector $v_2$ corresponding to the second largest eigenvalue $\lambda_2$ is the y-axis, and the feature vector $v_3$ corresponding to the smallest eigenvalue $\lambda_3$ is the z-axis, with the key point as the coordinate origin.
Step 3.2: in the local coordinate system, a cube with side length 25 × s is established centered on the key point, where s is the resolution.
Step 3.3: the length, width and height of the cube are each divided into 5 equal parts, forming 125 small cubic subspaces.
Step 3.4: the number of cloud points in each small cube is counted and flattened into a 125-dimensional column vector in x, y, z order. All key points of the source point cloud and the target point cloud are traversed to obtain the feature descriptor sets of the source point cloud and the target point cloud.
Step 4: point cloud identification.
Step 4.1: for each scene point cloud key point $p_k^{s_i}$, the nearest point $p_k^{m_i}$ in the model point cloud in terms of feature-descriptor Euclidean distance is searched. A threshold is set; if the Euclidean distance between the two feature descriptors is less than the threshold, $(p_k^{s_i}, p_k^{m_i})$ is taken as a group of valid correspondences; otherwise the key point has no corresponding point, and the next key point of the scene point cloud is calculated. The search then continues for possible corresponding points of $p_k^{s_i}$: if the Euclidean distance of the second-nearest feature descriptor is larger than the distance threshold or more than twice the Euclidean distance of the nearest feature descriptor, the calculation stops; otherwise $(p_k^{s_i}, p_k^{m_j})$ is taken as another group of initial correspondences, the third neighbor ordered by feature-descriptor Euclidean distance is calculated, and the iteration repeats until all correspondences satisfying the threshold are found as the initial correspondences.
Step 4.2: first, for each key point $p_k^{m_i}$ in the model, its global reference vector $v_g^{m_i}$ to the model center of gravity $c_m$ is calculated, as shown in formula (8):

$$v_g^{m_i}=c_m-p_k^{m_i} \qquad (8)$$

The obtained global reference vector $v_g^{m_i}$ is transformed into the local coordinate system of the key point, and the model local reference vector $v_l^{m_i}$ is calculated with formula (9):

$$v_l^{m_i}=R_{gl}^{m_i}\,v_g^{m_i} \qquad (9)$$

wherein $R_{gl}^{m_i}$ is the rotation transformation matrix:

$$R_{gl}^{m_i}=\left[e_x^{m_i},\ e_y^{m_i},\ e_z^{m_i}\right]^T \qquad (10)$$

and $e_x^{m_i}$, $e_y^{m_i}$, $e_z^{m_i}$ are the unit feature vectors of the local coordinate system constructed from the key point $p_k^{m_i}$ and the points in its neighborhood.
Then, according to the one-to-many correspondences obtained in step 4.1, the local reference vector of each key point $p_k^{m_i}$ in the model point cloud is passed to its corresponding points $p_k^{s_i}$ in the scene point cloud, forming a plurality of corresponding local reference vectors. Because the local coordinate system has rotation invariance, the local vectors of corresponding key points in the scene point cloud and the model point cloud are equal:

$$v_l^{s_i}=v_l^{m_i} \qquad (11)$$

The local reference vector obtained in the scene is transformed into the global coordinate system of the scene point cloud with formula (12) to obtain the global vector $v_g^{s_i}$:

$$v_g^{s_i}=R_{lg}^{s_i}\,v_l^{s_i} \qquad (12)$$

wherein $R_{lg}^{s_i}$ is the rotation matrix, as shown in formula (13):

$$R_{lg}^{s_i}=\left[e_x^{s_i},\ e_y^{s_i},\ e_z^{s_i}\right] \qquad (13)$$

and $e_x^{s_i}$, $e_y^{s_i}$, $e_z^{s_i}$ are the unit feature vectors of the local coordinate system constructed from the key point $p_k^{s_i}$ and the points in its neighborhood.
Step 4.3: the scene point cloud is divided into voxels of equal size; if the end point of a global vector $v_g^{s_i}$ falls within a voxel, that voxel's vote value is increased by 1. Because a key point farther from the center of gravity is affected more strongly by noise, and the longer the vector from the key point to the center of gravity, the larger the error of the voted position, distance-weighted Hough voting is proposed on the basis of traditional Hough voting: the added vote value (score) is calculated from the distance according to formula (14), decreasing as the vector becomes longer. The number of votes falling into each voxel is counted.
Step 4.4: the voxel with the most votes is found, the correspondences associated with it are kept as the final correspondences, and the other correspondences are rejected. A threshold is set; when the ratio of final correspondences to initial correspondences is greater than the threshold, the model point cloud is identified in the scene point cloud. Then a transformation is solved from the final correspondences using a random sample consensus (RANSAC) registration algorithm, the model point cloud is transformed into the scene point cloud, and the points in the scene point cloud closest to the transformed points are marked as the identified result.
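The rigid transformation that RANSAC estimates from the final correspondences can be computed in closed form from any correspondence subset; a standard SVD (Kabsch) sketch, not the patent's code:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```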
The method has the following advantages. The normal estimation method based on improved PCA reduces the influence of noise on the normal. The key point search algorithm based on neighborhood normal variance effectively extracts the local neighborhood information of points and performs non-maximum suppression on the local normal variance, so the extracted key points have high distinctiveness and low overlap. Taking the variance of the neighborhood normals of a key point as its significant value, and intersecting the significant-value intervals of the key points of the source point cloud and the target point cloud, preliminarily screens the key points and accelerates feature matching. The proposed voxel shape descriptor counts the three-dimensional spatial distribution of the neighborhood points mapped into the key point's local coordinate system; the feature descriptor is computed quickly and is suitable for describing large neighborhoods of dense point clouds. The nearest neighbors of the key point feature descriptors are used to select an initial matching relation, and the model point cloud is then searched in the scene point cloud through improved Hough voting for identification, which increases the identification speed and improves the identification accuracy.
Example 1:
the point cloud model data used in the present invention are shown in fig. 1, and include 16 types, such as armadillo, bunny, cat, centaur, cheff, chicken, david, dog, dragon, face, ganesha, gorilla, horse, para, trex, wolf, and the like, and also include scene point clouds of the respective models, and scene point cloud combinations.
FIG. 2 shows comparison results before and after normal reorientation, where (a) and (c) are non-oriented normals and (b) and (d) are oriented normals.
FIG. 3 shows the distribution of key points under Gaussian noise, where (a) is the noise-free case; (b) noise at 0.1 times the resolution; (c) at 0.2 times the resolution; (d) at 0.3 times the resolution; (e) at 0.4 times the resolution.
FIG. 4 shows an initial correspondence and a final correspondence, wherein (a) is a cheff initial correspondence; (b) is cheff corresponding relation; (c) is the initial corresponding relation of ganesha; (d) is a ganesha corresponding relationship.
FIG. 5 is a correspondence of multi-target identification, wherein (a) is a bunny correspondence; (b) is the corresponding relation of armadillo; (c) is a ganesha corresponding relationship.
The method is implemented according to the following steps:
Step 1: normal vector estimation.
Step 1.1: normal estimation is performed on the points in the model point cloud and the scene point cloud by the principal component analysis (PCA) method.
Step 1.2: the point cloud is represented as several small cubes $v_i$ of equal size, with the cube size set according to the point cloud resolution. The point cloud inside each small cube is ordered so that points adjacent in index are also adjacent in space: a kd-tree is built for the points in the cube, one point is selected at random as the starting point, its closest point in the kd-tree is found as the 2nd point, then with the 2nd point as the current point the closest of the remaining points is found as the 3rd point, and so on until the points in the small cube are sorted. For each small cube $v_i$ containing points, the centroid $\bar{o}_i$ of the point cloud inside it and the centroid $\bar{o}$ of the entire point cloud are calculated, and the straight line through $\bar{o}_i$ and $\bar{o}$ is formed. The intersections of points (x, y, z) in small cube $v_i$ with the straight line are denoted $p_j$, and the set of intersections is J.
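The nearest-neighbor chain ordering of step 1.2 can be sketched as follows (an O(n^2) illustration; the full sorted query is reused at each step rather than shrinking the tree):

```python
import numpy as np
from scipy.spatial import cKDTree

def order_by_nearest(points):
    """Order points so that index neighbors are also spatial neighbors:
    start at one point and repeatedly move to the closest unvisited point."""
    tree = cKDTree(points)
    visited = np.zeros(len(points), dtype=bool)
    order, cur = [0], 0
    visited[0] = True
    for _ in range(len(points) - 1):
        _, idx = tree.query(points[cur], k=len(points))   # all, by distance
        nxt = next(i for i in idx if not visited[i])
        visited[nxt] = True
        order.append(nxt)
        cur = nxt
    return points[order]
```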
When the distance from every point $p_j$ in the intersection set J to the point cloud centroid $\bar{o}$ is less than the distance between the centroid $\bar{o}_i$ and the centroid $\bar{o}$, the small cube $v_i$ is marked "known normal direction", and the normal direction is determined as follows: with the centroid $\bar{o}$ as the viewpoint, the angle between the line from the viewpoint to each point in $v_i$ and that point's normal is calculated; the normal direction is kept if the angle is obtuse, and the normal is reversed if the angle is acute. When the distance from every point $p_j$ in the intersection set J to the point cloud centroid $\bar{o}$ is greater than the distance between the centroid $\bar{o}_i$ and the centroid $\bar{o}$, the small cube $v_i$ is marked "known normal direction", and the normal direction is determined as follows: with the centroid $\bar{o}$ as the viewpoint, the angle between the line from the viewpoint to each point in $v_i$ and that point's normal is calculated; the normal direction is kept if the angle is acute, and the normal is reversed if the angle is obtuse. When some points in the intersection set J are closer to the point cloud centroid $\bar{o}$ than the centroid $\bar{o}_i$ is while others are farther, the small cube $v_i$ is marked "unknown normal direction", and the normal direction is determined as follows: it is judged whether a neighboring small cube is marked "known normal direction"; if so, the normals of all points in $v_i$ should be consistent with the normals of the neighboring small cube, namely the included angle between the two is kept acute, and the normal is reversed if the angle is obtuse. If the neighboring small cube is marked "unknown normal direction", the search continues to its neighbors until a small cube marked "known normal direction" is found, and all previously found small cubes marked "unknown normal direction" are corrected according to its normal direction.
Step 1.3: take 3 equally spaced neighborhood radii $r_1=3s$, $r_2=5s$, $r_3=7s$, where s is the point cloud resolution; the normals calculated for the 3 neighborhood radii are weighted and averaged, and the average n is taken as the final normal.
Fig. 2 shows the comparison before and after normal redirection, where the neighborhood radius and distance threshold are 5 times the resolution and the voxel size is 10 times the resolution. Grey denotes points and black the normal direction; the normals of the same surface are nearly parallel. Undirected normal estimation produces ambiguity at inflection points, while the oriented normals conform better to the overall normal direction of the point cloud.
Step 2: key point search
Firstly, a threshold is set, and points with curvature larger than the threshold are taken as pre-key points $p_e$. Then the normal variance in the point's neighborhood is taken as the significant value l of the pre-key point, as shown in formula (7); the significant value of non-pre-key points is set to 0, and the significant value of each pre-key point is calculated. Next, it is judged whether the significant value of each pre-key point is the maximum in its neighborhood; if not, the point is discarded and becomes a non-pre-key point. Finally the significant values are sorted from small to large: $l_{\min}^M$ and $l_{\max}^M$ are the minimum and maximum significant values of the model point cloud pre-key points, and, after calculating the significant values of the scene point cloud pre-key points, $l_{\min}^S$ and $l_{\max}^S$ are their minimum and maximum. The intersection of the significant-value intervals of the scene point cloud and the model point cloud, $[\max(l_{\min}^M,l_{\min}^S),\ \min(l_{\max}^M,l_{\max}^S)]$, is taken; the pre-key points of the model point cloud and the scene point cloud whose significant values lie in this interval are taken as key points, denoted $p_k^M$ and $p_k^S$ respectively.
Fig. 3 shows the distribution of the key points after Gaussian noise of different magnitudes is added; under the influence of noise, most key points remain at similar positions, which strengthens the noise robustness of the recognition algorithm.
Step 3: feature descriptor calculation.
Firstly, with the key point as origin, the feature vectors of the key point are unit-orthogonalized; the feature vector $v_1$ corresponding to the largest eigenvalue $\lambda_1$ is the x-axis, the feature vector $v_2$ corresponding to the second largest eigenvalue $\lambda_2$ is the y-axis, and the feature vector $v_3$ corresponding to the smallest eigenvalue $\lambda_3$ is the z-axis; with the key point as the coordinate origin, the local coordinate system of the key point is established. Then, in the local coordinate system, a cube with side length 25 × s is established centered on the key point, where s is the resolution. The length, width and height of the cube are then each divided into 5 equal parts, forming 125 small cubic subspaces. Finally, the number of cloud points in each small cube is counted and flattened into a 125-dimensional column vector in x, y, z order. All key points of the source point cloud and the target point cloud are traversed to obtain the feature descriptor sets of the source point cloud and the target point cloud.
Step 4: point cloud identification.
Step 4.1: firstly, for each scene point cloud key point $p_k^{s_i}$, the nearest point $p_k^{m_i}$ in the model point cloud in terms of feature-descriptor Euclidean distance is searched. A threshold is set; if the Euclidean distance between the two feature descriptors is less than the threshold, $(p_k^{s_i}, p_k^{m_i})$ is taken as a group of valid correspondences; otherwise the key point has no corresponding point, and the next key point of the scene point cloud is calculated. The search then continues for possible corresponding points of $p_k^{s_i}$: if the Euclidean distance of the second-nearest feature descriptor is larger than the distance threshold or more than twice the Euclidean distance of the nearest feature descriptor, the calculation stops; otherwise $(p_k^{s_i}, p_k^{m_j})$ is taken as another group of initial correspondences, the third neighbor ordered by feature-descriptor Euclidean distance is calculated, and the iteration repeats until all correspondences satisfying the threshold are found as the initial correspondences.
Then, for each key point in the model, the global vector from the key point to the model center of gravity, $v_g^{m_i}$, is calculated and transformed into the local coordinate system of the key point. Because the local coordinate system has rotation invariance, the local vectors of corresponding key points in the scene point cloud and the model point cloud are equal. According to the initial correspondences, the local vector of each key point $p_k^{m_i}$ in the model point cloud is passed to its corresponding points $p_k^{s_i}$ in the scene point cloud, and the local vectors obtained in the scene are transformed into the global coordinate system of the scene point cloud, giving the global vectors $v_g^{s_i}$.
Finally, the scene point cloud is divided into voxels of equal size; if the end point of a global vector $v_g^{s_i}$ falls within a voxel, that voxel's vote value is increased. Because a key point farther from the center of gravity is affected more strongly by noise, and the longer the vector from the key point to the center of gravity, the larger the error of the voted position, distance-weighted Hough voting is proposed on the basis of traditional Hough voting: the added vote value (score) decreases with the distance. The number of votes falling into each voxel is counted.
Step 4.2: the voxel with the most votes is found, the correspondences associated with it are kept as the final correspondences, and the other correspondences are eliminated. A threshold is set; when the ratio of final correspondences to initial correspondences is greater than the threshold, the model point cloud is identified in the scene point cloud. Then the transformation between the model point cloud and the scene point cloud is solved from the final correspondences using a random sample consensus (RANSAC) registration algorithm, the model point cloud is transformed into the scene point cloud, and the points in the scene point cloud closest to the transformed points are marked as the identified result.
In Fig. 4, (a) and (c) are the initial correspondences, and (b) and (d) are the correct correspondences after the wrong correspondences are removed; with the voxel shape descriptor and the improved Hough voting matching algorithm, a large number of valid correspondences can be found, improving the robustness of the recognition algorithm. The distance threshold of the initial correspondences is 0.01, and the voting voxels are 10 times the resolution. Fig. 5 shows the matching relations when recognizing models in a complex scene point cloud; the accuracy of the correspondences calculated by the improved Hough voting is high, so all objects in the scene can be recognized at the same time. Fig. 6 shows the pose estimation results of the improved Hough voting based on the voxel shape descriptor, which are highly accurate and can estimate the poses of multiple targets in the scene. Fig. 7 gives experimental data of the point cloud identification algorithm based on the voxel shape descriptor, recording in detail the time consumption and the main parameters of each stage of the identification process. Single-target recognition takes about 1 s, and recognition of three targets about 1.7 s.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A point cloud identification method based on voxel shape descriptors is characterized by comprising the following steps:
step 1: carrying out normal estimation on points in the point cloud by a Euclidean distance weighted principal component analysis method;
step 2: correcting the obtained normals by using a normal-direction ambiguity judgment algorithm based on region growing;
step 2.1: calculating a minimum bounding box of the point cloud to be detected, and determining coordinate ranges of the point cloud in x, y and z dimensions;
step 2.2: dividing the minimum bounding box into a plurality of small cubes of the same size, the size of each small cube being set according to the resolution of the point cloud; establishing index numbers for the small cubes according to their spatial order, namely the order of x, y and z from small to large, and marking small cubes containing no points as invalid;
step 2.3: calculating the straight line formed by the centroid $\bar{o}_i$ of each valid small cube $v_i$ and the centroid $\bar{o}$ of the entire point cloud:

$$\frac{x-\bar{x}}{\bar{x}_i-\bar{x}}=\frac{y-\bar{y}}{\bar{y}_i-\bar{y}}=\frac{z-\bar{z}}{\bar{z}_i-\bar{z}}$$

wherein $(\bar{x}_i,\bar{y}_i,\bar{z}_i)$ are the coordinates of the centroid $\bar{o}_i$, and $(\bar{x},\bar{y},\bar{z})$ are the coordinates of the centroid $\bar{o}$;
step 2.4: setting an error threshold e according to the point cloud resolution, and constructing the intersection point set J;
if a point (x, y, z) in small cube $v_i$ satisfies the following formula, the point is judged to intersect the straight line, the intersection is denoted $p_j$, and all intersections satisfying the formula form the set J:

$$|k_1-k_2|<e,\quad |k_2-k_3|<e,\quad |k_1-k_3|<e$$

wherein

$$k_1=\frac{x-\bar{x}}{\bar{x}_i-\bar{x}},\quad k_2=\frac{y-\bar{y}}{\bar{y}_i-\bar{y}},\quad k_3=\frac{z-\bar{z}}{\bar{z}_i-\bar{z}}$$
step 2.5: when all the points p in the intersection set JjWith the point cloud centroid
Figure FDA00024684834200000110
Is less than the centroid
Figure FDA00024684834200000111
And the center of mass
Figure FDA00024684834200000112
At a distance of (d), the small cube viMarked as "known normal direction" with centroid
Figure FDA00024684834200000113
As viewpoint, the viewpoint is calculated to the small cube viIf the angle between the connecting line of each point and the normal of the point is obtuse, the normal direction is kept, and if the angle is acute, the normal is reversed;
when every point $p_j$ in the intersection set J lies farther from the point cloud centroid $\bar{o}$ than the distance between the centroid $\bar{o}$ and the centroid $\bar{o}_i$, the small cube $v_i$ is likewise marked as "known normal direction"; taking the centroid $\bar{o}$ as the viewpoint, the angle between the line connecting the viewpoint with each point of the small cube $v_i$ and the normal of that point is calculated: if the angle is acute the normal direction is kept, and if it is obtuse the normal is reversed;
when the intersection set J contains both points closer to the point cloud centroid $\bar{o}$ than the distance between $\bar{o}$ and $\bar{o}_i$ and points farther away, the small cube $v_i$ is marked as "unknown normal direction"; it is then determined whether a neighboring small cube is marked "known normal direction": if so, the normals of all points of $v_i$ are made consistent with the normals of that neighboring cube, i.e. the angle between the two is kept acute, and any normal forming an obtuse angle is reversed; if the neighboring small cube is also marked "unknown normal direction", the search continues through its neighbors until a small cube marked "known normal direction" is found, and all the small cubes marked "unknown normal direction" encountered along the way are corrected according to its normal direction;
step 3: calculating the final normal with a multi-scale normal fusion algorithm;
step 4: selecting pre-key points by a curvature threshold, and screening out the key points of the model point cloud and the scene point cloud using the variance of neighborhood normals as the saliency value;
step 4.1: setting a curvature threshold; points whose curvature exceeds the threshold are taken as pre-key points $p_e$, and points whose curvature is below the threshold are discarded as plane points;
step 4.2: calculating the saliency value l of each pre-key point:

$$l=\frac{1}{m}\sum_{i=1}^{m}\left\|n_i-\bar{n}\right\|^{2}$$

where $n_i$ is the normal of the i-th neighborhood point of the pre-key point $p_e$, m is the number of neighborhood points of $p_e$, and $\bar{n}$ is the mean of the normals of all points in the neighborhood;
step 4.3: judging whether the saliency value of each pre-key point is the maximum within its neighborhood; if not, the point becomes a non-pre-key point;
step 4.4: sorting the saliency values in ascending order to obtain the minimum saliency $l_{min}^{M}$ and the maximum saliency $l_{max}^{M}$ of the pre-key points in the model point cloud, and the minimum saliency $l_{min}^{S}$ and the maximum saliency $l_{max}^{S}$ of the pre-key points in the scene point cloud;
step 4.5: taking the intersection of the scene and model saliency intervals, $[\max(l_{min}^{M},l_{min}^{S}),\ \min(l_{max}^{M},l_{max}^{S})]$, and taking the pre-key points of the model point cloud and of the scene point cloud whose saliency falls within this interval as the key points, denoted $p_k^{M}$ and $p_k^{S}$ respectively (see the key-point screening sketch after this claim);
step 5: mapping the neighborhood points of each key point into the key point's local coordinate system to compute the three-dimensional spatial distribution of the neighborhood points, and constructing the voxel shape descriptor;
step 6: searching the correspondences between the model point cloud and the scene point cloud by distance-weighted Hough voting to complete point cloud target recognition.
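
For illustration, a minimal numpy sketch of the key-point screening of steps 4.1-4.5 above; the per-point curvature and the neighborhood index lists (neigh_idx) are assumed to be precomputed (e.g. curvature from the PCA eigenvalues of step 1), and reading the saliency as the neighborhood normal variance follows step 4.2.

```python
import numpy as np

def select_keypoints(normals, curvature, neigh_idx, curv_thresh):
    # Step 4.1: points above the curvature threshold become pre-key points p_e.
    pre = np.where(curvature > curv_thresh)[0]
    # Step 4.2: saliency l = mean squared deviation of the neighborhood
    # normals from their mean (the neighborhood normal variance).
    saliency = np.full(len(normals), -np.inf)
    for i in pre:
        nb = normals[neigh_idx[i]]
        saliency[i] = np.mean(np.sum((nb - nb.mean(axis=0)) ** 2, axis=1))
    # Step 4.3: keep a pre-key point only if its saliency is the maximum
    # within its own neighborhood.
    keys = np.array([i for i in pre
                     if saliency[i] >= saliency[neigh_idx[i]].max()], dtype=int)
    return keys, saliency

def shared_interval_keypoints(keys_m, sal_m, keys_s, sal_s):
    # Steps 4.4-4.5: intersect the model/scene saliency intervals and keep
    # only the key points whose saliency falls inside the shared interval.
    lo = max(sal_m[keys_m].min(), sal_s[keys_s].min())
    hi = min(sal_m[keys_m].max(), sal_s[keys_s].max())
    pk_m = keys_m[(sal_m[keys_m] >= lo) & (sal_m[keys_m] <= hi)]
    pk_s = keys_s[(sal_s[keys_s] >= lo) & (sal_s[keys_s] <= hi)]
    return pk_m, pk_s
```

Restricting both clouds to the shared saliency interval discards key points that can have no plausible counterpart on the other side, which keeps the later correspondence search smaller.
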
2. The method of claim 1, wherein searching the correspondences between the model point cloud and the scene point cloud by distance-weighted Hough voting in step 6 specifically comprises:
step 6.1: for each scene point cloud key point $p_k^{S}$, searching the model point cloud for its nearest point $p_k^{M}$ in terms of the Euclidean distance between feature descriptors; setting a distance threshold, and if the Euclidean distance between the two feature descriptors is below the threshold, taking $(p_k^{S},p_k^{M})$ as a group of valid correspondences; otherwise the key point has no corresponding point and the next key point of the scene point cloud is processed;
step 6.2: continuing to search for possible corresponding points of the key point $p_k^{S}$: if the Euclidean distance of the second-nearest feature descriptor exceeds the distance threshold, or exceeds twice the Euclidean distance of the nearest feature descriptor, the calculation stops; otherwise the pair is kept as a further group of initial correspondences, the third neighbor ordered by feature descriptor distance is examined in the same way, and the iteration repeats until all correspondences satisfying the distance threshold have been found as the initial correspondences;
step 6.3: calculating, for each key point $p_k^{M}$ in the model point cloud, the global reference vector $v_{g}^{M}$ to the center of gravity $c_m$ of the model:

$$v_{g}^{M}=c_m-p_k^{M}$$
Step 6.4: the obtained global reference vector
Figure FDA0002468483420000036
Converting the local coordinate system of the key point to obtain a model local reference vector
Figure FDA0002468483420000037
Figure FDA0002468483420000038
Wherein,
Figure FDA0002468483420000039
is a rotation transformation matrix;
Figure FDA00024684834200000310
Figure FDA00024684834200000311
are respectively key points
Figure FDA00024684834200000312
And unit feature vectors under a local coordinate system constructed by the points in the neighborhood;
step 6.5: transferring the local reference vector of each key point $p_k^{M}$ in the model point cloud to its corresponding point $p_k^{S}$ in the scene point cloud, forming groups of corresponding local reference vectors; because the local coordinate system is rotation invariant, the local vectors of corresponding key points in the scene and model point clouds are equal, $v_{l}^{S}=v_{l}^{M}$;
step 6.6: converting the local reference vectors obtained in the scene into the global coordinate system of the scene point cloud to obtain the global vectors $v_{g}^{S}$:

$$v_{g}^{S}=R_{LG}^{S}\,v_{l}^{S}+p_k^{S}$$

where $R_{LG}^{S}=(v_1\ v_2\ v_3)$ is the rotation matrix and $v_1$, $v_2$, $v_3$ are the unit eigenvectors of the local coordinate system constructed from the key point $p_k^{S}$ and the points in its neighborhood;
step 6.7: dividing the scene point cloud space into voxels of equal size; if the end point of a global vector $v_{g}^{S}$ falls inside a voxel, 1 is added to that voxel's vote value, with the distance-dependent vote increment computed by a kernel of the distance between the vector end point and the voxel center;
step 6.8: searching for the voxel with the most votes, keeping the correspondences associated with that voxel as the final correspondences, and eliminating the other correspondences;
step 6.9: setting a ratio threshold; when the ratio of the number of final correspondences to the number of initial correspondences exceeds the threshold, the model point cloud is recognized in the scene point cloud; the transformation is then solved from the final correspondences with a random sample consensus registration algorithm, the model point cloud is transformed into the scene point cloud, and the point of the scene point cloud closest to each transformed point is marked as the recognized result (see the voting sketch after this claim).
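
A sketch of the voting stage of this claim, under stated assumptions: model_frames / scene_frames hold each key point's local axes as matrix columns (from step 5.1), corrs is the initial correspondence list from steps 6.1-6.2, voxel is the voting-voxel size (10 times the resolution in the described experiments), and the distance-dependent kernel, given in the original only as an image, is replaced here by an assumed linear fall-off toward the voxel boundary.

```python
import numpy as np

def hough_vote(model_keys, model_frames, scene_keys, scene_frames,
               corrs, model_centroid, voxel):
    votes, endpoints = {}, []
    for i, j in corrs:
        v_g_m = model_centroid - model_keys[i]   # step 6.3: global reference vector
        v_l = model_frames[i].T @ v_g_m          # step 6.4: rotate into the local frame
        # Step 6.5: rotation invariance, so the scene local vector equals v_l;
        # step 6.6: rotate back out with the scene frame, anchored at the key point.
        v_g_s = scene_frames[j] @ v_l + scene_keys[j]
        endpoints.append(v_g_s)
        cell = tuple(np.floor(v_g_s / voxel).astype(int))
        center = (np.asarray(cell) + 0.5) * voxel
        d = np.linalg.norm(v_g_s - center)       # distance to the voxel center
        w = 1.0 + (1.0 - d / voxel)              # assumed distance-weighted kernel
        votes[cell] = votes.get(cell, 0.0) + w
    best = max(votes, key=votes.get)             # step 6.8: most-voted voxel
    return [c for c, e in zip(corrs, endpoints)
            if tuple(np.floor(e / voxel).astype(int)) == best]
```

Every consistent correspondence predicts roughly the same model centroid position in the scene, so the true correspondences pile their votes into one voxel while wrong ones scatter; keeping only the winning voxel's correspondences is what removes the outliers seen in fig. 4.
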
3. A method of voxel shape descriptor based point cloud identification as claimed in claim 1 or 2, wherein constructing the voxel shape descriptor in step 5 specifically comprises:
step 5.1: after unit orthogonalization of the eigenvectors at the key point, establishing the local coordinate system of the key point with the key point as the coordinate origin, the eigenvector $v_1$ corresponding to the largest eigenvalue $\lambda_1$ as the x-axis, the eigenvector $v_2$ corresponding to the second largest eigenvalue $\lambda_2$ as the y-axis, and the eigenvector $v_3$ corresponding to the smallest eigenvalue $\lambda_3$ as the z-axis;
step 5.2: in the local coordinate system of the key point, establishing a cube of side length 25s centered on the key point, s being the point cloud resolution;
step 5.3: dividing the length, width and height of the cube into 5 equal parts, forming 125 small cubic subspaces;
step 5.4: counting the number of point cloud points in each small cube and stretching the counts into a 125-dimensional column vector in x, y, z order; traversing all key points of the source point cloud and the target point cloud to obtain the feature descriptor sets of the source and target point clouds (see the descriptor sketch after this claim).
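
A sketch of the descriptor construction of this claim, assuming s is the point cloud resolution and neighbors already contains the points gathered around the key point; the eigenvector sign disambiguation a full implementation would need is omitted for brevity.

```python
import numpy as np

def voxel_shape_descriptor(keypoint, neighbors, s):
    # Step 5.1: local frame from the eigenvectors of the neighborhood
    # covariance (orthonormal for a symmetric matrix);
    # x-axis = v1 (largest eigenvalue), y = v2, z = v3 (smallest).
    diff = neighbors - neighbors.mean(axis=0)
    _, eigvecs = np.linalg.eigh(diff.T @ diff)   # eigenvalues in ascending order
    R = eigvecs[:, ::-1]                         # columns reordered to v1, v2, v3
    # Step 5.2: express the neighbors in the key point's local coordinate system.
    local = (neighbors - keypoint) @ R
    # Steps 5.2-5.4: a cube of side 25*s split 5x5x5 into cells of side 5*s;
    # count the points falling in each cell.
    cells = np.floor((local + 12.5 * s) / (5.0 * s)).astype(int)
    desc = np.zeros((5, 5, 5))
    inside = np.all((cells >= 0) & (cells < 5), axis=1)
    for c in cells[inside]:
        desc[tuple(c)] += 1
    # Step 5.4: stretch into a 125-dimensional vector in x, y, z order.
    return desc.ravel()
```
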
4. A method of voxel shape descriptor based point cloud identification as claimed in claim 1 or 2, wherein the normal estimation of the points in the point cloud by Euclidean-distance-weighted principal component analysis in step 1 specifically comprises: improving the covariance matrix by adding a distance weight coefficient w, as shown in the following formula:

$$E_{3\times 3}=\sum_{j=1}^{m} w_j\,(p_i^{j}-\bar{p})(p_i^{j}-\bar{p})^{T}$$

where the distance weight coefficient $w_j$ is determined by the distance $d_j$ and the neighborhood radius r; $p_i^{j}$ is a neighborhood point of $p_i$; m is the number of neighborhood points; $\bar{p}$ is the center of gravity of the neighborhood of the point $p_i$; and $d_j$ is the distance between the point $p_i^{j}$ and the point $p_i$; $E_{3\times 3}$ is subjected to eigendecomposition and the eigenvector corresponding to the smallest eigenvalue is taken as the normal vector of the point $p_i$ (see the sketch after this claim).
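
A sketch of the weighted normal estimation of this claim; the linear fall-off w_j = (r - d_j)/r is an assumed form, since the claim states only that the weight depends on the distance and the neighborhood radius.

```python
import numpy as np

def weighted_normal(p_i, neighbors, r):
    # Distances d_j from the query point p_i to its neighborhood points.
    d = np.linalg.norm(neighbors - p_i, axis=1)
    # Assumed linear fall-off weight w_j = (r - d_j) / r.
    w = np.clip((r - d) / r, 0.0, None)
    # Weighted 3x3 covariance E about the neighborhood center of gravity.
    diff = neighbors - neighbors.mean(axis=0)
    E = (w[:, None] * diff).T @ diff
    # Eigenvector of the smallest eigenvalue = normal vector of p_i.
    _, eigvecs = np.linalg.eigh(E)
    return eigvecs[:, 0]
```

With r a small multiple of the point cloud resolution, closer neighbors dominate E, which is what makes the estimate less sensitive to uneven sampling than plain PCA.
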
5. The method of claim 3, wherein the normal estimation of the points in the point cloud by Euclidean-distance-weighted principal component analysis in step 1 specifically comprises: improving the covariance matrix by adding a distance weight coefficient w, as shown in the following formula:

$$E_{3\times 3}=\sum_{j=1}^{m} w_j\,(p_i^{j}-\bar{p})(p_i^{j}-\bar{p})^{T}$$

where the distance weight coefficient $w_j$ is determined by the distance $d_j$ and the neighborhood radius r; $p_i^{j}$ is a neighborhood point of $p_i$; m is the number of neighborhood points; $\bar{p}$ is the center of gravity of the neighborhood of the point $p_i$; and $d_j$ is the distance between the point $p_i^{j}$ and the point $p_i$; $E_{3\times 3}$ is subjected to eigendecomposition and the eigenvector corresponding to the smallest eigenvalue is taken as the normal vector of the point $p_i$.
6. A method of voxel shape descriptor based point cloud identification as claimed in claim 1 or 2, wherein calculating the final normal with the multi-scale normal fusion algorithm in step 3 specifically comprises: taking 3 equally spaced neighborhood radii $r_1=3s$, $r_2=5s$ and $r_3=7s$, where s is the point cloud resolution, computing the weighted average of the normals estimated at the 3 neighborhood radii according to the following formula, and taking the average n as the final normal:

$$n=\frac{w_1 n_1+w_2 n_2+w_3 n_3}{w_1+w_2+w_3}$$

where $n_1$, $n_2$, $n_3$ are the normals obtained by steps 1 and 2 with the radii $r_1$, $r_2$, $r_3$ respectively, and $w_1$, $w_2$, $w_3$ are the fusion weights (see the fusion sketch after this claim).
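
A sketch of the three-scale fusion of this claim, reusing the weighted_normal sketch given after claim 4; equal fusion weights are assumed here, because the averaging formula appears in the original only as an image.

```python
import numpy as np

def fused_normal(p_i, cloud, s, w=(1.0, 1.0, 1.0)):
    # Radii r1 = 3s, r2 = 5s, r3 = 7s, s being the point cloud resolution.
    normals = []
    for r in (3 * s, 5 * s, 7 * s):
        nb = cloud[np.linalg.norm(cloud - p_i, axis=1) <= r]
        n = weighted_normal(p_i, nb, r)      # sketch from claim 4
        if normals and np.dot(n, normals[0]) < 0:
            n = -n                           # align signs before averaging
        normals.append(n)
    # Weighted average of the three normals, renormalized to unit length.
    n = sum(wk * nk for wk, nk in zip(w, normals))
    return n / np.linalg.norm(n)
```
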
7. The method of claim 3, wherein calculating the final normal with the multi-scale normal fusion algorithm in step 3 specifically comprises: taking 3 equally spaced neighborhood radii $r_1=3s$, $r_2=5s$ and $r_3=7s$, where s is the point cloud resolution, computing the weighted average of the normals estimated at the 3 neighborhood radii according to the following formula, and taking the average n as the final normal:

$$n=\frac{w_1 n_1+w_2 n_2+w_3 n_3}{w_1+w_2+w_3}$$

where $n_1$, $n_2$, $n_3$ are the normals obtained by steps 1 and 2 with the radii $r_1$, $r_2$, $r_3$ respectively, and $w_1$, $w_2$, $w_3$ are the fusion weights.
8. The method of claim 4, wherein calculating the final normal with the multi-scale normal fusion algorithm in step 3 specifically comprises: taking 3 equally spaced neighborhood radii $r_1=3s$, $r_2=5s$ and $r_3=7s$, where s is the point cloud resolution, computing the weighted average of the normals estimated at the 3 neighborhood radii according to the following formula, and taking the average n as the final normal:

$$n=\frac{w_1 n_1+w_2 n_2+w_3 n_3}{w_1+w_2+w_3}$$

where $n_1$, $n_2$, $n_3$ are the normals obtained by steps 1 and 2 with the radii $r_1$, $r_2$, $r_3$ respectively, and $w_1$, $w_2$, $w_3$ are the fusion weights.
9. The method of claim 5, wherein calculating the final normal with the multi-scale normal fusion algorithm in step 3 specifically comprises: taking 3 equally spaced neighborhood radii $r_1=3s$, $r_2=5s$ and $r_3=7s$, where s is the point cloud resolution, computing the weighted average of the normals estimated at the 3 neighborhood radii according to the following formula, and taking the average n as the final normal:

$$n=\frac{w_1 n_1+w_2 n_2+w_3 n_3}{w_1+w_2+w_3}$$

where $n_1$, $n_2$, $n_3$ are the normals obtained by steps 1 and 2 with the radii $r_1$, $r_2$, $r_3$ respectively, and $w_1$, $w_2$, $w_3$ are the fusion weights.
CN202010340995.1A 2020-04-27 2020-04-27 Point cloud identification method based on voxel shape descriptor Active CN111553409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010340995.1A CN111553409B (en) 2020-04-27 2020-04-27 Point cloud identification method based on voxel shape descriptor


Publications (2)

Publication Number Publication Date
CN111553409A true CN111553409A (en) 2020-08-18
CN111553409B CN111553409B (en) 2022-11-01

Family

ID=72004431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010340995.1A Active CN111553409B (en) 2020-04-27 2020-04-27 Point cloud identification method based on voxel shape descriptor

Country Status (1)

Country Link
CN (1) CN111553409B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810751A (en) * 2014-01-29 2014-05-21 辽宁师范大学 Three-dimensional auricle point cloud shape feature matching method based on IsoRank algorithm
US20170046868A1 (en) * 2015-08-14 2017-02-16 Samsung Electronics Co., Ltd. Method and apparatus for constructing three dimensional model of object
CN105243374A (en) * 2015-11-02 2016-01-13 湖南拓视觉信息技术有限公司 Three-dimensional human face recognition method and system, and data processing device applying same
CN105910556A (en) * 2016-04-13 2016-08-31 中国农业大学 Leaf area vertical distribution information extraction method
CN106846387A (en) * 2017-02-09 2017-06-13 中北大学 Point cloud registration method based on neighborhood rotary volume
CN108830888A (en) * 2018-05-24 2018-11-16 中北大学 Thick matching process based on improved multiple dimensioned covariance matrix Feature Descriptor
CN108898128A (en) * 2018-07-11 2018-11-27 宁波艾腾湃智能科技有限公司 A kind of method for anti-counterfeit and equipment matching digital three-dimemsional model by photo
CN109887015A (en) * 2019-03-08 2019-06-14 哈尔滨工程大学 A kind of point cloud autoegistration method based on local surface feature histogram
CN110222642A (en) * 2019-06-06 2019-09-10 上海黑塞智能科技有限公司 A kind of planar architectural component point cloud contour extraction method based on global figure cluster

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WEI GUAN; WENTAO LI; YAN REN, 2018 Chinese Control and Decision Conference (CCDC) *
LIU Dandan; FENG Dongqing: "3D point cloud recognition of substation equipment based on surface features", China Master's Theses Full-text Database, Engineering Science and Technology II *
LI Zisheng; DING Guofu: "Research on key technologies of point cloud data processing and feature recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology *
LU Jun; SHAO Hongxu; WANG Wei; FAN Zhejun; XIA Guihua: "Point cloud registration method based on key point feature matching", Transactions of Beijing Institute of Technology *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446952B (en) * 2020-11-06 2024-01-26 杭州易现先进科技有限公司 Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium
CN112446952A (en) * 2020-11-06 2021-03-05 杭州易现先进科技有限公司 Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium
CN112365592A (en) * 2020-11-10 2021-02-12 大连理工大学 Local environment feature description method based on bidirectional elevation model
CN112766037A (en) * 2020-12-14 2021-05-07 南京工程学院 3D point cloud target identification and positioning method based on maximum likelihood estimation method
CN112766037B (en) * 2020-12-14 2024-04-19 南京工程学院 3D point cloud target identification and positioning method based on maximum likelihood estimation method
CN112669385A (en) * 2020-12-31 2021-04-16 华南理工大学 Industrial robot workpiece identification and pose estimation method based on three-dimensional point cloud characteristics
CN112669385B (en) * 2020-12-31 2023-06-13 华南理工大学 Industrial robot part identification and pose estimation method based on three-dimensional point cloud features
CN113111741A (en) * 2021-03-27 2021-07-13 西北工业大学 Assembly state identification method based on three-dimensional feature points
CN113111741B (en) * 2021-03-27 2024-05-07 西北工业大学 Assembly state identification method based on three-dimensional feature points
CN113435256B (en) * 2021-06-04 2022-04-26 华中科技大学 Three-dimensional target identification method and system based on geometric consistency constraint
CN113435256A (en) * 2021-06-04 2021-09-24 华中科技大学 Three-dimensional target identification method and system based on geometric consistency constraint
CN113469195A (en) * 2021-06-25 2021-10-01 浙江工业大学 Target identification method based on self-adaptive color fast point feature histogram
CN113469195B (en) * 2021-06-25 2024-02-06 浙江工业大学 Target identification method based on self-adaptive color quick point feature histogram
CN114118181A (en) * 2021-08-26 2022-03-01 西北大学 High-dimensional regression point cloud registration method, system, computer equipment and application
CN114118181B (en) * 2021-08-26 2022-06-21 西北大学 High-dimensional regression point cloud registration method, system, computer equipment and application
CN113807366B (en) * 2021-09-16 2023-08-08 电子科技大学 Point cloud key point extraction method based on deep learning
CN113807366A (en) * 2021-09-16 2021-12-17 电子科技大学 Point cloud key point extraction method based on deep learning
CN116416305A (en) * 2022-09-17 2023-07-11 上海交通大学 Multi-instance pose estimation method based on optimized sampling five-dimensional point pair characteristics
CN116416305B (en) * 2022-09-17 2024-02-13 上海交通大学 Multi-instance pose estimation method based on optimized sampling five-dimensional point pair characteristics

Also Published As

Publication number Publication date
CN111553409B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN111553409B (en) Point cloud identification method based on voxel shape descriptor
CN109887015B (en) Point cloud automatic registration method based on local curved surface feature histogram
Zhong Intrinsic shape signatures: A shape descriptor for 3D object recognition
CN102236794B (en) Recognition and pose determination of 3D objects in 3D scenes
CN104715254B (en) A kind of general object identification method merged based on 2D and 3D SIFT features
US8994723B2 (en) Recognition and pose determination of 3D objects in multimodal scenes
JP5705147B2 (en) Representing 3D objects or objects using descriptors
Tazir et al. CICP: Cluster Iterative Closest Point for sparse–dense point cloud registration
CN111444767B (en) Pedestrian detection and tracking method based on laser radar
CN114972459B (en) Point cloud registration method based on low-dimensional point cloud local feature descriptor
CN108376408A (en) A kind of three dimensional point cloud based on curvature feature quickly weights method for registering
CN114677418B (en) Registration method based on point cloud feature point extraction
CN110930456A (en) Three-dimensional identification and positioning method of sheet metal part based on PCL point cloud library
CN112116553B (en) Passive three-dimensional point cloud model defect identification method based on K-D tree
CN103927511A (en) Image identification method based on difference feature description
CN114200477A (en) Laser three-dimensional imaging radar ground target point cloud data processing method
CN108537805A (en) A kind of target identification method of feature based geometry income
CN111783722B (en) Lane line extraction method of laser point cloud and electronic equipment
CN114358166B (en) Multi-target positioning method based on self-adaptive k-means clustering
CN106951873B (en) Remote sensing image target identification method
Himstedt et al. Geometry matters: Place recognition in 2D range scans using Geometrical Surface Relations
Liu et al. Robust 3-d object recognition via view-specific constraint
Lu et al. Matching algorithm of 3D point clouds based on multiscale features and covariance matrix descriptors
CN115713627A (en) Plane feature extraction method based on normal vector segmentation and region growing
CN111626096B (en) Three-dimensional point cloud data interest point extraction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant