CN111553409A - Point cloud identification method based on voxel shape descriptor - Google Patents
- Publication number
- CN111553409A (application CN202010340995.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- point
- normal
- points
- neighborhood
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Abstract
The invention belongs to the technical field of computer vision and three-dimensional measurement, and specifically relates to a point cloud identification method based on a voxel shape descriptor. A normal estimation method based on improved PCA reduces the influence of noise on the normal and effectively extracts the local neighborhood information of points; non-maximum suppression is applied to the variance of the local normals, so the extracted key points are highly distinctive and have low overlap. The method takes the variance of a key point's neighborhood normals as its saliency value, and intersects the saliency ranges of the key points of the source and target point clouds to pre-filter the key points and accelerate feature matching. The invention proposes a voxel shape descriptor: by mapping the neighborhood points of a key point into the key point's local coordinate system and counting their three-dimensional spatial distribution, the feature descriptor is calculated quickly and is well suited to describing large neighborhoods of dense point clouds.
Description
Technical Field
The invention belongs to the technical field of computer vision and three-dimensional measurement, and particularly relates to a point cloud identification method based on a voxel shape descriptor.
Background
Target recognition in a scene is fundamental research in the field of computer vision and has important applications in numerous fields, such as intelligent monitoring, automatic assembly, mobile operation, robotics and medical analysis. Compared with a traditional two-dimensional image, a three-dimensional point cloud provides more geometric information and is insensitive to rotation and illumination, so three-dimensional point clouds have potential advantages for the object recognition problem. In addition, with the rapid development of three-dimensional scanning technology, the acquisition of point cloud data has become convenient and fast, which gives three-dimensional object recognition irreplaceable value in the field of computer vision. Object recognition methods proposed to date often need to extract local features of an object from a large amount of point cloud data. Because point cloud scene data are large in range, scale and volume, each object contains many local features, and each local feature corresponds to a high-dimensional description vector, leading to problems such as a large amount of calculation and low calculation efficiency. On the other hand, actually measured three-dimensional point cloud scene data are susceptible to noise and unevenly distributed in density, so the accuracy of differential geometric features such as normal vectors and curvatures cannot be guaranteed. Aiming at these problems, the invention provides a point cloud identification method based on a voxel shape descriptor, in which the influence of noise on the normal is reduced by a point cloud normal calculation method with multi-scale neighborhoods and Euclidean distance weighting.
In addition, by selecting key points and improving the construction of the descriptor in several respects, the speed and accuracy of identification are effectively improved.
Disclosure of Invention
The invention aims to provide a point cloud identification method based on a voxel shape descriptor which reduces the influence of noise on the normal and improves the speed and accuracy of identification.
The purpose of the invention is realized by the following technical scheme: the method comprises the following steps:
step 1: carrying out normal estimation on points in the point cloud by a Euclidean distance weighted principal component analysis method;
step 2: correcting the obtained normals using a normal direction ambiguity decision algorithm based on region growing;
step 2.1: calculating a minimum bounding box of the point cloud to be detected, and determining coordinate ranges of the point cloud in x, y and z dimensions;
step 2.2: dividing the minimum bounding box into a number of small cubes of equal size, where the size of each small cube is set according to the resolution of the point cloud; establishing index numbers for the small cubes according to their spatial order, namely in order of x, y and z from small to large, and marking small cubes containing no points as invalid;
step 2.3: for each valid small cube v_i, calculating the straight line through its centroid c_i and the centroid c of the entire point cloud:
(x - x_i)/(x_c - x_i) = (y - y_i)/(y_c - y_i) = (z - z_i)/(z_c - z_i)
where (x_i, y_i, z_i) are the coordinates of the centroid c_i and (x_c, y_c, z_c) are the coordinates of the centroid c;
step 2.4: setting an error threshold e according to the point cloud resolution, and constructing an intersection point set J;
for a point (x, y, z) in a small cube v_i, let k1 = (x - x_i)/(x_c - x_i), k2 = (y - y_i)/(y_c - y_i) and k3 = (z - z_i)/(z_c - z_i); if k1, k2 and k3 agree to within the threshold e, the point in the small cube v_i is judged to intersect the straight line, and the intersection is denoted p_j; all intersection points satisfying this condition form the set J;
step 2.5: when every point p_j in the intersection set J is closer to the point cloud centroid c than the cube centroid c_i is, i.e. |p_j - c| < |c_i - c| for all p_j, the small cube v_i is marked "known normal direction"; taking the centroid c as the viewpoint, for each point in v_i compute the angle between the line from the viewpoint to the point and the point's normal: if the angle is obtuse the normal direction is kept, and if it is acute the normal is reversed;
when every point p_j in J satisfies |p_j - c| > |c_i - c|, the small cube v_i is marked "known normal direction"; with the centroid c as viewpoint, if the angle between the viewpoint-to-point line and the point's normal is acute the normal direction is kept, and if it is obtuse the normal is reversed;
when some points of J satisfy |p_j - c| < |c_i - c| and others satisfy |p_j - c| > |c_i - c|, the small cube v_i is marked "unknown normal direction"; check whether a neighboring small cube is marked "known normal direction": if so, the normals of all points in v_i are made consistent with the normals of that neighboring cube, i.e. the angle between the two is kept acute, and a normal forming an obtuse angle is reversed; if the neighboring small cube is also marked "unknown normal direction", continue searching its neighbors until a small cube marked "known normal direction" is found, and correct all small cubes marked "unknown normal direction" found along the way according to its normal direction;
and step 3: calculating a final normal by using a multi-scale normal fusion algorithm;
and 4, step 4: selecting pre-key points through a curvature threshold, and screening out key points of model point cloud and target point cloud by taking a neighborhood normal variance as a significant value;
step 4.1: setting a curvature threshold; points whose curvature is larger than the curvature threshold are taken as pre-key points p_e, and points whose curvature is smaller than the curvature threshold are discarded as plane points;
step 4.2: calculating the saliency value l of each pre-key point as the variance of its neighborhood normals:
l = (1/m) * sum_{i=1..m} ||n_i - n_mean||^2
where n_i is the normal of a neighborhood point of the pre-key point p_e, m is the number of neighborhood points, and n_mean is the mean normal of all points in the neighborhood;
step 4.3: judging whether the significant value of the pre-key point is the maximum value in the neighborhood, if not, changing the point into a non-pre-key point;
step 4.4: sorting the saliency values from small to large to obtain the minimum l_min^M and maximum l_max^M of the pre-key-point saliency values in the model point cloud, and the minimum l_min^S and maximum l_max^S of the pre-key-point saliency values in the scene point cloud;
Step 4.5: taking the intersection of the scene and model saliency intervals, [max(l_min^M, l_min^S), min(l_max^M, l_max^S)]; the pre-key points of the model point cloud and the scene point cloud whose saliency values fall in this interval are taken as key points, denoted K^M and K^S respectively;
and 5: calculating the three-dimensional spatial distribution of the neighborhood points by mapping the neighborhood points of the key points to the local coordinate system of the key points, and constructing a voxel shape descriptor;
step 6: and searching the corresponding relation between the model point cloud and the scene point cloud through distance weighted Hough voting to complete point cloud target identification.
The present invention may further comprise:
the method for finding the corresponding relationship between the model point cloud and the scene point cloud through distance weighted hough voting in the step 6 specifically comprises the following steps:
step 6.1: for each scene point cloud key point, searching the model point cloud for its nearest neighbor under the Euclidean distance of the feature descriptors; setting a distance threshold, if the Euclidean distance between the two feature descriptors is less than the distance threshold, the pair is kept as a valid correspondence; otherwise the key point has no corresponding point, and the next key point of the scene point cloud is calculated;
step 6.2: continuing to search for further possible corresponding points of the key point: if the Euclidean distance of the second-nearest feature descriptor is larger than the distance threshold, or exceeds the distance of the nearest feature descriptor by more than one time, stop the calculation; otherwise keep the pair as another initial correspondence, continue with the third neighbor ordered by descriptor distance, and iterate until all correspondences satisfying the distance threshold have been found as the initial correspondences;
step 6.3: calculating, for each key point p_i^m in the model point cloud, the global reference vector between it and the model centre of gravity c_m:
V_i^{m,g} = c_m - p_i^m
step 6.4: converting the obtained global reference vector V_i^{m,g} into the local coordinate system of the key point to obtain the model local reference vector:
V_i^{m,l} = R_i^m * V_i^{m,g}
where R_i^m is the rotation transformation matrix whose rows are the unit eigenvectors of the local coordinate system constructed from the key point p_i^m and the points in its neighborhood;
step 6.5: each key point in the model point cloud and its corresponding points in the scene point cloud form several corresponding local reference vectors; because the local coordinate system is rotation invariant, the local vectors of corresponding key points in the scene point cloud and the model point cloud are equal: V_j^{s,l} = V_i^{m,l};
step 6.6: converting the local reference vector obtained in the scene into the global coordinate system of the scene point cloud to obtain the global vector:
V_j^{s,g} = (R_j^s)^{-1} * V_j^{s,l}
where R_j^s is the rotation matrix whose rows are the unit eigenvectors of the local coordinate system constructed from the scene key point p_j^s and the points in its neighborhood;
step 6.7: dividing the scene point cloud space into voxels of equal size; if the end point of a global vector falls in a certain voxel, the vote value of that voxel is increased by 1, scaled by a distance-dependent weight;
step 6.8: searching the voxel with the most votes, keeping the corresponding relation related to the voxel as a final corresponding relation, and eliminating other corresponding relations;
step 6.9: setting a ratio threshold; when the ratio of the final correspondences to the initial correspondences is larger than the ratio threshold, the model point cloud is identified in the scene point cloud; the transformation is then solved from the final correspondences by a random sample consensus registration algorithm, the model point cloud is transformed into the scene point cloud, and the scene point closest to each transformed point is marked as the identified result.
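The voxel voting of steps 6.7 and 6.8 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the helper name `hough_vote`, the use of NumPy, and the uniform default weights are our assumptions; the patent's distance-dependent vote weight (whose exact formula is not given in this text) can be supplied through `weights`.

```python
import numpy as np
from collections import defaultdict

def hough_vote(endpoints, voxel_size, weights=None):
    """Accumulate votes over equal-size voxels: each global-vector
    end point votes for the voxel it falls in (step 6.7), and the
    voxel with the most votes is selected (step 6.8)."""
    if weights is None:
        weights = np.ones(len(endpoints))      # uniform votes (assumption)
    votes = defaultdict(float)
    bins = np.floor(np.asarray(endpoints) / voxel_size).astype(int)
    for row, w in zip(bins, weights):
        votes[tuple(int(v) for v in row)] += float(w)
    best = max(votes, key=votes.get)           # winning voxel
    return best, votes
```

Correspondences whose vectors land in the winning voxel would then be kept as the final correspondences, and the rest discarded.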
The method for constructing the voxel shape descriptor in the step 5 specifically comprises the following steps:
step 5.1: after unit orthogonalization of the eigenvectors at the key point, taking the key point as the coordinate origin, the eigenvector v1 corresponding to the largest eigenvalue lambda1 as the x-axis, the eigenvector v2 corresponding to the second-largest eigenvalue lambda2 as the y-axis, and the eigenvector v3 corresponding to the smallest eigenvalue lambda3 as the z-axis, establishing the local coordinate system of the key point;
step 5.2: in the local coordinate system of the key point, establishing a cube with side length 25*s centred at the key point;
step 5.3: the length, width and height of the cube are each divided evenly into 5 parts, forming 125 small cubic subspaces;
step 5.4: counting the number of points of the point cloud in each small cube and flattening the counts into a 125-dimensional column vector in x, y, z order; traversing all key points of the source point cloud and the target point cloud to obtain the feature descriptor sets of the source and target point clouds.
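The descriptor construction of steps 5.2 to 5.4 can be sketched as follows, assuming the neighborhood points have already been transformed into the key point's local coordinate system (step 5.1). The function name and the NumPy-based binning are our illustration, not the patent's code.

```python
import numpy as np

def voxel_shape_descriptor(neighbors_local, s, n=5, side=25.0):
    """125-dim voxel shape descriptor: neighborhood points expressed in
    the keypoint's local frame are binned into an n*n*n grid inside a
    cube of side 25*s centred at the keypoint, then flattened in
    x, y, z order."""
    half = side * s / 2.0                 # cube half-extent
    cell = side * s / n                   # 5 cells per axis -> 125 subspaces
    hist = np.zeros((n, n, n))
    for p in np.asarray(neighbors_local, float):
        idx = np.floor((p + half) / cell).astype(int)
        if np.all(idx >= 0) and np.all(idx < n):   # ignore points outside the cube
            hist[tuple(idx)] += 1
    return hist.ravel()                   # flatten in x, y, z order
```

A point at the key point itself lands in the central cell; points farther than 12.5*s along any axis are simply not counted.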
The method for estimating the normals of the points in the point cloud by the Euclidean-distance-weighted principal component analysis method in step 1 is specifically as follows: the covariance matrix is improved by adding a distance weight coefficient w, as in the following formula:
E_{3x3} = (1 / sum_j w_j) * sum_{j=1..m} w_j (p_i^j - p_mean)(p_i^j - p_mean)^T
where the distance weight coefficient w_j decreases with the distance d_j between the neighborhood point p_i^j and the point p_i; r is the neighborhood radius; p_i^j is a neighborhood point of p_i; m is the number of neighborhood points; p_mean is the centre of gravity of the neighborhood of p_i; d_j is the distance between p_i^j and p_i. Performing matrix decomposition on E_{3x3}, the eigenvector corresponding to the smallest eigenvalue is taken as the normal vector of the point p_i.
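A minimal sketch of distance-weighted PCA normal estimation follows. Because the exact weight formula appears only as an image in the original disclosure, the linear fall-off w = (r - d)/r used here is an assumed stand-in for a weight that decreases with distance; the helper name is ours.

```python
import numpy as np

def weighted_pca_normal(p_i, neighbors, r):
    """Distance-weighted PCA normal: neighbors closer to p_i get larger
    weights in the covariance matrix; the eigenvector of the smallest
    eigenvalue of that matrix is taken as the normal."""
    neighbors = np.asarray(neighbors, float)
    d = np.linalg.norm(neighbors - p_i, axis=1)
    w = np.clip((r - d) / r, 0.0, None)        # assumed weight, decreasing in d
    centroid = neighbors.mean(axis=0)          # centre of gravity of neighborhood
    diff = neighbors - centroid
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(0) / w.sum()
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return eigvecs[:, 0]                       # smallest-eigenvalue eigenvector
```

For points sampled from a plane, the returned vector is (up to sign) the plane normal.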
The method for calculating the final normal with the multi-scale normal fusion algorithm in step 3 is specifically as follows: take 3 equally spaced neighborhood radii r1 = 3*s, r2 = 5*s and r3 = 7*s, where s is the point cloud resolution; the weighted average n of the normals calculated at the 3 neighborhood radii is computed and used as the final normal,
where n1, n2, n3 are the normals obtained by step 1 and step 2 at the radii r1, r2 and r3.
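The fusion step can be sketched as below. The patent's exact per-radius weights are not recoverable from this text, so equal weights are assumed for illustration; the function name is ours.

```python
import numpy as np

def fuse_normals(n1, n2, n3, w=(1/3, 1/3, 1/3)):
    """Multi-scale normal fusion: normals estimated at radii 3s, 5s, 7s
    are combined by a weighted average and renormalized to unit length."""
    n = w[0] * np.asarray(n1) + w[1] * np.asarray(n2) + w[2] * np.asarray(n3)
    return n / np.linalg.norm(n)
```

The three inputs are assumed to have consistent orientation (step 2 has already disambiguated their signs), so the average cannot cancel to zero.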
The invention has the beneficial effects that:
the normal estimation method based on the improved PCA reduces the influence degree of noise on the normal, effectively extracts the local neighborhood information of points, performs non-maximum suppression on the variance of the local normal, and the extracted key points have the characteristics of high identification degree and low overlap. The method takes the variance of the neighborhood normal of the key points as a significant value, and calculates the intersection of the significant values of the key points of the source point cloud and the target point cloud to preliminarily extract the intersection of the key points and accelerate the feature matching. The invention provides a voxel shape descriptor, which is suitable for describing a large-range neighborhood of dense point cloud by mapping neighborhood points of key points to a local coordinate system of the key points, counting the three-dimensional spatial distribution of the neighborhood points and quickly calculating the feature descriptor. The nearest neighbor of the feature descriptors of the key points is used for selecting an initial matching relation, and then the model point cloud is searched in the scene point cloud through improved Hough voting for identification, so that the identification rate is increased, and the identification accuracy is improved.
Drawings
FIG. 1 is a diagram of a point cloud identification model according to an embodiment of the invention.
FIG. 2 is a graph comparing results before and after normal redirection in an embodiment of the present invention.
FIG. 3 is a distribution diagram of key points under Gaussian noise according to an embodiment of the present invention.
Fig. 4 is a diagram of the initial correspondence and the final correspondence in the embodiment of the present invention.
FIG. 5 is a diagram illustrating a multi-target identification mapping according to an embodiment of the present invention.
Fig. 6 is a pose estimation result diagram in the embodiment of the present invention.
FIG. 7 is a table of experimental data in an example of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention aims to disclose a point cloud identification method based on a voxel shape descriptor. First, the point cloud normals are calculated by a Euclidean-distance-weighted principal component analysis (PCA) method, the obtained normals are corrected using a normal direction ambiguity decision algorithm based on region growing, and the final normal is calculated using a multi-scale normal fusion algorithm. Then, pre-key points are selected through a curvature threshold, and the key points of the model point cloud and the target point cloud are screened out using the neighborhood normal variance as the saliency value. Next, the three-dimensional spatial distribution of the neighborhood points is counted by mapping the neighborhood points of each key point into the key point's local coordinate system, and the voxel shape descriptor is constructed. Finally, the correspondences between the model point cloud and the scene point cloud are found through distance-weighted Hough voting, completing point cloud target identification. The recognition method effectively reduces the influence of noise on the normal and improves the speed and accuracy of recognition, and it maintains good recognition performance even in complex scenes. The influence of noise on the normal is reduced by a point cloud normal calculation method combining multi-scale neighborhoods and Euclidean distance weighting, and the selection of key points together with several improvements to the descriptor construction effectively increases the speed and accuracy of identification.
The method comprises the following steps:
step 1: and (6) estimating a normal line.
Step 1.1: firstly, the covariance matrix is improved by adding a distance weight coefficient w, as in formula (1):
E_{3x3} = (1 / sum_j w_j) * sum_{j=1..m} w_j (p_i^j - p_mean)(p_i^j - p_mean)^T    (1)
where the weight w_j decreases with the distance d_j between the neighborhood point p_i^j and the point p_i; r is the neighborhood radius; p_i^j is a neighborhood point of p_i; m is the number of neighborhood points; p_mean is the centre of gravity of the neighborhood of p_i; d_j is the distance between p_i^j and p_i. Performing matrix decomposition on E_{3x3}, the eigenvector corresponding to the smallest eigenvalue is taken as the normal vector of the point p_i.
Step 1.2: in order to solve the ambiguity problem of normal estimation (the computed normal may point opposite to the true normal, so the normal directions over the whole point cloud cannot be made consistent), a normal direction ambiguity decision algorithm based on region growing is proposed to reorient the normals.
Firstly, the minimum bounding box of the point cloud to be detected is calculated, determining the coordinate ranges x_min, x_max, y_min, y_max, z_min, z_max of the point cloud in the three dimensions x, y and z, and the minimum bounding box is divided into a number of small cubes of equal size, where the size of each small cube is set according to the resolution of the point cloud. Index numbers are established for the small cubes according to their spatial order (x, y and z from small to large), and small cubes containing no points are marked as invalid.
Then, for the point to be calculated, let the small cube containing it be v_i, the centroid of the point cloud inside v_i be c_i = (x_i, y_i, z_i), and the centroid of the whole point cloud be c = (x_c, y_c, z_c). The straight line through c_i and c is:
(x - x_i)/(x_c - x_i) = (y - y_i)/(y_c - y_i) = (z - z_i)/(z_c - z_i)
Denote the three parts of the line equation by k1, k2 and k3, as in formula (2), formula (3) and formula (4):
k1 = (x - x_i)/(x_c - x_i)    (2)
k2 = (y - y_i)/(y_c - y_i)    (3)
k3 = (z - z_i)/(z_c - z_i)    (4)
Substituting the coordinates of a point in the small cube v_i into formula (2), formula (3) and formula (4) gives k1, k2, k3. Setting an error threshold e (according to the resolution of the point cloud), if k1, k2 and k3 satisfy formula (5), that is, if the three agree to within e, the point (x, y, z) in the small cube v_i is considered to intersect the straight line, and the intersection point is denoted p_j; all the intersections satisfying formula (5) form the set J.
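The intersection test of formulas (2) to (5) can be sketched as follows. This illustration assumes no coordinate of c equals the corresponding coordinate of c_i (otherwise a ratio is undefined); the function name is ours.

```python
import numpy as np

def on_line(p, c_i, c, e):
    """Parametric ratios k1, k2, k3 of formulas (2)-(4); the point p lies
    on the line through the cube centroid c_i and the cloud centroid c
    when the three ratios agree to within the threshold e (formula (5))."""
    k = (np.asarray(p, float) - c_i) / (np.asarray(c, float) - c_i)  # k1, k2, k3
    return bool(abs(k[0] - k[1]) < e and abs(k[1] - k[2]) < e and abs(k[0] - k[2]) < e)
```

Points of each small cube passing this test would be collected into the intersection set J.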
Finally, the normal direction is determined in three cases. When every point p_j in the intersection set J is closer to the point cloud centroid c than the cube centroid c_i is, i.e. |p_j - c| < |c_i - c| for all p_j, the small cube v_i is labeled "known normal direction" and the normal direction is determined as follows: taking the centroid c as the viewpoint, compute the angle between the line from the viewpoint to each point in v_i and that point's normal; if the angle is obtuse the normal direction is kept, and if it is acute the normal is reversed. When every point p_j in J satisfies |p_j - c| > |c_i - c|, the small cube v_i is labeled "known normal direction" and, with c as viewpoint, a normal forming an acute angle with the viewpoint-to-point line is kept and one forming an obtuse angle is reversed. When some points of J satisfy |p_j - c| < |c_i - c| and others satisfy |p_j - c| > |c_i - c|, the small cube v_i is labeled "unknown normal direction" and the normal direction is determined as follows: check whether a neighboring small cube is labeled "known normal direction"; if so, the normals of all points in v_i should be consistent with the normals of that neighboring cube, i.e. the angle between the two is kept acute, and a normal forming an obtuse angle is reversed.
If the neighboring small cube is labeled "unknown normal direction", continue searching its neighbors until a small cube labeled "known normal direction" is found, and correct all the small cubes labeled "unknown normal direction" found along the way according to its normal direction.
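The viewpoint-based sign test used in the first two cases above can be sketched as follows (the region-growing propagation for the "unknown normal direction" case is omitted). The helper name and the vectorized NumPy form are our illustration.

```python
import numpy as np

def orient_normals(points, normals, viewpoint, expect_obtuse=True):
    """Flip normals so the angle between each viewpoint-to-point line and
    the point's normal matches the expected case: with expect_obtuse=True
    an obtuse angle (negative dot product) keeps the normal and an acute
    angle flips it, mirroring the first case; expect_obtuse=False mirrors
    the second case."""
    view_dirs = points - viewpoint                    # viewpoint -> point lines
    dots = np.einsum('ij,ij->i', view_dirs, normals)  # sign encodes acute/obtuse
    flip = dots > 0 if expect_obtuse else dots < 0
    oriented = normals.copy()
    oriented[flip] *= -1.0
    return oriented
```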
Step 1.3: take 3 equally spaced neighborhood radii r1 = 3*s, r2 = 5*s and r3 = 7*s, where s is the point cloud resolution. The weighted average of the normals calculated at the 3 neighborhood radii is given by formula (6), and the average n is taken as the final normal,
where n1, n2, n3 are the normals determined by step 1.1 and step 1.2 at the radii r1, r2 and r3.
Step 2: and (5) searching key points.
Step 2.1: setting a threshold; points whose curvature is larger than the threshold are taken as pre-key points p_e, and points whose curvature is smaller than the threshold are discarded as plane points.
Step 2.2: the variance of the normals in the point's neighborhood is taken as the saliency value l of the pre-key point, as in formula (7):
l = (1/m) * sum_{i=1..m} ||n_i - n_mean||^2    (7)
The saliency value of non-pre-key points is set to 0, and the saliency value of each pre-key point is calculated,
where n_i is the normal of a neighborhood point of the pre-key point p_e, m is the number of neighborhood points, and n_mean is the normal average of all points in the neighborhood.
Step 2.3: and judging whether the significant value of the pre-key point is the maximum value in the neighborhood, if not, discarding the point, namely changing the point into a non-pre-key point.
Step 2.4: the saliency values are sorted from small to large; l_min^M and l_max^M are the minimum and maximum saliency values of the model point cloud pre-key points. The saliency values of the scene point cloud pre-key points are calculated; l_min^S and l_max^S are their minimum and maximum. Take the intersection of the scene and model saliency intervals, [max(l_min^M, l_min^S), min(l_max^M, l_max^S)]; the pre-key points of the model point cloud and the scene point cloud whose saliency values fall in this interval are taken as key points, denoted K^M and K^S respectively.
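Steps 2.2 and 2.4 can be sketched as follows: the saliency of a pre-key point is the variance of its neighborhood normals, and key points are those whose saliency falls in the shared model/scene interval. The function names and NumPy usage are our illustration.

```python
import numpy as np

def saliency(neighborhood_normals):
    """Saliency l of a pre-keypoint: variance of its neighborhood
    normals about their mean (formula (7))."""
    n_bar = neighborhood_normals.mean(axis=0)
    return np.mean(np.sum((neighborhood_normals - n_bar) ** 2, axis=1))

def shared_interval(model_sal, scene_sal):
    """Intersection of model and scene saliency ranges; pre-keypoints
    whose saliency falls inside it are kept as keypoints (step 2.4)."""
    lo = max(model_sal.min(), scene_sal.min())
    hi = min(model_sal.max(), scene_sal.max())
    keep_m = (model_sal >= lo) & (model_sal <= hi)
    keep_s = (scene_sal >= lo) & (scene_sal <= hi)
    return lo, hi, keep_m, keep_s
```

Flat regions give zero saliency (identical normals), so they are never selected.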
And step 3: and calculating a characteristic descriptor.
Step 3.1: establish the local coordinate system of the key point with the key point as the origin. After unit orthogonalization of the key point's eigenvectors, the eigenvector v1 corresponding to the largest eigenvalue lambda1 is the x-axis, the eigenvector v2 corresponding to the second-largest eigenvalue lambda2 is the y-axis, and the eigenvector v3 corresponding to the smallest eigenvalue lambda3 is the z-axis, with the key point as the coordinate origin.
Step 3.2: and under a local coordinate system, a cube with the side length of 25 × s is established by taking the key point as the center, and s is the resolution.
Step 3.3: the length, width and height of the cube are each divided evenly into 5 parts, forming 125 small cubic subspaces.
Step 3.4: count the number of points of the point cloud falling in each small cube, and flatten the counts into a 125-dimensional column vector in x, y, z order. Traversing all key points of the source point cloud and the target point cloud yields the feature descriptor sets of the source and target point clouds.
Step 4: point cloud identification.
Step 4.1: for each scene point cloud key point, search the model point cloud for its nearest point in terms of the Euclidean distance between feature descriptors. Set a threshold; if the Euclidean distance between the two feature descriptors is less than the threshold, record the pair as a valid correspondence; otherwise the key point has no corresponding point, and the next key point of the scene point cloud is processed. Continue searching for further possible corresponding points of the key point: if the Euclidean distance of the next-nearest feature descriptor is larger than the distance threshold or more than double the Euclidean distance of the nearest feature descriptor, stop the calculation; otherwise record the pair as a further initial correspondence, continue with the third neighbor ordered by feature descriptor Euclidean distance, and iterate until all correspondences satisfying the threshold are found as the initial correspondences.
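The one-to-many search of step 4.1 can be sketched as below. The stopping factor 2.0 interprets the phrase "double the nearest descriptor distance" and is an assumption, not necessarily the patent's exact constant:

```python
import numpy as np

def initial_correspondences(scene_desc, model_desc, dist_thresh, ratio=2.0):
    """One-to-many initial correspondence search: for each scene keypoint
    descriptor, accept model descriptors in order of Euclidean distance
    until the distance exceeds dist_thresh or ratio times the nearest
    distance."""
    pairs = []
    for i, f in enumerate(scene_desc):
        d = np.linalg.norm(model_desc - f, axis=1)  # distances to all model descriptors
        order = np.argsort(d)
        if d[order[0]] >= dist_thresh:
            continue                                # no corresponding point
        nearest = d[order[0]]
        for j in order:
            if d[j] >= dist_thresh or d[j] > ratio * nearest:
                break                               # stopping rule of step 4.1
            pairs.append((i, j))
    return pairs
```

With a threshold of 1.0, a scene descriptor at distance 0.1 and 0.15 from two model descriptors keeps both pairs, while one at distance 5 is rejected.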
Step 4.2: first, for each key point in the model, calculate the global reference vector from the key point to the model center of gravity cm, as shown in formula (8).
The obtained global reference vector is converted into the local coordinate system of the key point, and the model local reference vector is calculated using formula (9),
where the rotation matrix is composed of the unit eigenvectors of the local coordinate system constructed from the key point and the points in its neighborhood.
Then, according to the one-to-many correspondence obtained in step 4.1, the local reference vector of each key point in the model point cloud is transferred to its corresponding points in the scene point cloud, forming a plurality of corresponding local reference vectors. Because the local coordinate system has rotation invariance, the local vectors of corresponding key points in the scene point cloud and the model point cloud are equal.
Using formula (12), the local reference vectors obtained in the scene are converted into the global coordinate system of the scene point cloud to obtain the global vectors,
where the rotation matrix is composed of the unit eigenvectors of the local coordinate system constructed from the scene key point and the points in its neighborhood.
Step 4.3: divide the scene point cloud into voxels of equal size; if the endpoint of a global vector falls within a voxel, the vote value of that voxel is increased by 1. Because a key point farther from the center of gravity is more strongly affected by noise, and the longer the vector from the key point to the center of gravity, the larger the error in the voted center-of-gravity position, Hough voting based on distance weighting is proposed on the basis of traditional Hough voting: the added vote value is calculated from the distance according to formula (14).
the number of votes falling into each voxel is counted.
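A sketch of the distance-weighted voting accumulator follows. Formula (14) is not reproduced in this text, so the per-correspondence weights (for example w = 1 / (1 + d), decreasing with the keypoint-to-centroid distance d) are assumed inputs rather than the patent's exact weighting:

```python
import numpy as np

def hough_vote(endpoints, weights, voxel_size):
    """Distance-weighted Hough voting: accumulate each global-vector
    endpoint's vote weight in a voxel grid and return the winning voxel
    key together with its accumulated score."""
    keys = np.floor(np.asarray(endpoints) / voxel_size).astype(int)
    votes = {}
    for k, w in zip(map(tuple, keys), weights):
        votes[k] = votes.get(k, 0.0) + w           # weighted vote per voxel
    best = max(votes, key=votes.get)
    return best, votes[best]
```

Two endpoints in the same voxel with weight 1.0 each outvote a single endpoint elsewhere with weight 0.5.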
Step 4.4: find the voxel with the most votes, keep the correspondences associated with that voxel as the final correspondences, and reject the others. Set a threshold; when the ratio of final correspondences to initial correspondences is greater than the threshold, the model point cloud is recognized as present in the scene point cloud. Then use the random sample consensus (RANSAC) registration algorithm to solve the transformation from the final correspondences, transform the model point cloud into the scene point cloud, and mark the point in the scene point cloud closest to each transformed point as the recognized result.
The method has the following advantages. The normal estimation method based on improved PCA reduces the influence of noise on the normal. The key point search algorithm based on neighborhood normal variance effectively extracts the local neighborhood information of a point; applying non-maximum suppression to the local normal variance gives the extracted key points high distinctiveness and low overlap. Taking the neighborhood normal variance of a key point as its saliency and intersecting the saliency intervals of the source and target point clouds preliminarily extracts the common key points and accelerates feature matching. The proposed voxel shape descriptor maps the neighborhood points of a key point into its local coordinate system and counts their three-dimensional spatial distribution, so the feature descriptor is computed quickly and is suitable for describing large neighborhoods of dense point clouds. Initial matches are selected by nearest-neighbor search over the key point feature descriptors, and the model point cloud is then located in the scene point cloud through improved Hough voting, which speeds up recognition and improves its accuracy.
Example 1:
The point cloud model data used in the present invention are shown in fig. 1 and comprise 16 models (armadillo, bunny, cat, centaur, cheff, chicken, david, dog, dragon, face, ganesha, gorilla, horse, para, trex, and wolf), together with scene point clouds of the respective models and combined scene point clouds.
FIG. 2 shows comparison results before and after normal reorientation, where (a) and (c) are non-oriented normals and (b) and (d) are oriented normals.
FIG. 3 is the distribution of key points under Gaussian noise, where (a) is the case without noise; (b) with noise of 0.1 times the resolution; (c) 0.2 times the resolution; (d) 0.3 times the resolution; (e) 0.4 times the resolution.
FIG. 4 shows the initial correspondences and the final correspondences, wherein (a) is the cheff initial correspondence; (b) is the cheff final correspondence; (c) is the ganesha initial correspondence; (d) is the ganesha final correspondence.
FIG. 5 is a correspondence of multi-target identification, wherein (a) is a bunny correspondence; (b) is the corresponding relation of armadillo; (c) is a ganesha corresponding relationship.
The method is implemented according to the following steps:
Step 1: normal vector estimation.
Step 1.1: perform normal estimation on the points in the model point cloud and the scene point cloud using principal component analysis (PCA).
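The distance-weighted covariance of the improved PCA (detailed in claim 4) can be sketched as follows. The weight w_j = 1 - d_j / r is an assumed choice, since the exact coefficient formula is not reproduced in this text:

```python
import numpy as np

def weighted_pca_normal(p, neighbors, r):
    """Estimate the normal at point p from its neighborhood using a
    distance-weighted covariance matrix; closer neighbors weigh more.
    The weighting w = 1 - d/r is a hypothetical stand-in for the
    patent's distance weight coefficient."""
    d = np.linalg.norm(neighbors - p, axis=1)
    w = np.clip(1.0 - d / r, 0.0, None)           # distance weight, zero outside r
    centroid = (w[:, None] * neighbors).sum(0) / w.sum()
    diff = neighbors - centroid
    cov = (w[:, None] * diff).T @ diff / w.sum()  # 3x3 weighted covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return eigvecs[:, 0]                          # eigenvector of smallest eigenvalue
```

For points sampled on a plane, the returned vector is the plane normal up to sign.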
Step 1.2: represent the point cloud as a number of small cubes vi of equal size, the size of which is set according to the resolution of the point cloud. Order the points inside each small cube so that points adjacent in index are also adjacent in space: build a kd-tree for the points in the small cube, randomly select one point as the starting point, search the kd-tree for the point closest to it as the 2nd point, then take the 2nd point as the current point and search the remaining points for the point closest to it as the 3rd point, and so on until all points in the small cube are ordered. For each small cube vi that contains points, calculate the centroid of the points inside it and the centroid of the entire point cloud, and form the equation of the line through the two centroids. Judge which points (x, y, z) of the small cube vi intersect this line; such an intersection point is denoted pj, and the set of intersection points is J.
When the distance from every point pj in the intersection set J to the point cloud centroid is less than the distance between the centroid of the small cube and the point cloud centroid, the small cube vi is labeled "known normal direction" and the normal directions are determined as follows: taking the point cloud centroid as the viewpoint, calculate the angle between the line from the viewpoint to each point of the small cube vi and the normal of that point; if the angle is obtuse the normal direction is kept, and if it is acute the normal is reversed. When the distance from every point pj in J to the point cloud centroid is greater than the distance between the two centroids, the small cube vi is likewise labeled "known normal direction", with the opposite rule: if the angle is acute the normal direction is kept, and if it is obtuse the normal is reversed. When some points in J are closer to the point cloud centroid than the distance between the two centroids and others are farther, the small cube vi is labeled "unknown normal direction" and the normal directions are determined as follows: judge whether an adjacent small cube is marked "known normal direction"; if it is, the normals of all points of the cube vi should be consistent with the normals of the adjacent small cube, i.e., the angle between the two is kept acute, and if it is obtuse the normal is reversed.
If the adjacent small cube is marked "unknown normal direction", the search continues to the neighbors of that cube until a small cube marked "known normal direction" is found, and all previously visited small cubes marked "unknown normal direction" are corrected according to its normal direction.
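The viewpoint-based flip rule used for cubes with a known normal direction can be sketched as follows. This illustrates only the per-point angle test of step 1.2, not the region-growing propagation between cubes; the `keep` argument selects which of the two cases applies:

```python
import numpy as np

def orient_normals(points, normals, viewpoint, keep='obtuse'):
    """Flip each normal so its angle with the ray from the viewpoint to
    the point matches the desired convention: keep='obtuse' reverses
    normals forming an acute angle with the ray, keep='acute' reverses
    those forming an obtuse angle (the two cases of step 1.2)."""
    view_dirs = points - viewpoint
    dots = np.einsum('ij,ij->i', view_dirs, normals)  # sign of the angle's cosine
    flip = dots > 0 if keep == 'obtuse' else dots < 0
    out = normals.copy()
    out[flip] *= -1.0
    return out
```

With the viewpoint at the origin and outward radial normals, keep='obtuse' flips every normal inward, while keep='acute' leaves them unchanged.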
Step 1.3: take 3 equally spaced neighborhood radii r1 = 3*s, r2 = 5*s, and r3 = 7*s, where s is the point cloud resolution; take the weighted average of the normals calculated at the 3 neighborhood radii, and use the average n as the final normal.
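A sketch of the multi-scale fusion of step 1.3, assuming equal weights for the three radii since the exact weighting is not reproduced in this text:

```python
import numpy as np

def fused_normal(normals_by_radius):
    """Fuse the normals estimated at the three neighborhood radii
    (3s, 5s, 7s) into one normal by averaging and renormalizing.
    Equal weights are an assumption standing in for the patent's
    weighted average."""
    n = np.mean(np.asarray(normals_by_radius, dtype=float), axis=0)
    return n / np.linalg.norm(n)                  # keep the result a unit vector
```

Averaging three mutually orthogonal unit normals yields the unit diagonal direction.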
Fig. 2 shows the comparison results before and after normal redirection, where the neighborhood radius and distance threshold are 5 times the resolution and the voxel size is 10 times the resolution. Grey denotes the points and black the normal directions; the normals of the same surface are nearly parallel. Undirected normal estimation is ambiguous at inflection points, while the oriented normals better match the normal direction of the whole point cloud.
Step 2: key point search
First, set a threshold and take the points with curvature larger than the threshold as pre-key points pe. Then take the normal variance within a point's neighborhood as the saliency value l of the pre-key point, as shown in formula (8). Set the saliency of non-pre-key points to 0 and compute the saliency value of each pre-key point. Then judge whether the saliency value of a pre-key point is the maximum within its neighborhood; if not, discard the point, turning it into a non-pre-key point. Finally, sort the saliency values from small to large and record the minimum and maximum saliency values of the pre-key points of the model point cloud and, likewise, of the scene point cloud. Take the intersection of the two saliency intervals; the pre-key points of the model point cloud and the scene point cloud whose saliency values fall within this intersection are taken as the key points of the model and scene point clouds, respectively.
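The saliency of formula (8), the variance of the neighborhood normals about their mean, can be sketched as:

```python
import numpy as np

def saliency(neighbor_normals):
    """Saliency of a pre-key point: mean squared deviation of the
    normals in its neighborhood from their mean normal (a variance
    of the neighborhood normals)."""
    n = np.asarray(neighbor_normals, dtype=float)
    mean = n.mean(axis=0)                         # mean normal of the neighborhood
    return np.mean(np.sum((n - mean) ** 2, axis=1))
```

A flat neighborhood (identical normals) scores 0, while strongly varying normals score high, which is why the maxima of this value mark distinctive key points.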
Fig. 3 shows the distribution of the key points after Gaussian noise of different magnitudes is added; under the influence of noise, most key points remain at similar positions, which strengthens the noise robustness of the recognition algorithm.
Step 3: calculating the feature descriptors.
First, with the key point as the origin, perform unit orthogonalization on the eigenvectors of the key point: the eigenvector v1 corresponding to the largest eigenvalue λ1 is the x-axis, the eigenvector v2 corresponding to the second-largest eigenvalue λ2 is the y-axis, and the eigenvector v3 corresponding to the smallest eigenvalue λ3 is the z-axis, establishing the local coordinate system of the key point. Then, in the local coordinate system, establish a cube of side length 25*s centered at the key point, where s is the resolution. Divide the length, width, and height of the cube into 5 equal parts, forming 125 small cubic subspaces. Finally, count the number of points of the point cloud inside each small cube and flatten the counts into a 125-dimensional column vector in xyz order. Traverse all key points of the source and target point clouds to obtain their feature descriptor sets.
Step 4: point cloud identification.
Step 4.1: first, for each scene point cloud key point, search the model point cloud for its nearest point in terms of the Euclidean distance between feature descriptors. Set a threshold; if the Euclidean distance between the two feature descriptors is less than the threshold, record the pair as a valid correspondence; otherwise the key point has no corresponding point, and the next key point of the scene point cloud is processed. Continue searching for further possible corresponding points of the key point: if the Euclidean distance of the next-nearest feature descriptor is larger than the distance threshold or more than double the Euclidean distance of the nearest feature descriptor, stop the calculation; otherwise record the pair as a further initial correspondence, continue with the third neighbor ordered by feature descriptor Euclidean distance, and iterate until all correspondences satisfying the threshold are found as the initial correspondences.
Then, for each key point in the model, calculate the global vector from the key point to the center of gravity of the model and convert it into the local coordinate system of the key point. Because the local coordinate system has rotation invariance, the local vectors of corresponding key points in the scene point cloud and the model point cloud are equal. According to the initial correspondences, transfer the local vector of each key point in the model point cloud to its corresponding points in the scene point cloud. Convert the local vectors obtained in the scene into the global coordinate system of the scene point cloud to obtain the global vectors.
Finally, divide the scene point cloud into voxels of equal size; if the endpoint of a global vector falls within a voxel, the vote value of that voxel is increased by 1. Because a key point farther from the center of gravity is more strongly affected by noise, and the longer the vector from the key point to the center of gravity, the larger the error in the voted center-of-gravity position, Hough voting based on distance weighting is proposed on the basis of traditional Hough voting, with the added vote value computed from the distance.
the number of votes falling into each voxel is counted.
Step 4.2: find the voxel with the most votes, keep the correspondences associated with that voxel as the final correspondences, and reject the others. Set a threshold; when the ratio of final correspondences to initial correspondences is greater than the threshold, the model point cloud is recognized as present in the scene point cloud. Then use the random sample consensus (RANSAC) registration algorithm to solve the transformation between the model point cloud and the scene point cloud from the final correspondences, transform the model point cloud into the scene point cloud, and mark the point in the scene point cloud closest to each transformed point as the recognized result.
In fig. 4, (a) and (c) are the initial correspondences, and (b) and (d) are the correct correspondences after the wrong ones are removed; the voxel shape descriptor combined with the improved Hough voting matching algorithm finds a large number of valid correspondences, improving the robustness of the recognition algorithm. The distance threshold of the initial correspondences is 0.01, and the voting voxels are 10 times the resolution. Fig. 5 shows the matching relationships when recognizing models in a complex scene point cloud; the correspondences computed by the improved Hough voting are highly accurate, so all objects in the scene can be recognized simultaneously. Fig. 6 shows the pose estimation results of the improved Hough voting based on the voxel shape descriptor, which are precise and accurately estimate the poses of multiple targets in a scene. Fig. 7 gives the experimental data of the point cloud recognition algorithm based on the voxel shape descriptor, recording in detail the time consumption and main parameters of each stage of the recognition process. Recognizing a single target takes about 1 s, and recognizing three targets about 1.7 s.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (9)
1. A point cloud identification method based on voxel shape descriptors is characterized by comprising the following steps:
step 1: carrying out normal estimation on points in the point cloud by a Euclidean distance weighted principal component analysis method;
step 2: correcting the obtained normal line by using a normal direction ambiguity judging algorithm generated based on the region;
step 2.1: calculating a minimum bounding box of the point cloud to be detected, and determining coordinate ranges of the point cloud in x, y and z dimensions;
step 2.2: dividing the minimum bounding box into a plurality of small cubes of the same size, wherein the size of each small cube is set according to the resolution of the point cloud; establishing index numbers for the small cubes according to their spatial order, namely the order of x, y and z from small to large, and marking small cubes containing no points as invalid;
step 2.3: calculating the centroid of each valid small cube vi and the centroid of the entire point cloud, and forming the equation of the line through the two centroids;
wherein the parameters of the equation are the coordinates of the centroid of the small cube and the coordinates of the centroid of the entire point cloud, respectively;
step 2.4: setting an error threshold e according to the point cloud resolution, and constructing an intersection point set J;
if a point (x, y, z) in the small cube vi satisfies the following formula, it is judged that the point intersects the straight line; the intersection point is denoted pj, and all intersection points satisfying the formula form the set J;
step 2.5: when the distance from every point pj in the intersection set J to the point cloud centroid is less than the distance between the centroid of the small cube and the point cloud centroid, the small cube vi is marked as "known normal direction"; taking the point cloud centroid as the viewpoint, the angle between the line from the viewpoint to each point of the small cube vi and the normal of that point is calculated; if the angle is obtuse, the normal direction is kept, and if it is acute, the normal is reversed;
when the distance from every point pj in the intersection set J to the point cloud centroid is greater than the distance between the two centroids, the small cube vi is marked as "known normal direction"; taking the point cloud centroid as the viewpoint, the angle between the line from the viewpoint to each point of the small cube vi and the normal of that point is calculated; if the angle is acute, the normal direction is kept, and if it is obtuse, the normal is reversed;
when some points in the intersection set J are closer to the point cloud centroid than the distance between the two centroids and others are farther, the small cube vi is marked as "unknown normal direction"; it is judged whether an adjacent small cube is marked as "known normal direction"; if so, the normals of all points of the cube vi are made consistent with the normals of the adjacent small cube, namely the angle between the two is kept acute, and if the angle is obtuse the normal is reversed; if the adjacent small cube is marked as "unknown normal direction", the search continues to the neighbors of that cube until a small cube marked as "known normal direction" is found, and all previously found small cubes marked as "unknown normal direction" are corrected according to its normal direction;
step 3: calculating a final normal by using a multi-scale normal fusion algorithm;
step 4: selecting pre-key points through a curvature threshold, and screening out the key points of the model point cloud and the scene point cloud by taking the neighborhood normal variance as a significant value;
step 4.1: setting a curvature threshold, taking the points with curvature larger than the curvature threshold as pre-key points pe, and discarding the points with curvature smaller than the curvature threshold as plane points;
step 4.2: calculating a significant value l of each pre-key point;
wherein the summed terms are the normals of the neighborhood points of the pre-key point pe, m is the number of neighborhood points of pe, and the subtracted term is the normal mean of all points in the neighborhood;
step 4.3: judging whether the significant value of the pre-key point is the maximum value in the neighborhood, if not, changing the point into a non-pre-key point;
step 4.4: sorting the significant values from small to large to obtain the minimum and maximum significant values of the pre-key points in the model point cloud and the minimum and maximum significant values of the pre-key points in the scene point cloud;
step 4.5: taking the intersection of the significant value intervals of the scene point cloud and the model point cloud, and taking the pre-key points of the model point cloud and the scene point cloud whose significant values lie in the intersection interval as the key points of the model point cloud and the scene point cloud, respectively;
step 5: calculating the three-dimensional spatial distribution of the neighborhood points by mapping the neighborhood points of the key points into the local coordinate system of the key points, and constructing the voxel shape descriptor;
step 6: searching the correspondence between the model point cloud and the scene point cloud through distance-weighted Hough voting to complete point cloud target identification.
2. The point cloud identification method based on voxel shape descriptors according to claim 1, characterized in that the method for finding the correspondence between the model point cloud and the scene point cloud through distance-weighted Hough voting in step 6 specifically comprises the following steps:
step 6.1: for each scene point cloud key point, searching the model point cloud for its nearest point in terms of the Euclidean distance between feature descriptors; setting a distance threshold; if the Euclidean distance between the two feature descriptors is less than the distance threshold, recording the pair as a valid correspondence; otherwise the key point has no corresponding point, and the next key point of the scene point cloud is calculated;
step 6.2: continuing to search for possible corresponding points of the key point: if the Euclidean distance of the next-nearest feature descriptor is larger than the distance threshold or more than double the Euclidean distance of the nearest feature descriptor, stopping the calculation; otherwise recording the pair as an initial correspondence, continuing with the third neighbor ordered by feature descriptor Euclidean distance, and iterating repeatedly until all correspondences satisfying the distance threshold are found as the initial correspondences;
step 6.3: calculating, for each key point in the model point cloud, the global reference vector from the key point to the center of gravity cm of the model;
step 6.4: converting the obtained global reference vector into the local coordinate system of the key point to obtain the model local reference vector;
wherein the rotation transformation matrix is composed of the unit eigenvectors of the local coordinate system constructed from the key point and the points in its neighborhood;
step 6.5: transferring the local reference vector of each key point in the model point cloud to its corresponding points in the scene point cloud, forming a plurality of corresponding local reference vectors; because the local coordinate system has rotation invariance, the local vectors of corresponding key points in the scene point cloud and the model point cloud are equal:
step 6.6: converting the local reference vectors obtained in the scene into the global coordinate system of the scene point cloud to obtain the global vectors;
wherein the rotation matrix is composed of the unit eigenvectors of the local coordinate system constructed from the scene key point and the points in its neighborhood;
step 6.7: dividing the scene point cloud into voxels of equal size; if the endpoint of a global vector falls in a voxel, adding 1 to the vote value of that voxel; the added vote value is weighted according to the distance and calculated as follows:
step 6.8: searching the voxel with the most votes, keeping the corresponding relation related to the voxel as a final corresponding relation, and eliminating other corresponding relations;
step 6.9: setting a ratio threshold, identifying model point clouds in the scene point clouds when the ratio of the final corresponding relation to the initial corresponding relation is larger than the ratio threshold, solving a conversion relation through the final corresponding relation by using a random sampling consistency registration algorithm, converting the model point clouds into the scene point clouds, and marking the point closest to the converted point in the scene point clouds as an identified result.
3. A method of voxel shape descriptor based point cloud identification as claimed in claim 1 or 2, wherein: the method for constructing the voxel shape descriptor in the step 5 specifically comprises the following steps:
step 5.1: after the eigenvectors of the key point are subjected to unit orthogonalization, with the key point as the coordinate origin, taking the eigenvector v1 corresponding to the largest eigenvalue λ1 as the x-axis, the eigenvector v2 corresponding to the second-largest eigenvalue λ2 as the y-axis, and the eigenvector v3 corresponding to the smallest eigenvalue λ3 as the z-axis, establishing the local coordinate system of the key point;
step 5.2: under a local coordinate system of the key points, a cube with the side length of 25 × s is established by taking the key points as centers;
step 5.3: the length, width and height of the cube are averagely divided into 5 parts to form 125 small cube subspaces;
step 5.4: counting the number of points of the point cloud in each small cube, and stretching the point cloud into 125-dimensional column vectors according to the sequence of xyz; and traversing all key points of the source point cloud and the target point cloud to obtain a feature descriptor set of the source point cloud and the target point cloud.
4. The point cloud identification method based on voxel shape descriptors as claimed in claim 1 or 2, characterized in that the method for performing normal estimation on the points in the point cloud by the Euclidean-distance-weighted principal component analysis method in step 1 specifically comprises: improving the covariance matrix by adding a distance weight coefficient w, as shown in the following formula:
wherein, in the distance weight coefficient, r is the neighborhood radius; the weighted points are the neighborhood points of pi; m is the number of neighborhood points; the centered term represents the center of gravity of the neighborhood of point pi; and the distance is that between a neighborhood point and point pi; performing matrix decomposition on E3×3, the eigenvector corresponding to the minimum eigenvalue is taken as the normal vector of point pi.
5. The point cloud identification method based on voxel shape descriptors as claimed in claim 3, characterized in that the method for performing normal estimation on the points in the point cloud by the Euclidean-distance-weighted principal component analysis method in step 1 specifically comprises: improving the covariance matrix by adding a distance weight coefficient w, as shown in the following formula:
wherein, in the distance weight coefficient, r is the neighborhood radius; the weighted points are the neighborhood points of pi; m is the number of neighborhood points; the centered term represents the center of gravity of the neighborhood of point pi; and the distance is that between a neighborhood point and point pi; performing matrix decomposition on E3×3, the eigenvector corresponding to the minimum eigenvalue is taken as the normal vector of point pi.
6. The point cloud identification method based on voxel shape descriptors as claimed in claim 1 or 2, characterized in that the method for calculating the final normal by the multi-scale normal fusion algorithm in step 3 specifically comprises: taking 3 equally spaced neighborhood radii r1 = 3*s, r2 = 5*s, and r3 = 7*s, where s is the resolution of the point cloud; calculating the weighted average of the normals computed at the 3 neighborhood radii according to the following formula, and taking the average n as the final normal;
wherein n1, n2 and n3 are the normals obtained through step 1 and step 2 at the radii r1, r2 and r3, respectively.
7. The point cloud identification method based on voxel shape descriptors as claimed in claim 3, characterized in that the method for calculating the final normal by the multi-scale normal fusion algorithm in step 3 specifically comprises: taking 3 equally spaced neighborhood radii r1 = 3*s, r2 = 5*s, and r3 = 7*s, where s is the resolution of the point cloud; calculating the weighted average of the normals computed at the 3 neighborhood radii according to the following formula, and taking the average n as the final normal;
wherein n1, n2 and n3 are the normals obtained through step 1 and step 2 at the radii r1, r2 and r3, respectively.
8. The point cloud identification method based on voxel shape descriptors as claimed in claim 4, characterized in that the method for calculating the final normal by the multi-scale normal fusion algorithm in step 3 specifically comprises: taking 3 equally spaced neighborhood radii r1 = 3*s, r2 = 5*s, and r3 = 7*s, where s is the resolution of the point cloud; calculating the weighted average of the normals computed at the 3 neighborhood radii according to the following formula, and taking the average n as the final normal;
wherein n1, n2 and n3 are the normals obtained through step 1 and step 2 at the radii r1, r2 and r3, respectively.
9. The point cloud identification method based on a voxel shape descriptor according to claim 5, wherein the final normal is calculated with the multi-scale normal fusion algorithm in step 3 specifically as follows: take 3 equally spaced neighborhood radii r1 = 3·s, r2 = 5·s and r3 = 7·s, where s is the resolution of the point cloud; calculate the weighted average of the normals computed for the 3 neighborhood radii according to the following formula (formula image omitted), and take the average n as the final normal;
wherein n1, n2 and n3 are the normals obtained by step 1 and step 2 at radii r1, r2 and r3, respectively.
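The multi-scale fusion of claims 6 through 9 can be sketched as below. The claims' weighting formula is an omitted image, so the uniform weights here are an assumption; the function name `fuse_normals` is likewise illustrative.

```python
import numpy as np

def fuse_normals(n1, n2, n3, weights=(1/3, 1/3, 1/3)):
    """Fuse the normals n1, n2, n3 computed at the three neighborhood
    radii r1 = 3s, r2 = 5s, r3 = 7s into a single final normal by a
    weighted average (uniform weights assumed), then renormalize."""
    n = (weights[0] * np.asarray(n1, dtype=float)
         + weights[1] * np.asarray(n2, dtype=float)
         + weights[2] * np.asarray(n3, dtype=float))
    return n / np.linalg.norm(n)   # final normal as a unit vector
```

Averaging over several radii suppresses the sensitivity of a single-scale normal estimate to noise and to the choice of neighborhood size.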
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010340995.1A CN111553409B (en) | 2020-04-27 | 2020-04-27 | Point cloud identification method based on voxel shape descriptor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010340995.1A CN111553409B (en) | 2020-04-27 | 2020-04-27 | Point cloud identification method based on voxel shape descriptor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111553409A true CN111553409A (en) | 2020-08-18 |
CN111553409B CN111553409B (en) | 2022-11-01 |
Family
ID=72004431
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010340995.1A Active CN111553409B (en) | 2020-04-27 | 2020-04-27 | Point cloud identification method based on voxel shape descriptor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111553409B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112365592A (en) * | 2020-11-10 | 2021-02-12 | 大连理工大学 | Local environment feature description method based on bidirectional elevation model |
CN112446952A (en) * | 2020-11-06 | 2021-03-05 | 杭州易现先进科技有限公司 | Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium |
CN112669385A (en) * | 2020-12-31 | 2021-04-16 | 华南理工大学 | Industrial robot workpiece identification and pose estimation method based on three-dimensional point cloud characteristics |
CN112766037A (en) * | 2020-12-14 | 2021-05-07 | 南京工程学院 | 3D point cloud target identification and positioning method based on maximum likelihood estimation method |
CN113111741A (en) * | 2021-03-27 | 2021-07-13 | 西北工业大学 | Assembly state identification method based on three-dimensional feature points |
CN113435256A (en) * | 2021-06-04 | 2021-09-24 | 华中科技大学 | Three-dimensional target identification method and system based on geometric consistency constraint |
CN113469195A (en) * | 2021-06-25 | 2021-10-01 | 浙江工业大学 | Target identification method based on self-adaptive color fast point feature histogram |
CN113807366A (en) * | 2021-09-16 | 2021-12-17 | 电子科技大学 | Point cloud key point extraction method based on deep learning |
CN114118181A (en) * | 2021-08-26 | 2022-03-01 | 西北大学 | High-dimensional regression point cloud registration method, system, computer equipment and application |
CN116416305A (en) * | 2022-09-17 | 2023-07-11 | 上海交通大学 | Multi-instance pose estimation method based on optimized sampling five-dimensional point pair characteristics |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103810751A (en) * | 2014-01-29 | 2014-05-21 | 辽宁师范大学 | Three-dimensional auricle point cloud shape feature matching method based on IsoRank algorithm |
CN105243374A (en) * | 2015-11-02 | 2016-01-13 | 湖南拓视觉信息技术有限公司 | Three-dimensional human face recognition method and system, and data processing device applying same |
CN105910556A (en) * | 2016-04-13 | 2016-08-31 | 中国农业大学 | Leaf area vertical distribution information extraction method |
US20170046868A1 (en) * | 2015-08-14 | 2017-02-16 | Samsung Electronics Co., Ltd. | Method and apparatus for constructing three dimensional model of object |
CN106846387A (en) * | 2017-02-09 | 2017-06-13 | 中北大学 | Point cloud registration method based on neighborhood rotary volume |
CN108830888A (en) * | 2018-05-24 | 2018-11-16 | 中北大学 | Thick matching process based on improved multiple dimensioned covariance matrix Feature Descriptor |
CN108898128A (en) * | 2018-07-11 | 2018-11-27 | 宁波艾腾湃智能科技有限公司 | A kind of method for anti-counterfeit and equipment matching digital three-dimemsional model by photo |
CN109887015A (en) * | 2019-03-08 | 2019-06-14 | 哈尔滨工程大学 | A kind of point cloud autoegistration method based on local surface feature histogram |
CN110222642A (en) * | 2019-06-06 | 2019-09-10 | 上海黑塞智能科技有限公司 | A kind of planar architectural component point cloud contour extraction method based on global figure cluster |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103810751A (en) * | 2014-01-29 | 2014-05-21 | 辽宁师范大学 | Three-dimensional auricle point cloud shape feature matching method based on IsoRank algorithm |
US20170046868A1 (en) * | 2015-08-14 | 2017-02-16 | Samsung Electronics Co., Ltd. | Method and apparatus for constructing three dimensional model of object |
CN105243374A (en) * | 2015-11-02 | 2016-01-13 | 湖南拓视觉信息技术有限公司 | Three-dimensional human face recognition method and system, and data processing device applying same |
CN105910556A (en) * | 2016-04-13 | 2016-08-31 | 中国农业大学 | Leaf area vertical distribution information extraction method |
CN106846387A (en) * | 2017-02-09 | 2017-06-13 | 中北大学 | Point cloud registration method based on neighborhood rotary volume |
CN108830888A (en) * | 2018-05-24 | 2018-11-16 | 中北大学 | Thick matching process based on improved multiple dimensioned covariance matrix Feature Descriptor |
CN108898128A (en) * | 2018-07-11 | 2018-11-27 | 宁波艾腾湃智能科技有限公司 | A kind of method for anti-counterfeit and equipment matching digital three-dimemsional model by photo |
CN109887015A (en) * | 2019-03-08 | 2019-06-14 | 哈尔滨工程大学 | A kind of point cloud autoegistration method based on local surface feature histogram |
CN110222642A (en) * | 2019-06-06 | 2019-09-10 | 上海黑塞智能科技有限公司 | A kind of planar architectural component point cloud contour extraction method based on global figure cluster |
Non-Patent Citations (4)
Title |
---|
WEI GUAN; WENTAO LI; YAN REN, 2018 Chinese Control And Decision Conference (CCDC) *
LIU Dandan, FENG Dongqing: "3D point cloud recognition of substation equipment based on surface features", China Master's Theses Full-text Database, Engineering Science and Technology II *
LI Zisheng, DING Guofu: "Research on key technologies of point cloud data processing and feature recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology *
LU Jun, SHAO Hongxu, WANG Wei, FAN Zhejun, XIA Guihua: "Point cloud registration method based on key point feature matching", Journal of Beijing Institute of Technology *
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112446952B (en) * | 2020-11-06 | 2024-01-26 | 杭州易现先进科技有限公司 | Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium |
CN112446952A (en) * | 2020-11-06 | 2021-03-05 | 杭州易现先进科技有限公司 | Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium |
CN112365592A (en) * | 2020-11-10 | 2021-02-12 | 大连理工大学 | Local environment feature description method based on bidirectional elevation model |
CN112766037A (en) * | 2020-12-14 | 2021-05-07 | 南京工程学院 | 3D point cloud target identification and positioning method based on maximum likelihood estimation method |
CN112766037B (en) * | 2020-12-14 | 2024-04-19 | 南京工程学院 | 3D point cloud target identification and positioning method based on maximum likelihood estimation method |
CN112669385A (en) * | 2020-12-31 | 2021-04-16 | 华南理工大学 | Industrial robot workpiece identification and pose estimation method based on three-dimensional point cloud characteristics |
CN112669385B (en) * | 2020-12-31 | 2023-06-13 | 华南理工大学 | Industrial robot part identification and pose estimation method based on three-dimensional point cloud features |
CN113111741A (en) * | 2021-03-27 | 2021-07-13 | 西北工业大学 | Assembly state identification method based on three-dimensional feature points |
CN113111741B (en) * | 2021-03-27 | 2024-05-07 | 西北工业大学 | Assembly state identification method based on three-dimensional feature points |
CN113435256B (en) * | 2021-06-04 | 2022-04-26 | 华中科技大学 | Three-dimensional target identification method and system based on geometric consistency constraint |
CN113435256A (en) * | 2021-06-04 | 2021-09-24 | 华中科技大学 | Three-dimensional target identification method and system based on geometric consistency constraint |
CN113469195A (en) * | 2021-06-25 | 2021-10-01 | 浙江工业大学 | Target identification method based on self-adaptive color fast point feature histogram |
CN113469195B (en) * | 2021-06-25 | 2024-02-06 | 浙江工业大学 | Target identification method based on self-adaptive color quick point feature histogram |
CN114118181A (en) * | 2021-08-26 | 2022-03-01 | 西北大学 | High-dimensional regression point cloud registration method, system, computer equipment and application |
CN114118181B (en) * | 2021-08-26 | 2022-06-21 | 西北大学 | High-dimensional regression point cloud registration method, system, computer equipment and application |
CN113807366B (en) * | 2021-09-16 | 2023-08-08 | 电子科技大学 | Point cloud key point extraction method based on deep learning |
CN113807366A (en) * | 2021-09-16 | 2021-12-17 | 电子科技大学 | Point cloud key point extraction method based on deep learning |
CN116416305A (en) * | 2022-09-17 | 2023-07-11 | 上海交通大学 | Multi-instance pose estimation method based on optimized sampling five-dimensional point pair characteristics |
CN116416305B (en) * | 2022-09-17 | 2024-02-13 | 上海交通大学 | Multi-instance pose estimation method based on optimized sampling five-dimensional point pair characteristics |
Also Published As
Publication number | Publication date |
---|---|
CN111553409B (en) | 2022-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111553409B (en) | Point cloud identification method based on voxel shape descriptor | |
CN109887015B (en) | Point cloud automatic registration method based on local curved surface feature histogram | |
Zhong | Intrinsic shape signatures: A shape descriptor for 3D object recognition | |
CN102236794B (en) | Recognition and pose determination of 3D objects in 3D scenes | |
CN104715254B (en) | A kind of general object identification method merged based on 2D and 3D SIFT features | |
US8994723B2 (en) | Recognition and pose determination of 3D objects in multimodal scenes | |
JP5705147B2 (en) | Representing 3D objects or objects using descriptors | |
Tazir et al. | CICP: Cluster Iterative Closest Point for sparse–dense point cloud registration | |
CN111444767B (en) | Pedestrian detection and tracking method based on laser radar | |
CN114972459B (en) | Point cloud registration method based on low-dimensional point cloud local feature descriptor | |
CN108376408A (en) | A kind of three dimensional point cloud based on curvature feature quickly weights method for registering | |
CN114677418B (en) | Registration method based on point cloud feature point extraction | |
CN110930456A (en) | Three-dimensional identification and positioning method of sheet metal part based on PCL point cloud library | |
CN112116553B (en) | Passive three-dimensional point cloud model defect identification method based on K-D tree | |
CN103927511A (en) | Image identification method based on difference feature description | |
CN114200477A (en) | Laser three-dimensional imaging radar ground target point cloud data processing method | |
CN108537805A (en) | A kind of target identification method of feature based geometry income | |
CN111783722B (en) | Lane line extraction method of laser point cloud and electronic equipment | |
CN114358166B (en) | Multi-target positioning method based on self-adaptive k-means clustering | |
CN106951873B (en) | Remote sensing image target identification method | |
Himstedt et al. | Geometry matters: Place recognition in 2D range scans using Geometrical Surface Relations | |
Liu et al. | Robust 3-d object recognition via view-specific constraint | |
Lu et al. | Matching algorithm of 3D point clouds based on multiscale features and covariance matrix descriptors | |
CN115713627A (en) | Plane feature extraction method based on normal vector segmentation and region growing | |
CN111626096B (en) | Three-dimensional point cloud data interest point extraction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||