CN111710023A - Three-dimensional point cloud data feature point extraction method and application - Google Patents


Info

Publication number
CN111710023A
CN111710023A (application number CN202010548910.9A)
Authority
CN
China
Prior art keywords: point, grid, point cloud, points, density value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010548910.9A
Other languages
Chinese (zh)
Inventor
熊彪 (Xiong Biao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Chengxiang Technology Co ltd
Original Assignee
Wuhan Chengxiang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Chengxiang Technology Co ltd filed Critical Wuhan Chengxiang Technology Co ltd
Priority to CN202010548910.9A priority Critical patent/CN111710023A/en
Publication of CN111710023A publication Critical patent/CN111710023A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G06T 5/70
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details
    • G06T 2207/20032: Median filtering
    • G06T 2207/20172: Image enhancement details
    • G06T 2207/20182: Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering

Abstract

The invention relates to a three-dimensional point cloud data feature point extraction method and its application. The extraction method specifically comprises the following steps: rasterizing the point cloud data from which feature points are to be extracted to obtain raster data, then cutting the raster data into a plurality of unit grids; calculating the point density value of the original point cloud data contained in each unit grid; filtering and denoising the density grid; calculating the density value gradient, the second derivatives of the density value and the density value covariance matrix of each unit grid from the grid density values; constructing a feature point metric function from the density value covariance matrix of the unit grid, calculating the metric function value, comparing it against a threshold, and replacing each unit grid that satisfies the judgment condition by a single grid point; these grid points are the feature points. The number of feature points extracted by the method is reasonable and their feature representativeness is strong; when the extracted feature points are applied to point cloud registration, both registration efficiency and registration accuracy are high.

Description

Three-dimensional point cloud data feature point extraction method and application
Technical Field
The invention relates to the field of three-dimensional vision, and in particular to a three-dimensional point cloud data feature point extraction method and its application.
Background
Since the invention of the depth camera, breakthrough progress has been made in fields such as engineering measurement, industrial manufacturing and robotics. Taking industrial manufacturing as an example: over several decades, the traditional approach of measuring a model with ruler and pen has gradually been abandoned in favor of more intelligent digital measurement techniques, and since the release of the depth camera, digital point cloud models that are scanned directly and faithfully reproduce reality have been developed. Today, point cloud reconstruction technology and its many innovative applications bring great convenience to daily life and have broad application value in home decoration, civil engineering, light industry, the internet industry and other areas of manufacturing and creation; 3D point clouds are widely used across industries. In all scientific and technical applications that depend on three-dimensional point cloud models, the accuracy of point cloud model construction is paramount.
As for point cloud data acquisition, at present no camera can capture a complete target model in a single scan; instead, several stations scan the same model from different angles, and the scans are registered and reconstructed. However, point cloud data acquired by different scanning devices differ: lower-accuracy devices acquire fewer data points, while higher-accuracy devices acquire more, sometimes millions or tens of millions of points, so the data volume is huge. To ensure efficient data processing, the point cloud data must therefore be preprocessed. Preprocessing usually comprises point cloud filtering, point cloud simplification, establishment of topological relationships, and so on. In existing point cloud simplification methods for feature point extraction, either the extracted points are not feature-representative and are too numerous to be useful, or the extracted points are feature-representative but too few to be usable in registration. After the preprocessing stage, the point cloud registration stage begins; the quality of the registration result directly determines the quality of subsequent spatial science and engineering work, and existing methods still leave very large room for improvement in both registration time and registration accuracy.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a three-dimensional point cloud data feature point extraction method and its application, solving the technical problems of existing point cloud simplification methods: either the extracted feature points are too few to be applied in registration, or the extracted points are not feature-representative.
The invention is realized by the following technical scheme:
a three-dimensional point cloud data feature point extraction method specifically comprises the following steps:
s1: rasterizing point cloud data to be subjected to feature point extraction to obtain raster data, and cutting and dividing raster data generated according to original point cloud data to obtain a plurality of continuously distributed unit grids;
s2: analyzing and calculating the point density value of the original point cloud data contained in the unit grid according to the position of the original point;
s3: carrying out filtering and noise reduction processing on the density grid with the density value not being 0;
s4: calculating the density value gradient, the second derivative of the density value and the covariance matrix of the density value of the unit grid according to the density value of the grid;
s5: constructing a characteristic point measurement function according to the density value covariance matrix of the unit grid, and calculating a characteristic point measurement function value;
s6: and comparing and judging the calculated characteristic point measurement function value with a threshold value, and replacing the unit grid meeting the judgment condition by a grid point, wherein the grid point is the characteristic point.
Further, the specific process of obtaining the raster data in S1 includes: traversing the input point cloud data to obtain the maximum and minimum values in the three dimensions X, Y and Z, and constructing from these six extreme values a large cube that just accommodates all points of the point cloud data, the edges of the large cube passing through the outermost points of the original input point cloud data;
the obtained grid data are divided, according to a manually set unit grid side length, into a plurality of small cubic unit grids of equal length, width and height.
Further, the specific step of calculating the density value of the unit point cloud in S2 includes:
s21: recording the unit grid at the lower-left corner of the large cube as the unit grid numbered (0,0,0);
s22: calculating the distance from each point in the original point cloud data to the center of the unit grid of number (0,0,0), and dividing each point into the unit grid to which each point belongs according to the multiple relation between the distance and the side length of the grid;
s23: weighting the points of the original point cloud data that fall in the same numbered unit grid, according to the distance between each point's position and the center of the unit grid to which it belongs, to obtain the point cloud density value of the unit grid:

D_i = Σ_{j=1}^{N} ω_j,  with  ω_j = exp(−‖X_j − X_i‖² / (2σ²)),

where N is the number of original points contained within the current grid, ω_j is the density weight, determined by a Gaussian distribution rule from the position of the original point relative to the grid center, X_j is the three-dimensional coordinate information of the currently calculated point in the original point cloud, X_i is the three-dimensional coordinate information of the current grid center, and σ is the Gaussian scale parameter.
Further, the filtering methods adopted in S3 include median filtering and Gaussian filtering, used respectively to remove salt-and-pepper noise and Gaussian white noise in the density grid;
the specific process of median filtering includes: performing planar median filtering on the xoy, xoz and yoz planes respectively, and performing a cube-template median filtering operation on the whole grid; the four filtering results are aggregated, and for any unit grid whose density value is changed in more than 2 of the results, the maximum of the four filtered values is taken as the final median filtering result;
the specific process of Gaussian filtering includes: performing Gaussian filtering on the xoy, xoz and yoz planes and on the whole grid respectively; the four filtering results are aggregated, and for any unit grid whose density value is changed in more than 2 of the results, the maximum of the four filtered values is taken as the final Gaussian filtering result;
the specific formula used for each of the four Gaussian filtering results is VGD_μov(μ,v) = VD_μov(μ,v) * G(x,y), where G(x,y) is the Gaussian filter function

G(x,y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)),

VGD is the result of Gaussian filtering the density grid VD, μ and v denote the two coordinate axes of the selected plane, and (μ,v) are the coordinates of the unit grid currently being calculated in that coordinate system.
Further, in S4 a weighted five-point difference quotient method is adopted to calculate the density value gradient of the unit grid, of the form (shown here for the x direction, h being the grid side length):

D_x ≈ (D(x−2h) − 8D(x−h) + 8D(x+h) − D(x+2h)) / (12h),

wherein D(x) is the grid density value;
according to the grid density value gradient obtained by calculation, the weighted five-point difference quotient formula is applied again,

D_xx ≈ (D_x(x−2h) − 8D_x(x−h) + 8D_x(x+h) − D_x(x+2h)) / (12h),

to calculate the second derivative of the density value, and then D_xy, D_xz, D_yy, D_yz and D_zz are calculated in the same way;
a covariance matrix is constructed using the calculated second derivatives of the density values:

    | D_xx  D_xy  D_xz |
M = | D_xy  D_yy  D_yz |
    | D_xz  D_yz  D_zz |
Further, the step of constructing the feature point metric function in S5 includes: denote the determinant of the density value covariance matrix by Det(M) and the trace (sum of the diagonal) of the matrix by Trace(M), and construct the feature point metric function R_D = Det(M_D) − α(Trace(M_D))³, where α is an empirical constant.
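The metric above mirrors the Harris corner response extended to three dimensions. A minimal sketch, assuming a precomputed 3x3 density value covariance matrix M; the function name and the default α = 0.04 are illustrative, not taken from the patent:

```python
import numpy as np

def feature_metric(M, alpha=0.04):
    # R_D = det(M) - alpha * (trace M)^3, compared against a threshold
    # to decide whether the unit grid yields a feature point.
    M = np.asarray(M, dtype=float)
    return np.linalg.det(M) - alpha * np.trace(M) ** 3
```

A larger response indicates a stronger corner-like variation of the density values around the grid cell.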
Further, in S6, format conversion is performed on the unit grid data, and the central point of each unit grid is used to replace all original points contained in that unit grid; in the format conversion process, unit grids with a density value of 0 are discarded.
Further, after S6 a feature point position optimization is also included. The specific steps of the position optimization are: expand the feature point metric function with the Taylor formula, interpolate and fit a function curve to the expansion, obtain the extreme point of the fitted function, which is the new coordinate used for the position update, and correct the position accordingly.
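The position update can be sketched as solving for the extremum of the second-order Taylor expansion of the metric, ΔX = −H⁻¹∇R. The helper below is a hypothetical illustration; the patent does not give the exact interpolation and fitting procedure:

```python
import numpy as np

def refine_position(grad, hessian):
    # Extremum offset of R(X + dX) ~= R + grad . dX + 0.5 dX^T H dX,
    # obtained by solving H @ dX = -grad; the corrected feature point
    # position is then X + dX.
    return np.linalg.solve(np.asarray(hessian, dtype=float),
                           -np.asarray(grad, dtype=float))
```

For a well-conditioned Hessian this moves the discrete extremum toward the continuous extremum of the fitted surface.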
The method for extracting the characteristic points of the three-dimensional point cloud data is applied to point cloud data registration, and specifically comprises the following steps:
s01: extracting feature points of input data to be registered by adopting the extraction method;
s02: carrying out point cloud registration on the obtained characteristic points by using a registration algorithm, wherein the obtained registration result is the optimal rigid body transformation parameter;
s03: and applying the optimal rigid body transformation parameters of the feature point data registration result to the original point cloud data, and performing rigid body transformation on the original point cloud data to output point cloud data, namely the registered data.
Further, the registration algorithm in S02 is a Super-4PCS algorithm, the rigid body transformation parameter is in the form of a quaternion representation, and the solution method of the optimal rigid body transformation parameter is a unit quaternion method.
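Step S03 applies the transform estimated on the feature points to the full original cloud. A sketch assuming the optimal rigid body transform is given as a unit quaternion (w, x, y, z) plus a translation vector; the helper names are illustrative and the Super-4PCS estimation itself is not reproduced here:

```python
import numpy as np

def quat_to_matrix(q):
    # Rotation matrix from a unit quaternion (w, x, y, z).
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def apply_rigid_transform(points, q, t):
    # Rigid body transform of an N x 3 cloud: rotate by the unit
    # quaternion q, then translate by t.
    return np.asarray(points, dtype=float) @ quat_to_matrix(q).T + np.asarray(t, dtype=float)
```

Because the transform is rigid, applying the parameters estimated on the (much smaller) feature point set to the original cloud preserves the registration result while avoiding the cost of registering the full data.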
Compared with the prior art, the invention has the beneficial effects that:
the method for extracting the three-dimensional point cloud data feature points has reasonable number of the extracted feature points, does not generate too many points to lose the significance of extracting and simplifying the point cloud, does not generate too few points, and avoids the condition that the feature points cannot participate in registration due to too few number of the feature points; the extracted feature points are accurate in position and have high feature representativeness, and feature points cannot be generated on a flat area (plane) or omitted in feature areas such as corner points and the like; the method solves the problems of disordered positions, too small quantity and poor stability of the conventional method for calculating the feature points based on local normal vectors of point clouds or transferring the two-dimensional image to the three-dimensional point cloud to extract the feature points;
when the feature points extracted by the feature point extraction method are applied to point cloud registration, the consumption of point cloud registration time is obviously reduced, and after finally generated optimal rigid body transformation parameters are applied to original point cloud data, the goodness of fit of the point cloud is higher, the registration accuracy is improved, and the registration error is reduced.
Drawings
FIG. 1 is a schematic diagram of a point cloud data partitioning unit grid according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of positions of pixel points in a three-dimensional point cloud model according to an embodiment of the present invention;
fig. 3 is a graph of the sparsity of grid points relative to the original points for different grid side lengths according to an embodiment of the present invention, where the grid side length is 0.1 m in (a) and 0.0001 m in (b);
FIG. 4 is a comparison graph of the position relationship between the discrete extremum points and the true extremum of the fitting function according to the embodiment of the present invention;
FIG. 5 is a flowchart of a method for extracting feature points of three-dimensional point cloud data according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating feature point optimization according to an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating an application of the method for extracting feature points of three-dimensional point cloud data according to the embodiment of the present invention, where (c) shows the appearance of the original point cloud and (d) shows the distribution of the feature points extracted with the method.
Detailed Description
The following examples illustrate particular embodiments of the invention and should not be construed as limiting its scope. Modifications of materials, methods and conditions may be made, and all such modifications are intended to fall within the spirit and scope of the invention.
As shown in fig. 5, the method for extracting feature points from three-dimensional point cloud data according to the present application migrates a feature point extraction algorithm for two-dimensional images from 2D to 3D: instead of migrating from the image gray value to the point cloud normal vector, it migrates to the point cloud density value. The method specifically includes the following steps:
s1: rasterizing point cloud data to be subjected to feature point extraction to obtain raster data, and cutting and dividing raster data generated according to original point cloud data to obtain a plurality of continuously distributed unit grids, as shown in fig. 1;
in this embodiment, the specific process of obtaining the raster data in S1 includes: traversing the input point cloud data to obtain the maximum and minimum values in the three dimensions X, Y and Z, and constructing from these six extreme values a large cube VecDensity that just accommodates all points of the point cloud data, the edges of the large cube passing through the outermost points of the original input point cloud data;
the obtained grid data are divided, according to a manually set unit grid side length, into a plurality of small cubic unit grids of equal length, width and height. The grid side length "interval" is a constant parameter that directly determines the number of points finally retained and the sparsity of their distribution; it is usually set to 2 to 3 times the average point spacing.
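The bounding cube and unit grid numbering described above can be sketched as follows; the function and variable names are illustrative, with "interval" the user-set grid side length:

```python
import numpy as np

def rasterize(points, interval):
    # Bounding cube of the cloud plus the (i, j, k) unit-grid index of
    # each point; 'interval' is typically 2-3x the average point spacing.
    points = np.asarray(points, dtype=float)
    pmin, pmax = points.min(axis=0), points.max(axis=0)
    idx = np.floor((points - pmin) / interval).astype(int)
    dims = np.floor((pmax - pmin) / interval).astype(int) + 1
    return pmin, dims, idx
```

Each point's index triple identifies the small cube it falls into, with the corner cell at pmin numbered (0,0,0).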
S2: analyzing and calculating the point density value of the original point cloud data contained in the unit grid according to the position of the original point;
in this embodiment, the specific step of calculating the density value of the unit point cloud in S2 includes:
s21: recording the unit grid at the lower-left corner of the large cube as the unit grid numbered (0,0,0);
s22: calculating the distance from each point in the original point cloud data to the center of the unit grid of number (0,0,0), and dividing each point into the unit grid to which each point belongs according to the multiple relation between the distance and the side length of the grid;
s23: weighting the points of the original point cloud data that fall in the same numbered unit grid, according to the distance between each point's position and the center of the unit grid to which it belongs, to obtain the point cloud density value of the unit grid:

D_i = Σ_{j=1}^{N} ω_j,  with  ω_j = exp(−‖X_j − X_i‖² / (2σ²)),

where N is the number of original points contained within the current grid, ω_j is the density weight, determined by a Gaussian distribution rule from the position of the original point relative to the grid center, X_j is the three-dimensional coordinate information of the currently calculated point in the original point cloud, X_i is the three-dimensional coordinate information of the current grid center, and σ is the Gaussian scale parameter.
If the density were assigned directly as the number of points in a unit grid, the position information of the original points would be lost; and because the three-dimensional information of the grid points generated later is taken from the grid center rather than the center of gravity, directly using the contained point count would lose features in places where the point positions are uneven but the counts happen to be equal. The invention instead weights the value according to the position information of the original points, which avoids these problems well: each point's contribution to the grid's density value is judged by its distance to the grid center.
Each small cubic unit grid within the cube VecDensity has a non-zero density value if it contains at least one point of the original point cloud; the density value of any small cubic unit grid that contains no original point is 0.
After the density value of each unit grid has been calculated through the above steps S21-S23, every unit grid with a non-zero density value can be replaced by a new point. Each such point carries its own three-dimensional position (the x, y, z values of the center of its unit grid) and the density value just calculated. The grid points are distributed with different sparsity depending on the setting of interval, as shown in fig. 3.
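Steps S21-S23 can be sketched as below, assuming the Gaussian weight ω_j = exp(−‖X_j − X_i‖² / (2σ²)); the exact weight formula in the patent figures is not reproduced, and σ and the function name are illustrative:

```python
import numpy as np

def grid_density(points, pmin, dims, interval, sigma=1.0):
    # Gaussian-weighted point density per unit grid: each original point
    # X_j contributes w_j = exp(-||X_j - X_i||^2 / (2 sigma^2)) to the
    # cell whose center X_i it falls in.
    points = np.asarray(points, dtype=float)
    pmin = np.asarray(pmin, dtype=float)
    density = np.zeros(dims)
    idx = np.floor((points - pmin) / interval).astype(int)
    centers = pmin + (idx + 0.5) * interval
    w = np.exp(-np.sum((points - centers) ** 2, axis=1) / (2 * sigma ** 2))
    np.add.at(density, tuple(idx.T), w)
    return density
```

`np.add.at` accumulates the weights unbuffered, so several points falling in the same cell are all counted.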
S3: carrying out filtering and noise reduction processing on the density grid with the density value not being 0;
one of the most likely problems for density grids is "holes", which arise for the following reasons: the original point cloud has the problems of light, shading, misoperation and the like, on a continuous plane with uniformly distributed points, the points are not generated at individual local positions, or the points at a certain small block are unevenly and disorderly distributed, and when the point cloud is subjected to density rasterization, the density grids generated at the positions are greatly different from surrounding areas and are mutation values, namely bugs in a dense net. However, such abrupt change regions are not the desired characteristic regions and need to be filtered.
Grid "holes" involve two types of noise, salt-and-pepper noise and Gaussian white noise; median filtering and Gaussian filtering are usually adopted to remove these two types of noise from the density grid respectively;
Salt-and-pepper noise points are irregularly distributed, randomly occurring noise points that do not need to be preserved. Mean filtering is unsuitable for this type of noise because it is influenced by the maximum and minimum values in the neighborhood: just as it blurs a two-dimensional image, averaging would distort the density grid, whereas median filtering has no such effect. After several passes of median filtering the density grid converges to a fixed, unchanging state; in the resulting new density grid the holes on smooth surfaces have been replaced by the neighborhood median, i.e. the salt-and-pepper noise points have been eliminated.
In two-dimensional image processing the median filtering template is a square or circular window; migrated to a three-dimensional grid it would be a cube or a sphere. A point cloud, however, is usually a scan of an object's surface, so only the density values at the original surface positions are non-zero. If a cube or sphere template were used directly, a 3x3x3 cube template might contain non-zero grid values only in its middle layer; after median filtering the originally non-zero value would become 0, which does not patch holes but creates larger ones, so that approach is unusable. The specific median filtering process adopted is therefore: perform planar median filtering on the xoy, xoz and yoz planes respectively, and perform a cube-template median filtering operation on the whole grid; the four filtering results are aggregated, and for any unit grid whose density value is changed in more than 2 of the results, the maximum of the four filtered values is taken as the final median filtering result;
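One reading of the four-result combination described above (three plane medians plus a cube median, keeping the maximum where more than two results alter a cell) can be sketched as follows; the combination rule is an interpretation of the text, not a verbatim specification:

```python
import numpy as np

def _median_over_offsets(grid, offsets):
    # Median over a neighborhood given as (di, dj, dk) index offsets,
    # with edge padding so border cells keep a full neighborhood.
    pad = max(max(abs(o) for o in off) for off in offsets)
    padded = np.pad(grid, pad, mode="edge")
    shifted = [padded[tuple(slice(pad + o, pad + o + s)
                            for o, s in zip(off, grid.shape))]
               for off in offsets]
    return np.median(shifted, axis=0)

def combined_median_filter(grid):
    # 3x3 plane medians in the xoy, xoz and yoz planes plus a 3x3x3 cube
    # median; where more than two of the four results alter a cell, the
    # maximum of the four results is kept, otherwise the cell is unchanged.
    r = range(-1, 2)
    neighborhoods = [
        [(i, j, 0) for i in r for j in r],             # xoy plane
        [(i, 0, k) for i in r for k in r],             # xoz plane
        [(0, j, k) for j in r for k in r],             # yoz plane
        [(i, j, k) for i in r for j in r for k in r],  # cube template
    ]
    results = np.stack([_median_over_offsets(grid, n) for n in neighborhoods])
    changed = (results != grid).sum(axis=0)
    return np.where(changed > 2, results.max(axis=0), grid)
```

An isolated spike surrounded by zeros (salt noise) is altered by all four medians and is therefore replaced, while cells the filters agree on are left alone.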
After median filtering, density value noise points still exist. These are not produced during point cloud acquisition but during the density rasterization stage: since the point cloud is not aligned with the positive x or y direction of the world geographic coordinate system, the distance-based weighting from original point to grid center during the density calculation is affected, and on planes where the original point cloud is uniformly distributed, the density grid develops inclined stripes or spots whose density values change abruptly relative to their surroundings. If left unprocessed, these stripes or spots would be treated as feature points when the feature values are later calculated. Gaussian filtering removes such Gaussian noise effectively: the density value at the template center is replaced by the weighted average density value of the neighborhood cells determined by the Gaussian template, which smooths and denoises the density grid. We therefore apply Gaussian filtering to the density grid after the several passes of median filtering. The specific process of Gaussian filtering includes: perform Gaussian filtering on the xoy, xoz and yoz planes and on the whole grid respectively; the four filtering results are aggregated, and for any unit grid whose density value is changed in more than 2 of the results, the maximum of the four filtered values is taken as the final Gaussian filtering result;
the specific formula used for each Gaussian filtering result is VGD_μov(μ,v) = VD_μov(μ,v) * G(x,y), where G(x,y) is the Gaussian filter function

G(x,y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)),

VGD is the result of Gaussian filtering the density grid VD, μ and v denote the two coordinate axes of the selected plane, and (μ,v) are the coordinates of the unit grid currently being calculated in that coordinate system.
After Gaussian filtering, stripe-like noise still exists, but because of the blurring effect of Gaussian filtering, when the value of σ is chosen reasonably large the abrupt changes become very small. At the same time, a true corner point (feature point) is abrupt by its own nature, so even when blurred by Gaussian filtering it remains very different from its surrounding area. In the final threshold-filtering stage of the feature points, the stripes can therefore be ignored, eliminating the influence of the noise.
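The per-plane Gaussian smoothing can be sketched with an explicit 3x3 kernel built from G(x,y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)), normalized so a constant region is preserved; the kernel size and function names are illustrative:

```python
import numpy as np

def gaussian_kernel_2d(size=3, sigma=1.0):
    # Normalized 2D Gaussian kernel sampled on an integer grid; the
    # normalization replaces the 1/(2 pi sigma^2) factor so the
    # weights sum to exactly 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def gaussian_filter_plane(plane, sigma=1.0):
    # 3x3 Gaussian smoothing of one grid plane (edge-padded convolution):
    # each cell becomes the Gaussian-weighted average of its neighborhood.
    k = gaussian_kernel_2d(3, sigma)
    padded = np.pad(plane, 1, mode="edge")
    out = np.zeros(plane.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * padded[i:i + plane.shape[0], j:j + plane.shape[1]]
    return out
```

Running this over each xoy, xoz and yoz slice (and a 3D analogue over the whole grid) yields the four results that the scheme above aggregates.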
S4: calculating the density value gradient, the second derivative of the density value and the covariance matrix of the density value of the unit grid according to the density value of the grid;
In a two-dimensional image, the elements I_x and I_y of the covariance matrix are the partial derivatives of the image gray value I(x,y) in the x and y directions respectively. In the three-dimensional case, when the density value of the three-dimensional point cloud replaces the image gray value of the two-dimensional method, each element of the covariance matrix changes;
a gray-value variation metric function is constructed:

C(x,y,z; Δx,Δy,Δz) = Σ_{u,v,l} w(u,v,l) (d(u,v,l) − d(u+Δx, v+Δy, l+Δz))²,

where w is the window function over the computation range on the gray value image and d is the gray value of a pixel;
a first-order Taylor expansion of the above gives

d(u+Δx, v+Δy, l+Δz) ≈ d(u,v,l) + d_x(u,v,l)Δx + d_y(u,v,l)Δy + d_z(u,v,l)Δz,

so that

C(x,y,z; Δx,Δy,Δz) ≈ Σ (I_x(u,v,l)Δx + I_y(u,v,l)Δy + I_z(u,v,l)Δz)² = [Δx, Δy, Δz] M(x,y,z) [Δx, Δy, Δz]^T,

where the covariance matrix M(x,y,z) of gray values in the three-dimensional image is

                        | I_x²     I_x I_y  I_x I_z |
M(x,y,z) = Σ w(u,v,l) · | I_x I_y  I_y²     I_y I_z |
                        | I_x I_z  I_y I_z  I_z²    |

In three-dimensional point cloud data the covariance matrix of density values can likewise be used to judge feature points, where D_xx, D_xy, D_xz, D_yy, D_yz and D_zz are the second derivatives of the unit grid density value, calculated as follows:
calculating the density value gradient of the unit grid by adopting a weighted five-point difference quotient method, wherein the specific calculation formula is as follows:
Figure BDA0002541770470000072
wherein D (x) is a gateThe grid density value adopts a weighted five-point difference quotient formula to replace a two-point method central difference quotient method, and the weighted difference value of adjacent points near the point to be solved is combined to obtain a gradient value approaching the density value;
according to the density value gradient of a plurality of continuous unit grids obtained by calculation, further using weighted five-point difference quotient formula
Figure BDA0002541770470000073
Calculating the second derivative of the density value, and then calculating Dxy、Dxz、Dyy、Dyz、Dzz(ii) a Substituting a weighted five-point difference quotient such as a formula for a two-point method central difference quotient method to obtain a second-order derivative value which approaches to the density value by combining weighted difference values of adjacent points near the point to be solved;
constructing a covariance matrix from the calculated second derivatives of the density values:
M_D = [[D_xx, D_xy, D_xz], [D_xy, D_yy, D_yz], [D_xz, D_yz, D_zz]]
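A sketch of the weighted five-point difference quotients described here, assuming the standard fourth-order central stencils (weights −1, 8, −8, 1 over 12h for the gradient; −1, 16, −30, 16, −1 over 12h² for the second derivative); the patent's own weights appear only in its equation figures, so these exact coefficients are an assumption:

```python
import numpy as np

def five_point_gradient(d, h=1.0, axis=0):
    """Five-point central difference quotient for the first derivative:
    D'(x) ~ (-D(x+2h) + 8 D(x+h) - 8 D(x-h) + D(x-2h)) / (12 h).
    The grid is edge-padded so the stencil stays defined at the borders."""
    dp = np.pad(d, 2, mode="edge")
    shift = lambda k: np.roll(dp, -k, axis=axis)   # value at x + k*h
    g = (-shift(2) + 8 * shift(1) - 8 * shift(-1) + shift(-2)) / (12.0 * h)
    core = tuple(slice(2, -2) for _ in range(d.ndim))
    return g[core]

def five_point_second(d, h=1.0, axis=0):
    """Five-point stencil for the second derivative:
    D''(x) ~ (-D(x+2h) + 16 D(x+h) - 30 D(x) + 16 D(x-h) - D(x-2h)) / (12 h^2)."""
    dp = np.pad(d, 2, mode="edge")
    shift = lambda k: np.roll(dp, -k, axis=axis)
    s = (-shift(2) + 16 * shift(1) - 30 * shift(0) + 16 * shift(-1) - shift(-2)) / (12.0 * h * h)
    core = tuple(slice(2, -2) for _ in range(d.ndim))
    return s[core]
```

Mixed second derivatives such as D_xy follow by applying `five_point_gradient` along one axis and then along the other.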
S5: constructing a characteristic point measurement function according to the density value covariance matrix of the unit grid, and calculating a characteristic point measurement function value;
in this embodiment, the density value covariance matrix reflects the correlation between variables: its eigenvectors give the directions of abrupt change of the variable values, and its eigenvalues give the degree of that change. In general, the eigenvalues of the covariance matrix reflect abrupt changes of the image gray values. All eigenvalues of the covariance matrix of each pixel are compared, and the comparison result finally decides whether the current point counts as a feature point. The eigenvalues and eigenvectors of the covariance matrix of each grid point are calculated, and the relationship among the three eigenvalues λ1, λ2, λ3 is compared. Case one: one of the three eigenvalues is large and the other two are small, i.e. λ1 >> λ2 ≈ λ3; the current point lies on a plane of the three-dimensional point cloud model and cannot serve as a feature point. Case two: two of the three eigenvalues are large and one is small, i.e. λ1 ≈ λ2 >> λ3; the current point lies on an edge of the three-dimensional point cloud model, and unless otherwise specified, such edge points can be taken as feature points. Case three: all three eigenvalues are large, i.e. λ1 ≈ λ2 ≈ λ3 > 0; the current point lies at a corner of the three-dimensional point cloud model, where the density value changes abruptly in every direction, and all such points can be taken as feature points, as shown in fig. 2.
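The three-case decision rule can be sketched as follows; the ratio thresholds `big` and `small` are illustrative values introduced here, not taken from the patent:

```python
import numpy as np

def classify_by_eigenvalues(M, big=10.0, small=1e-3):
    """Classify a 3x3 density-covariance matrix by its eigenvalue pattern:
    one dominant eigenvalue -> 'plane', two -> 'edge', three comparable and
    large -> 'corner'. `big` is the ratio taken to mean '>>'."""
    lam = np.sort(np.linalg.eigvalsh(M))[::-1]     # lam[0] >= lam[1] >= lam[2]
    if lam[0] < small:
        return "flat"                              # no variation at all
    if lam[0] > big * max(lam[1], small):
        return "plane"                             # l1 >> l2 ~ l3
    if lam[1] > big * max(lam[2], small):
        return "edge"                              # l1 ~ l2 >> l3
    return "corner"                                # l1 ~ l2 ~ l3 > 0
```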
However, computing the eigenvalues and eigenvectors of the density value covariance matrix of every grid point is time- and labor-consuming. Instead, a feature point measurement function is constructed, and feature points are obtained by comparing its value against a threshold.
The step of constructing the feature point measurement function includes: the determinant of the density value covariance matrix is recorded as det(M_D) and the diagonal trace of the matrix as trace(M_D), and the measurement function R_D = det(M_D) − α · (trace(M_D))³ is constructed, where α is an empirical constant, typically between 0.01 and 0.05; cubing the trace ensures that, under the α weighting, it remains roughly the same order of magnitude as the determinant value.
The obtained function value R_D is compared with an empirical constant Threshold; if R_D at a grid point is greater than Threshold, that grid point is regarded as a feature point. Threshold is a value entered manually while the program is running.
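A minimal sketch of the measurement function and the threshold test, using α = 0.01 from the low end of the stated 0.01–0.05 range (for a positive semi-definite 3×3 matrix, det(M) ≤ (trace(M)/3)³, so α above 1/27 ≈ 0.037 can never yield a positive response):

```python
import numpy as np

def harris_metric(M, alpha=0.01):
    """Feature point metric R_D = det(M_D) - alpha * trace(M_D)**3."""
    return np.linalg.det(M) - alpha * np.trace(M) ** 3

def is_feature_point(M, threshold, alpha=0.01):
    """Threshold test: the cell counts as a feature point when R_D > Threshold."""
    return harris_metric(M, alpha) > threshold
```

For the identity matrix this gives 1 − 0.01·27 = 0.73, while a plane-like matrix with one dominant eigenvalue produces a strongly negative response and is rejected.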
S6: and comparing and judging the calculated characteristic point measurement function value with a threshold value, and replacing the unit grid meeting the judgment condition by a grid point, wherein the grid point is the characteristic point.
In this embodiment, in S6, the unit cell data is format-converted, and the center point of each unit cell replaces all the original points contained in that cell; in the format conversion process, unit cells whose density value is 0 are discarded. The position of a cell's center point is determined by the boundaries of the big cube and the number assigned to the unit cell in S2.
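Recovering a cell's center point from the big cube boundary and the cell number (S2 places cell (0, 0, 0) at the minimum corner of the big cube) can be sketched as:

```python
def cell_center(bbox_min, cell_size, index):
    """Center of unit cell (i, j, k): cells are numbered from the minimum
    corner of the bounding cube, so the center of cell i along an axis
    sits at bbox_min + (i + 0.5) * cell_size."""
    return tuple(m + (i + 0.5) * cell_size for m, i in zip(bbox_min, index))
```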
After the feature point extraction of S6, some points carry biased information caused by noise; the calculation process cannot automatically identify which values are unreliable, so one or a few noise points may produce a number of inaccurate or even wrong result points. For these points we cannot tell at which step the error arose, and the erroneous source data cannot be corrected, so once such points are produced, recalculating along the original path cannot yield more accurate positions; feature point optimization is therefore required. An existing optimization method proceeds as follows: a) calculate the sum NNT of the normal-vector covariance matrices of the points in the neighborhood, and its inverse NNTInv; b) multiply the normal-vector covariance matrix of each point by that point's three-dimensional coordinates and sum, recorded as NNTp; c) judge and adjust: when NNT is invertible, i.e. NNTInv exists as a real matrix, optimize the position of the current point; d) iterate: for each current point, when the distance ‖X_i^{(k+1)} − X_i^{(k)}‖ between the new coordinate X_i^{(k+1)} and the previously updated coordinate X_i^{(k)} is smaller than a given threshold, or the iteration count reaches its maximum upper limit, terminate the loop; otherwise repeat steps a) to c). This optimization process depends heavily on the choice of neighborhood range; different neighborhood points can produce distinctly different optimization results.
For discretely distributed data points, the maxima detected among the discrete points are not necessarily the true maxima of the function fitted to those points, as shown in fig. 4. Feature points are selected according to the feature point metric formula and threshold judgment: the larger the metric value, the more representative the point's features. Thus, for discretely distributed points, the point that best represents a local feature is not necessarily located at an existing point; rather, a point is judged to represent the local feature better when its metric approaches the actual maximum of the locally fitted function. As shown in fig. 6, the feature point position optimization of the present application comprises the following steps:
assuming the difference between the extreme point coordinates of the fitted function and the current discrete extreme point coordinates is ΔX = (Δx, Δy, Δz), the feature point metric at the actual extreme point should be expressed by the Taylor expansion
R(X + ΔX) ≈ R(X) + (∂R/∂X)^T · ΔX + (1/2) · ΔX^T · (∂²R/∂X²) · ΔX
wherein the first derivative vector ∂R/∂X of the density grid and the second-order partial derivative matrix ∂²R/∂X² were already obtained in the previous stage and can be used directly in this process.
When X + ΔX (hereinafter X̂) is exactly the extreme point, R(X̂) attains its maximum. Differentiating the Taylor expansion R(ΔX) and setting the derivative to 0 gives
∂R/∂X + (∂²R/∂X²) · ΔX = 0, i.e. ΔX = −(∂²R/∂X²)⁻¹ · (∂R/∂X)
and then X̂ = X + ΔX is the position-updated new coordinate point obtained by this calculation. Substituting ΔX back into the Taylor expansion R(ΔX) yields the new Harris feature point metric value at the new coordinate point:
R(X̂) = R(X) + (1/2) · (∂R/∂X)^T · ΔX
In the whole iterative process, each time X̂ is solved, ΔX is computed. When any element of (Δx, Δy, Δz) is greater than 0.5 grid units, the maximum point lies closer to the neighboring grid cell, so position optimization is restarted on that neighbor; otherwise optimization continues from the position X̂. Iteration continues until the distance between two successive estimates of X̂ is smaller than a given threshold or the number of iterations reaches the upper limit (typically set to 5), at which point iteration stops. During this process, if R(X̂) falls below the feature point metric threshold, iteration stops immediately and the most recent result is returned.
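The refinement loop can be sketched as follows, assuming the gradient g = ∂R/∂X and the Hessian H = ∂²R/∂X² of the metric at the current cell are available from the earlier stage:

```python
import numpy as np

def refine_step(g, H):
    """One Newton step toward the extremum of the locally fitted metric:
    solve H @ dX = -g. Returns (dX, moved), where moved is True when some
    |dX component| > 0.5 cell, i.e. the extremum lies nearer a neighbouring
    cell and refinement should restart there."""
    dX = np.linalg.solve(H, -g)
    return dX, bool(np.any(np.abs(dX) > 0.5))

def refined_metric(R0, g, dX):
    """Metric value at the refined position: R(X + dX) = R(X) + 0.5 * g.T @ dX."""
    return R0 + 0.5 * float(g @ dX)
```

On an exactly quadratic metric the single step lands on the true extremum; in general it is repeated up to the stated upper limit of about 5 iterations.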
The characteristic points extracted by the three-dimensional point cloud data characteristic point extraction method are applied to point cloud data registration, and the method specifically comprises the following steps:
s01: extracting feature points of input data to be registered by adopting the extraction method;
for the two point cloud sets P and Q awaiting initial registration, the improved feature point extraction proposed above is applied to point set P and point set Q respectively, which preserves the features of the original point cloud sets while greatly reducing the point cloud data volume. The distribution of the extracted feature points is illustrated in fig. 7. The advantage of applying this feature point extraction algorithm is that the obtained feature point clouds, on the one hand, retain the distribution pattern and features of the original point clouds to the greatest extent and, on the other hand, guarantee that the number of extracted feature points is sufficient for registration; the resulting feature point sets are recorded as P' and Q'.
Replacing the original point cloud sets P and Q with the feature point sets P' and Q' markedly reduces the number of points participating in registration. Compared with other three-dimensional point cloud feature point algorithms, this algorithm better retains corner information, and the retained points are more densely distributed, which satisfies the Super-4PCS algorithm's requirement on point count when searching for congruent four-point bases; the search for a four-point congruent set no longer needs to traverse the entire original point cloud, reducing time consumption. Meanwhile, because the point counts of the source and target clouds are greatly reduced, the four-point bases found tend to be distributed across different wall surfaces of the point cloud; the probability of locally confined four-point bases drops sharply, the influence of repeated similar planes on the registration result is largely avoided, and registration accuracy improves. The efficiency of the Super4PCS algorithm in searching for corresponding point pairs is thus improved.
S02: carrying out point cloud registration on the obtained characteristic points by using a registration algorithm, wherein the obtained registration result is the optimal rigid body transformation parameter;
calculating rigid body transformation parameters for the corresponding point pairs found in P' and Q': each corresponding four-point congruent set between the source point set Q' and the target point set P' yields one rigid body transformation result R_i; each R_i is applied to the whole point sets P' and Q', and the root mean square error of the Euclidean distances between corresponding points of the two sets is then computed to measure the quality of the rigid body transformation:
RMSE = sqrt( (1/N) · Σ_{i=1}^{N} ‖X_{P'_i} − R_i(X_{Q'_i})‖² )
wherein N is the number of corresponding points and X denotes the three-dimensional coordinates of a point. The rigid body transformation R_0 with the minimum RMSE value is the optimal transformation parameter.
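Selecting R_i by minimum RMSE can be sketched in plain NumPy; the Super-4PCS four-point-base search itself is assumed here to have already produced the candidate (rotation, translation) pairs:

```python
import numpy as np

def rmse(A, B):
    """Root mean square Euclidean distance between corresponding rows."""
    return float(np.sqrt(np.mean(np.sum((A - B) ** 2, axis=1))))

def best_rigid_transform(P, Q, candidates):
    """candidates: iterable of (R, t) pairs (3x3 rotation, translation vector).
    Apply each to the source cloud Q and keep the pair with minimum RMSE
    against the target cloud P (clouds assumed to be in correspondence)."""
    return min(candidates, key=lambda Rt: rmse(P, Q @ Rt[0].T + Rt[1]))
```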
S03: and applying the optimal rigid body transformation parameters of the feature point data registration result to the original point cloud data, and performing rigid body transformation on the original point cloud data to output point cloud data, namely the registered data.
The optimal transformation parameter R_0 is applied to the original point cloud sets P and Q; the rigidly transformed source point cloud is recorded as Q'', and the output Q'' is the registration result to be obtained. Meanwhile, a new error evaluation formula is constructed:
E = (1/N) · Σ_{i=1}^{N} ‖X_{P_i} − X_{Q''_i}‖
wherein N is the number of points in point set P, and X_{P_i} and X_{Q''_i} denote the three-dimensional coordinates of the closest corresponding points in the target point set and the source point set, respectively.
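The closest-point error E can be sketched with a brute-force nearest-neighbour search; for real point cloud sizes a KD-tree (e.g. `scipy.spatial.cKDTree`) would replace the O(N·M) distance matrix:

```python
import numpy as np

def registration_error(P, Q2):
    """Mean Euclidean distance from each point of the target cloud P to its
    nearest neighbour in the transformed source cloud Q''."""
    d = np.linalg.norm(P[:, None, :] - Q2[None, :, :], axis=2)  # (N, M) distances
    return float(d.min(axis=1).mean())
```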
In conclusion, the three-dimensional point cloud data feature points extracted by the feature point extraction method have the advantages of being suitable for registration in number, high in position accuracy and high in feature representativeness; the characteristic points extracted by the extraction method are used for registration, so that the registration efficiency and precision can be improved.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by the present specification, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A three-dimensional point cloud data feature point extraction method is characterized by comprising the following steps:
s1: rasterizing point cloud data to be subjected to feature point extraction to obtain raster data, and cutting and dividing raster data generated according to original point cloud data to obtain a plurality of continuously distributed unit grids;
s2: analyzing and calculating the point density value of the original point cloud data contained in the unit grid according to the position of the original point;
s3: carrying out filtering and noise reduction processing on the density grid with the density value not being 0;
s4: calculating the density value gradient, the second derivative of the density value and the covariance matrix of the density value of the unit grid according to the density value of the grid;
s5: constructing a characteristic point measurement function according to the density value covariance matrix of the unit grid, and calculating a characteristic point measurement function value;
s6: and comparing and judging the calculated characteristic point measurement function value with a threshold value, and replacing the unit grid meeting the judgment condition by a grid point, wherein the grid point is the characteristic point.
2. The method of claim 1, wherein the specific process of obtaining the grid data in S1 includes: traversing the input point cloud data to obtain the maximum and minimum values in the three dimensions X, Y and Z, and constructing, from these six extreme values, a big cube that can just accommodate all points of the point cloud data, the edges of the big cube lying at the outermost boundary points of the original input point cloud data;
the obtained grid data are divided according to the unit grid length set by people, and are divided into a plurality of small cubic unit grids with equal length, width and height.
3. The method of claim 2, wherein the step of calculating the density value of the unit point cloud in S2 comprises:
s21: recording the unit grid at the leftmost lower corner of the big cube as a unit grid of number (0,0, 0);
s22: calculating the distance from each point in the original point cloud data to the center of the unit grid of number (0,0,0), and dividing each point into the unit grid to which each point belongs according to the multiple relation between the distance and the side length of the grid;
s23: for all points of the original point cloud data within a unit grid of the same number, weighting according to the distance between each point's position and the center of the unit grid to which it belongs, obtaining the point cloud density value of the unit grid
D_i = Σ_{j=1}^{N} ω_j, with ω_j = exp(−‖X_j − X_i‖² / (2σ²))
wherein N is the number of original points contained in the current grid, ω_j is the density weighting weight following a Gaussian distribution rule according to the positional relation between the original point and the grid center, X_j is the three-dimensional coordinate information of the currently calculated point in the original point cloud, and X_i is the three-dimensional coordinate information of the current grid center.
4. The method for extracting the feature points of the three-dimensional point cloud data according to claim 1, wherein the filtering methods adopted in S3 include median filtering and gaussian filtering, which are respectively used for removing salt and pepper noise and gaussian white noise in the density grid;
the specific process of median filtering includes: performing planar median filtering on the xoy, xoz and yoz planes respectively, performing cube-template median filtering on the whole grid, combining the four filtering results, and, for unit grid density values changed in more than 2 of the results, taking the maximum of the four filtering results as the final median filtering result;
the specific process of Gaussian filtering includes: performing Gaussian filtering on the xoy, xoz and yoz planes and on the whole grid respectively, combining the four filtering results, and, for unit grid density values changed in more than 2 of the results, taking the maximum of the four filtering results as the final Gaussian filtering result;
the specific calculation formula for the four Gaussian filtering results is VGD(μ, v) = VD(μ, v) ∗ G(x, y), wherein G(x, y) is the Gaussian filter function
G(x, y) = (1 / (2πσ²)) · e^{−(x² + y²) / (2σ²)}
VGD is the Gaussian-filtered result of the density grid VD, μ and v denote the two coordinate axes of the selected plane, and (μ, v) are the coordinate values of the currently calculated unit grid in that coordinate system.
5. The method for extracting the feature points of the three-dimensional point cloud data of claim 1, wherein in S4 a weighted five-point difference quotient method is used to calculate the density value gradient of the unit grid, with the specific formula:
D_x(x) ≈ (−D(x + 2h) + 8D(x + h) − 8D(x − h) + D(x − 2h)) / (12h)
wherein D(x) is the grid density value and h is the cell spacing;
according to the grid density value gradient obtained by calculation, the weighted five-point difference quotient formula
D_xx(x) ≈ (−D(x + 2h) + 16D(x + h) − 30D(x) + 16D(x − h) − D(x − 2h)) / (12h²)
is further used to calculate the second derivative of the density value, and then D_xy, D_xz, D_yy, D_yz, D_zz are calculated;
constructing a covariance matrix from the calculated second derivatives of the density values:
M_D = [[D_xx, D_xy, D_xz], [D_xy, D_yy, D_yz], [D_xz, D_yz, D_zz]]
6. The method of claim 1, wherein the step of constructing the feature point measurement function in S5 comprises: recording the determinant of the density value covariance matrix as det(M_D) and the diagonal trace of the matrix as trace(M_D), and constructing the measurement function R_D = det(M_D) − α · (trace(M_D))³, wherein α is an empirical constant.
7. The method of claim 1, wherein in step S6, the unit cell data is converted into format, and the central point of each unit cell is used to replace all original points contained in the unit cell; in the format conversion process, the unit grid with the unit grid density value of 0 is discarded.
8. The method for extracting feature points of three-dimensional point cloud data according to claim 1, further comprising feature point location optimization after S6, wherein the feature point location optimization specifically comprises: and carrying out Taylor formula expansion on the characteristic point measurement function, carrying out interpolation and function curve fitting on the expansion formula, obtaining an extreme point of the fitting function, namely a new coordinate point for position updating, and carrying out position correction.
9. The method for extracting the three-dimensional point cloud data feature points according to claim 1 is applied to point cloud data registration, and is characterized by comprising the following steps:
s01, extracting feature points of the input data to be registered by the extraction method of claim 1;
s02: carrying out point cloud registration on the obtained characteristic points by using a registration algorithm, wherein the obtained registration result is the optimal rigid body transformation parameter;
s03: and applying the optimal rigid body transformation parameters of the feature point data registration result to the original point cloud data, and performing rigid body transformation on the original point cloud data to output point cloud data, namely the registered data.
10. The application of claim 9, wherein the registration algorithm in S02 is a Super-4PCS algorithm, the rigid body transformation parameter form is a quaternion representation, and the solution method of the optimal rigid body transformation parameter is a unit quaternion method.
CN202010548910.9A 2020-06-16 2020-06-16 Three-dimensional point cloud data feature point extraction method and application Pending CN111710023A (en)


Publications (1)

Publication Number Publication Date
CN111710023A true CN111710023A (en) 2020-09-25





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination