CN107818598B - Three-dimensional point cloud map fusion method based on visual correction - Google Patents


Info

Publication number
CN107818598B
CN107818598B (application CN201710989642.2A)
Authority
CN
China
Prior art keywords
point
point cloud
points
cloud map
neighborhood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710989642.2A
Other languages
Chinese (zh)
Other versions
CN107818598A (en)
Inventor
朱光明
张亮
沈沛意
宋娟
张笑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Innovation Institute of Xidian University
Original Assignee
Kunshan Innovation Institute of Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Innovation Institute of Xidian University filed Critical Kunshan Innovation Institute of Xidian University
Priority to CN201710989642.2A
Publication of CN107818598A
Application granted
Publication of CN107818598B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention discloses a three-dimensional point cloud map fusion method based on visual correction, which comprises the following steps: 1) processing the two point cloud maps to be fused; 2) extracting 3D-SIFT key points from the two three-dimensional point cloud maps; 3) extracting IPFH features at the 3D-SIFT key points; 4) searching for feature matching points by calculating Euclidean distances between feature points; 5) calculating a transformation matrix and rotating one of the point cloud maps; 6) fusing the two point cloud maps together with the ICP algorithm. The method extends SIFT features to three-dimensional point clouds and extracts 3D-SIFT key points, which keeps the features robust to viewing-angle rotation and transformation. By extracting IPFH features, the incorrect weight coefficients of the original FPFH features are corrected, and the feature of a three-dimensional point is described by combining the geometric characteristics of its neighborhood points, which greatly improves the stability of the algorithm. With this processing, the method can fuse two three-dimensional point cloud maps with a large viewing-angle difference.

Description

Three-dimensional point cloud map fusion method based on visual correction
Technical Field
The invention relates to the field of computer vision, in particular to three-dimensional point cloud map fusion, and provides a method based on visual correction that can be used to solve the problem of point cloud map fusion failing because the viewing-angle difference is too large.
Background
With the rapid development of stereo cameras, three-dimensional point cloud data has been widely applied in the field of computer vision, for example to robot map construction, navigation, object recognition, and tracking. When a robot works in a wide outdoor environment or a complex indoor environment, SLAM map construction can in principle support autonomous navigation, but the complexity and variety of the environment cause various problems during mapping, for example navigation failing because the map construction process takes too long. In such cases, several robots are needed to build the map together and then stitch their individual maps into one complete map; sharing the map in this way improves the efficiency of autonomous navigation.
Point cloud map fusion means fusing point cloud maps collected from different viewing angles and different positions into a single map through rigid-body transformation. Besl, P. J. & McKay, N. D. (1992), "A method for registration of 3-D shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 239-256, proposed the classical ICP (Iterative Closest Point) algorithm. The algorithm obtains a good fusion result only when an accurate initial value is provided.
In order to compute an accurate initial value, many researchers estimate it by finding feature matches. Klasing et al. used optimized-averaging methods to estimate normal vectors, such as PlanePCA, PlaneSVD and QuadSVD. However, a point cloud normal vector carries little data information, and a change of the point cloud map viewing angle affects the computed normals, which ultimately degrades the fusion of the point cloud maps. Gelfand proposed a method based on extracting a point cloud volume feature, but the method cannot adapt to viewing-angle changes of the point cloud map. Zou computed the initial value by extracting the Gaussian curvature of the point cloud, but the Gaussian curvature of the same object changes with the viewing angle of the point cloud map, which can produce wrong feature matches and thus make the point cloud map fusion fail. Rusu et al. proposed extracting point cloud FPFH (Fast Point Feature Histogram) features, but this method suffers from weight-coefficient overflow and feature-matching errors. The RANSAC algorithm, originally proposed by Fischler et al., is another fusion approach that fits a mathematical model by random sample consensus; the algorithm is stable, but its complexity can reach O(n³) in the worst case, and its running time becomes unacceptable when the point cloud is large.
Disclosure of Invention
The invention aims to provide a three-dimensional point cloud map fusion method based on visual correction with high fusion accuracy and high efficiency, in order to solve the problems raised in the background above.
In order to achieve the purpose, the invention provides the following technical scheme:
a three-dimensional point cloud map fusion method based on visual correction comprises the following steps:
1) preprocessing the three-dimensional point cloud maps: first input two three-dimensional point cloud map files in pcd format to obtain the two point cloud maps to be fused, namely the target point cloud map point_tgt and the point cloud map to be fused point_src, and then preprocess the two point cloud maps;
2) for the two point cloud maps in step 1), extracting the 3D-SIFT key points of the target point cloud map point_tgt and of the point cloud map to be fused point_src respectively;
3) extracting IPFH features at the 3D-SIFT key points of the two point cloud maps in step 2) respectively, obtaining the feature vector set P of the point cloud map to be fused point_src and the feature vector set Q of the target point cloud map point_tgt, wherein the feature vector P has m feature points and the feature vector Q has n feature points;
4) calculating Euclidean distances of feature points in the feature vector P and the feature vector Q, and finding out feature matching points according to the minimum distance;
5) calculating a rotation matrix from the feature matching points, and using this matrix to rotate the point cloud map point_tgt so as to change the viewing angle of the point cloud map and reduce the viewing-angle difference between the two point cloud maps;
6) fusing the two three-dimensional point cloud maps, whose viewing-angle difference has been reduced, with the ICP (Iterative Closest Point) fine fusion algorithm.
As a further scheme of the invention: in step 1), the preprocessing includes denoising and sampling.
As a still further scheme of the invention: in the step 2), the specific steps of extracting the 3D-SIFT key points are as follows:
I, detecting extreme points in the point cloud scale space: the point cloud map is downsampled to different degrees to construct a pyramid model, and a Gaussian scale space is constructed for each group of point cloud maps; the functions used are as follows
Scale space: L(x, y, z, σ) = G(x, y, z, kσ) * P(x, y, z);
Gaussian difference function: D(x, y, z, σ) = (G(x, y, z, kσ) − G(x, y, z, σ)) * P(x, y, z);
the parameter k is related to the number of groups in the pyramid, and σ is the scale;
constructing the Gaussian difference (DoG) space: D(x, y, z, k_i σ) = L(x, y, z, k_{i+1} σ) − L(x, y, z, k_i σ);
after the DoG space has been constructed, extreme points are found by comparison: a point is regarded as an extreme point when its value is greater than, or smaller than, the values of all points in its neighborhood;
II, determining the direction of the key point: the azimuth angle and the elevation angle of each point in the neighborhood of the extreme point are calculated with the following formulas (a numerical sketch of this step follows this list of steps):
d = √((x − x₀)² + (y − y₀)² + (z − z₀)²), θ = arctan((y − y₀) / (x − x₀)), φ = arcsin((z − z₀) / d);
in the formulas, d represents the distance between the extreme point and the neighborhood center point, θ represents the azimuth angle, φ represents the elevation angle, and (x₀, y₀, z₀) is the center point of the neighborhood in which the extreme point lies;
the directions given by the azimuth θ and the elevation φ of each point in the neighborhood are then counted: two histograms are used, one for each angle; the value range of θ is [0°, 360°] and it is counted with an 8-bin histogram in which each bin covers 45°; the value range of φ is [−90°, 90°] and it is counted with a 4-bin histogram in which each bin covers 45°;
after the statistics, the peak values of the two histograms are taken as the azimuth and elevation angles of the extreme point; after this computation every key point contains x, y, z coordinate information, scale information and direction information, i.e. (x, y, z, σ, θ, φ);
III, rotating the coordinate system of a key point p to the principal direction of that key point; since the principal direction (θ, φ) has already been computed, each point is rotated as
p_i′ = R(θ, φ) · p_i
where p_i is a point before rotation, p_i′ is the point after rotation, and R(θ, φ) is the rotation matrix determined by the principal direction;
the rotated points p_i′ are used to recompute the parameters mentioned above, including d, θ and φ; then the geometric relationship between the rotated key point p′ and each neighborhood point p_i′ is computed, namely the angle α between the vector p′p_i′ and the normal vector n at the key point p before rotation; it is calculated with the following formula, in which the numerator is the dot product of the two vectors and the denominator is the product of their lengths, so that a triple of data (θ, φ, α) is generated for the key point and every neighbor point in the neighborhood
α = arccos( (p′p_i′ · n) / (‖p′p_i′‖ · ‖n‖) );
IV, the three angles of all the triples are divided into bins to generate the feature descriptor.
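The orientation assignment of step II can be illustrated with a short numerical sketch. The following Python/NumPy fragment is a minimal illustration rather than the patent's implementation: it assumes the neighborhood is given as an (N, 3) array, takes the extreme point itself as the neighborhood center (x₀, y₀, z₀), and gives every point an unweighted vote in the 8-bin azimuth and 4-bin elevation histograms; the peak bins are returned as the principal direction.

    import numpy as np

    def keypoint_orientation(extreme_pt, neighborhood):
        # Offsets of every neighborhood point to the assumed center (the extreme point).
        d_xyz = neighborhood - extreme_pt
        dist = np.linalg.norm(d_xyz, axis=1)
        # Azimuth in [0, 360) degrees and elevation in [-90, 90] degrees.
        theta = np.degrees(np.arctan2(d_xyz[:, 1], d_xyz[:, 0])) % 360.0
        phi = np.degrees(np.arcsin(np.clip(d_xyz[:, 2] / np.maximum(dist, 1e-12), -1.0, 1.0)))
        # 8-bin histogram over theta (45 degrees per bin) and 4-bin histogram over phi.
        theta_hist, _ = np.histogram(theta, bins=8, range=(0.0, 360.0))
        phi_hist, _ = np.histogram(phi, bins=4, range=(-90.0, 90.0))
        # The peak bins (taken at their centers) give the principal azimuth and elevation.
        theta_main = (np.argmax(theta_hist) + 0.5) * 45.0
        phi_main = -90.0 + (np.argmax(phi_hist) + 0.5) * 45.0
        return theta_main, phi_main

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        nbrs = rng.normal(size=(50, 3)) + np.array([1.0, 0.5, 0.2])
        print(keypoint_orientation(np.zeros(3), nbrs))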
As a still further scheme of the invention: in step 3), the specific steps of extracting the IPFH characteristics in the region around the 3D-SIFT key point are as follows:
I, a k-neighborhood is determined with the 3D-SIFT key point p(x, y, z, σ, θ, φ) as its center, point pairs are constructed by connecting the points in the neighborhood pairwise, and the normal vectors of each point pair are calculated;
II, a local u-v-w coordinate system is constructed for every point pair, with the three coordinate axes defined as follows:
u = n_s, v = u × (p_t − p_s) / ‖p_t − p_s‖, w = u × v, where p_s and p_t are the two points of the pair and n_s is the normal vector at p_s;
four characteristic values are then calculated from the local coordinate system and the normal vectors, as shown below; the four values are called the SPFH (Simplified Point Feature Histogram; a sketch of this computation follows this list of steps)
f1 = v · n_t, f2 = ‖p_t − p_s‖, f3 = u · (p_t − p_s) / ‖p_t − p_s‖, f4 = arctan(w · n_t, u · n_t), where n_t is the normal vector at p_t;
III, a k-neighborhood is selected again in each of the two point cloud maps, and the IPFH of every three-dimensional point is synthesized from the SPFH of its neighboring points; the synthesis is as follows:
[synthesis formula shown as an image in the original: the SPFH of the query point p is combined with the SPFH values of its k neighboring points, weighted using ω₀ and ω_i defined below]
ω₀ refers to the mean of the distances between the query point p and its neighborhood points, and ω_i represents the distance between the query point p and the neighboring point p_i in the neighborhood space;
IV, the values of f1, f3 and f4 are counted for every point, each value range is divided into 11 intervals, and the IPFH is described with 33-dimensional data information.
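The point-pair construction of steps II and III can be sketched as follows. The u-v-w frame and the SPFH values reproduced above follow the standard PFH/FPFH construction, so the fragment below should be read as a sketch of that standard construction under those assumptions rather than a verbatim transcription of the patent's image-only formulas; the function name is illustrative.

    import numpy as np

    def spfh_pair_features(p_s, n_s, p_t, n_t):
        # Darboux frame built on the source point of the pair.
        d_vec = p_t - p_s
        d = np.linalg.norm(d_vec)
        d_hat = d_vec / d
        u = n_s / np.linalg.norm(n_s)
        v = np.cross(u, d_hat)
        v /= np.linalg.norm(v)
        w = np.cross(u, v)
        # The four pair features f1..f4 (f2 is the pair distance).
        f1 = float(np.dot(v, n_t))
        f2 = float(d)
        f3 = float(np.dot(u, d_hat))
        f4 = float(np.arctan2(np.dot(w, n_t), np.dot(u, n_t)))
        return f1, f2, f3, f4

    if __name__ == "__main__":
        print(spfh_pair_features(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                                 np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, 0.0])))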
As a still further scheme of the invention: in step 4), IPFH (Internet protocol flash) feature matching is carried out, the corresponding relation is determined, the judgment standard is the distance between feature vectors, and the following matching strategy is adopted, namely, the point cloud map point is takensrcIPFH of a point A, point cloud map pointtgtFinding two points B and C with the nearest distance to the vector A, calculating by the following formula, if the formula condition is satisfied, selecting B as the matching point thereof
‖A − B‖ / ‖A − C‖ < t, where t is a fixed ratio threshold
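A minimal sketch of this matching strategy is given below. It assumes the descriptors are stored as NumPy arrays of shape (m, 33) and (n, 33) and uses a ratio threshold of 0.8; the threshold value is an assumption, since the patent shows the acceptance condition only as an image.

    import numpy as np

    def match_features(P, Q, ratio=0.8):
        # For every source descriptor, find the nearest (B) and second-nearest (C)
        # target descriptors and accept the match only if it is unambiguous.
        matches = []
        for i, a in enumerate(P):
            dists = np.linalg.norm(Q - a, axis=1)
            b, c = np.argsort(dists)[:2]
            if dists[b] < ratio * dists[c]:
                matches.append((i, int(b)))
        return matches

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        Q = rng.random((40, 33))
        P = Q[:10] + 0.01 * rng.normal(size=(10, 33))  # perturbed copies should match their originals
        print(match_features(P, Q))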
As a still further scheme of the invention: in step 5), after the characteristic matching points are found, calculating a conversion matrix by adopting a singular value decomposition method, and carrying out point cloud map point according to the conversion matrixsrcAnd rotating to reduce the visual angle difference between the two point cloud maps and finish the visual angle correction.
Compared with the prior art, the invention has the beneficial effects that:
1. The method extracts key points with 3D-SIFT, computes FPFH (Fast Point Feature Histogram)-type features at the key points, and finally computes an initial coarse estimate, thus providing an initial transformation matrix for the traditional point cloud registration method; this reduces the number of iterations of the registration algorithm, improves the success rate and accuracy of registration, and lowers the computational cost.
2. The method extends SIFT features to three-dimensional point clouds and extracts 3D-SIFT key points, which keeps the features robust to viewing-angle rotation and transformation; by extracting IPFH features, the incorrect weight coefficients of the original FPFH features are corrected, and the feature of a three-dimensional point is described by combining the geometric characteristics of its neighborhood points, which greatly improves the stability of the algorithm. With this processing, the method can fuse two three-dimensional point cloud maps with a large viewing-angle difference.
Drawings
Fig. 1 is a schematic flow chart of a three-dimensional point cloud map fusion method based on visual correction.
FIG. 2 is a schematic diagram of extracting key points by using 3D-SIFT in a three-dimensional point cloud map fusion method based on vision correction.
Fig. 3 is a schematic diagram of feature matching using FPFH in a three-dimensional point cloud map fusion method based on visual rectification.
Fig. 4 is an effect diagram of point cloud fusion performed by the three-dimensional point cloud map fusion method based on visual correction.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to specific embodiments.
Referring to fig. 1, a three-dimensional point cloud map fusion method based on vision correction includes the following steps:
1) preprocessing the three-dimensional point cloud maps: first input two three-dimensional point cloud map files in pcd format to obtain the two point cloud maps to be fused, namely the target point cloud map point_tgt and the point cloud map to be fused point_src, and then preprocess the two point cloud maps; the preprocessing includes denoising and sampling;
2) for the two point cloud maps in step 1), extracting the 3D-SIFT key points of the target point cloud map point_tgt and of the point cloud map to be fused point_src respectively;
the specific steps of 3D-SIFT key point extraction are as follows:
I, detecting extreme points in the point cloud scale space: the point cloud map is downsampled to different degrees to construct a pyramid model, and a Gaussian scale space is constructed for each group of point cloud maps; the functions used are as follows,
scale space: L(x, y, z, σ) = G(x, y, z, kσ) * P(x, y, z);
Gaussian difference function: D(x, y, z, σ) = (G(x, y, z, kσ) − G(x, y, z, σ)) * P(x, y, z);
the parameter k is related to the number of groups in the pyramid, and σ is the scale;
constructing the Gaussian difference (DoG) space: D(x, y, z, k_i σ) = L(x, y, z, k_{i+1} σ) − L(x, y, z, k_i σ);
after the DoG space has been constructed, extreme points are found by comparison: a point is regarded as an extreme point when its value is greater than, or smaller than, the values of all points in its neighborhood (a numerical sketch of this step is given after these steps);
II, determining the direction of the key point: the azimuth angle and the elevation angle of each point in the neighborhood of the extreme point are calculated with the following formulas:
d = √((x − x₀)² + (y − y₀)² + (z − z₀)²), θ = arctan((y − y₀) / (x − x₀)), φ = arcsin((z − z₀) / d);
in the formulas, d represents the distance between the extreme point and the neighborhood center point, θ represents the azimuth angle, φ represents the elevation angle, and (x₀, y₀, z₀) is the center point of the neighborhood in which the extreme point lies;
the directions given by the azimuth θ and the elevation φ of each point in the neighborhood are then counted: two histograms are used, one for each angle; the value range of θ is [0°, 360°] and it is counted with an 8-bin histogram in which each bin covers 45°; the value range of φ is [−90°, 90°] and it is counted with a 4-bin histogram in which each bin covers 45°;
after the statistics, the peak values of the two histograms are taken as the azimuth and elevation angles of the extreme point; after this computation every key point contains x, y, z coordinate information, scale information and direction information, i.e. (x, y, z, σ, θ, φ);
III, rotating the coordinate system of a key point p to the principal direction of that key point; since the principal direction (θ, φ) has already been computed, each point is rotated as
p_i′ = R(θ, φ) · p_i
where p_i is a point before rotation, p_i′ is the point after rotation, and R(θ, φ) is the rotation matrix determined by the principal direction;
the rotated points p_i′ are used to recompute the parameters mentioned above, including d, θ and φ; then the geometric relationship between the rotated key point p′ and each neighborhood point p_i′ is computed, namely the angle α between the vector p′p_i′ and the normal vector n at the key point p before rotation; it is calculated with the following formula, in which the numerator is the dot product of the two vectors and the denominator is the product of their lengths, so that a triple of data (θ, φ, α) is generated for the key point and every neighbor point in the neighborhood
α = arccos( (p′p_i′ · n) / (‖p′p_i′‖ · ‖n‖) );
IV, the three angles of all the triples are divided into bins to generate the feature descriptor;
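The scale-space construction of step I can be illustrated with the brute-force NumPy sketch below. It is only one plausible reading of the patent: the cloud is smoothed with a 3-D Gaussian kernel at two successive scales, the per-point DoG response is taken as the displacement between the two smoothings, and a point whose response is strictly above or below the responses of its nearest neighbors is reported as an extreme point. The choice of response and the O(N²) neighbor search are assumptions made for clarity, not the embodiment's implementation.

    import numpy as np

    def gaussian_smooth(points, sigma):
        # L(sigma): every point replaced by the Gaussian-weighted mean of the cloud,
        # an illustrative stand-in for the convolution G(x, y, z, sigma) * P(x, y, z).
        diff = points[:, None, :] - points[None, :, :]
        w = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True)
        return w @ points

    def dog_extrema(points, sigma=0.05, k=1.6, n_neighbors=10):
        # Per-point DoG response: displacement norm between two successive smoothings.
        d = np.linalg.norm(gaussian_smooth(points, k * sigma) - gaussian_smooth(points, sigma), axis=1)
        dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
        extrema = []
        for i in range(len(points)):
            nbrs = np.argsort(dists[i])[1:n_neighbors + 1]   # nearest neighbors, excluding the point itself
            if d[i] > d[nbrs].max() or d[i] < d[nbrs].min():  # strictly above or below all neighbors
                extrema.append(i)
        return extrema

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        cloud = rng.random((200, 3))
        print(len(dog_extrema(cloud)))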
3) extracting IPFH features at the 3D-SIFT key points of the two point cloud maps in step 2) respectively, obtaining the feature vector set P of the point cloud map to be fused point_src and the feature vector set Q of the target point cloud map point_tgt, wherein the feature vector P has m feature points and the feature vector Q has n feature points;
the specific steps of extracting the IPFH characteristics in the region around the 3D-SIFT key point are as follows:
I, a k-neighborhood is determined with the 3D-SIFT key point p(x, y, z, σ, θ, φ) as its center, point pairs are constructed by connecting the points in the neighborhood pairwise, and the normal vectors of each point pair are calculated;
II, a local u-v-w coordinate system is constructed for every point pair, with the three coordinate axes defined as follows:
u = n_s, v = u × (p_t − p_s) / ‖p_t − p_s‖, w = u × v, where p_s and p_t are the two points of the pair and n_s is the normal vector at p_s;
four characteristic values are then calculated from the local coordinate system and the normal vectors, as shown below; the four values are called the SPFH (Simplified Point Feature Histogram)
f1 = v · n_t, f2 = ‖p_t − p_s‖, f3 = u · (p_t − p_s) / ‖p_t − p_s‖, f4 = arctan(w · n_t, u · n_t), where n_t is the normal vector at p_t;
III, a k-neighborhood is selected again in each of the two point cloud maps, and the IPFH of every three-dimensional point is synthesized from the SPFH of its neighboring points; the synthesis is as follows:
[synthesis formula shown as an image in the original: the SPFH of the query point p is combined with the SPFH values of its k neighboring points, weighted using ω₀ and ω_i defined below]
ω₀ refers to the mean of the distances between the query point p and its neighborhood points, and ω_i represents the distance between the query point p and the neighboring point p_i in the neighborhood space;
IV, the values of f1, f3 and f4 are counted for every point, each value range is divided into 11 intervals, and the IPFH is described with 33-dimensional data information (a sketch of this binning is given below);
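The binning of step IV can be sketched as follows: the pair features f1, f3 and f4 collected over a neighborhood are each accumulated into 11 intervals and the three histograms are concatenated into a 33-dimensional descriptor. The value ranges assumed below ([-1, 1] for the cosine-like features f1 and f3, [-π, π] for the angle f4) and the final normalization are assumptions; the patent does not state them.

    import numpy as np

    def ipfh_histogram(f1, f3, f4):
        # Each feature is binned into 11 intervals; the three histograms are concatenated.
        h1, _ = np.histogram(f1, bins=11, range=(-1.0, 1.0))
        h3, _ = np.histogram(f3, bins=11, range=(-1.0, 1.0))
        h4, _ = np.histogram(f4, bins=11, range=(-np.pi, np.pi))
        hist = np.concatenate([h1, h3, h4]).astype(float)
        return hist / max(hist.sum(), 1.0)  # normalize so descriptors of different neighborhoods are comparable

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        f1, f3 = rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100)
        f4 = rng.uniform(-np.pi, np.pi, 100)
        print(ipfh_histogram(f1, f3, f4).shape)  # (33,)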
4) calculating Euclidean distances of feature points in the feature vector P and the feature vector Q, and finding out feature matching points according to the minimum distance;
IPFH feature matching is performed to determine the correspondences; the criterion is the distance between the feature vectors, and the matching strategy is: take the IPFH of a point A in the point cloud map point_src, find in the point cloud map point_tgt the two points B and C whose descriptors are nearest to the vector of A, and evaluate the ratio condition given above; if the condition is satisfied, B is selected as the matching point;
5) calculating a rotation matrix from the feature matching points, and using this matrix to rotate the point cloud map point_tgt so as to change the viewing angle of the point cloud map and reduce the viewing-angle difference between the two point cloud maps;
after the feature matching points have been found, the transformation matrix is calculated by singular value decomposition, and the point cloud map point_src is rotated according to this matrix so as to reduce the viewing-angle difference between the two point cloud maps and complete the viewing-angle correction;
6) fusing the two three-dimensional point cloud maps, whose viewing-angle difference has been reduced, with the ICP (Iterative Closest Point) fine fusion algorithm (a minimal ICP sketch is given below).
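For step 6, the fine fusion stage, the fragment below is a minimal point-to-point ICP with brute-force nearest-neighbor search and an SVD pose update. It is a sketch of the general ICP idea under those simplifications, not the implementation actually used in the embodiment.

    import numpy as np

    def icp_point_to_point(src, tgt, init=None, iters=30, tol=1e-6):
        # Iteratively match every source point to its nearest target point and
        # update the pose with the SVD (Kabsch) solution until the error settles.
        T = np.eye(4) if init is None else init.copy()
        pts = src @ T[:3, :3].T + T[:3, 3]
        prev_err = np.inf
        for _ in range(iters):
            d = np.linalg.norm(pts[:, None, :] - tgt[None, :, :], axis=2)
            nn = d.argmin(axis=1)
            err = d[np.arange(len(pts)), nn].mean()
            if abs(prev_err - err) < tol:
                break
            prev_err = err
            q = tgt[nn]
            pc, qc = pts.mean(axis=0), q.mean(axis=0)
            U, _, Vt = np.linalg.svd((pts - pc).T @ (q - qc))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:  # guard against a reflection
                Vt[2, :] *= -1
                R = Vt.T @ U.T
            t = qc - R @ pc
            pts = pts @ R.T + t
            step = np.eye(4)
            step[:3, :3], step[:3, 3] = R, t
            T = step @ T
        return T

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        tgt = rng.random((100, 3))
        src = tgt - 0.02  # small known offset that ICP should recover
        print(icp_point_to_point(src, tgt).round(3))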
Fig. 2 shows the effect of extracting key points with 3D-SIFT in the three-dimensional point cloud map fusion method based on visual correction. The SIFT feature is a local image feature that remains reasonably stable under rotation, scaling, brightness changes, viewing-angle changes, affine transformation and noise.
Fig. 3 shows how FPFH-type features are further extracted at the key points obtained by 3D-SIFT and then matched in the three-dimensional point cloud map fusion method based on visual correction. FPFH is an improvement of the PFH feature; it is a point-feature representation that describes the local geometric information around a sample point in the form of a statistical histogram. The Euclidean distances between the feature points of the source and target point clouds are calculated, feature matching points are found according to the minimum distance, and the rotation matrix is computed from the feature matching points.
Fig. 4 shows the test and experimental results of the three-dimensional point cloud map fusion method based on visual correction. The experiments use three data sets: the first two consist of indoor point cloud data acquired with an Xtion Pro stereo sensor, and the third is an outdoor partial city point cloud provided by PCL and collected with a 3D laser scanner. The first and second columns of the figure show the source and target point clouds respectively, the third and fourth columns show the fused point clouds, and in the fourth column the result of fusing the two point clouds is shown in different colors.
The method extracts key points with 3D-SIFT, computes FPFH (Fast Point Feature Histogram)-type features at the key points, and finally computes an initial coarse estimate, thus providing an initial transformation matrix for the traditional point cloud registration method; this reduces the number of iterations of the registration algorithm, improves the success rate and accuracy of registration, and lowers the computational cost.
The method extends SIFT features to three-dimensional point clouds and extracts 3D-SIFT key points, which keeps the features robust to viewing-angle rotation and transformation; by extracting IPFH features, the incorrect weight coefficients of the original FPFH features are corrected, and the feature of a three-dimensional point is described by combining the geometric characteristics of its neighborhood points, which greatly improves the stability of the algorithm. With this processing, the method can fuse two three-dimensional point cloud maps with a large viewing-angle difference.
While the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments; various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the invention.

Claims (5)

1. A three-dimensional point cloud map fusion method based on visual correction is characterized by comprising the following steps:
1) preprocessing the three-dimensional point cloud maps: first input two three-dimensional point cloud map files in pcd format to obtain the two point cloud maps to be fused, namely the target point cloud map point_tgt and the point cloud map to be fused point_src, and then preprocess the two point cloud maps;
2) for the two point cloud maps in step 1), extracting the 3D-SIFT key points of the target point cloud map point_tgt and of the point cloud map to be fused point_src respectively;
3) extracting IPFH features at the 3D-SIFT key points of the two point cloud maps in step 2) respectively, obtaining the feature vector set P of the point cloud map to be fused point_src and the feature vector set Q of the target point cloud map point_tgt, wherein the feature vector P has m feature points and the feature vector Q has n feature points;
4) calculating Euclidean distances of feature points in the feature vector P and the feature vector Q, and finding out feature matching points according to the minimum distance;
5) calculating a rotation matrix from the feature matching points, and using this matrix to rotate the point cloud map point_tgt so as to change the viewing angle of the point cloud map and reduce the viewing-angle difference between the two point cloud maps;
6) fusing the two three-dimensional point cloud maps, whose viewing-angle difference has been reduced, with the ICP (Iterative Closest Point) fine fusion algorithm;
in step 3), the specific steps of extracting the IPFH characteristics in the region around the 3D-SIFT key point are as follows:
I, a k-neighborhood is determined with the 3D-SIFT key point p(x, y, z, σ, θ, φ) as its center, point pairs are constructed by connecting the points in the neighborhood pairwise, and the normal vectors of each point pair are calculated;
II, a local u-v-w coordinate system is constructed for every point pair, with the three coordinate axes defined as follows:
u = n_s, v = u × (p_t − p_s) / ‖p_t − p_s‖, w = u × v, where p_s and p_t are the two points of the pair and n_s is the normal vector at p_s;
four characteristic values are then calculated from the local coordinate system and the normal vectors, as shown below; the four values are called the SPFH (Simplified Point Feature Histogram)
f1 = v · n_t, f2 = ‖p_t − p_s‖, f3 = u · (p_t − p_s) / ‖p_t − p_s‖, f4 = arctan(w · n_t, u · n_t), where n_t is the normal vector at p_t;
III, a k-neighborhood is selected again in each of the two point cloud maps, and the IPFH of every three-dimensional point is synthesized from the SPFH of its neighboring points; the synthesis is as follows:
[synthesis formula shown as an image in the original: the SPFH of the query point p is combined with the SPFH values of its k neighboring points, weighted using ω₀ and ω_i defined below]
ω₀ refers to the mean of the distances between the query point p and its neighborhood points, and ω_i represents the distance between the query point p and the neighboring point p_i in the neighborhood space;
IV, the values of f1, f3 and f4 are counted for every point, each value range is divided into 11 intervals, and the IPFH is described with 33-dimensional data information.
2. The three-dimensional point cloud map fusion method based on visual correction as claimed in claim 1, wherein in step 1), the preprocessing includes denoising and sampling.
3. The three-dimensional point cloud map fusion method based on visual correction as claimed in claim 1, wherein in the step 2), the specific steps of 3D-SIFT key point extraction are as follows:
I, detecting extreme points in the point cloud scale space: the point cloud map is downsampled to different degrees to construct a pyramid model, and a Gaussian scale space is constructed for each group of point cloud maps; the functions used are as follows
scale space: L(x, y, z, σ) = G(x, y, z, kσ) * P(x, y, z);
Gaussian difference function: D(x, y, z, σ) = (G(x, y, z, kσ) − G(x, y, z, σ)) * P(x, y, z);
the parameter k is related to the number of groups in the pyramid, and σ is the scale;
constructing the Gaussian difference (DoG) space: D(x, y, z, k_i σ) = L(x, y, z, k_{i+1} σ) − L(x, y, z, k_i σ);
after the DoG space has been constructed, extreme points are found by comparison: a point is regarded as an extreme point when its value is greater than, or smaller than, the values of all points in its neighborhood;
II, determining the direction of the key point: the azimuth angle and the elevation angle of each point in the neighborhood of the extreme point are calculated with the following formulas:
d = √((x − x₀)² + (y − y₀)² + (z − z₀)²), θ = arctan((y − y₀) / (x − x₀)), φ = arcsin((z − z₀) / d);
in the formulas, d represents the distance between the extreme point and the neighborhood center point, θ represents the azimuth angle, φ represents the elevation angle, and (x₀, y₀, z₀) is the center point of the neighborhood in which the extreme point lies;
the directions given by the azimuth θ and the elevation φ of each point in the neighborhood are then counted: two histograms are used, one for each angle; the value range of θ is [0°, 360°] and it is counted with an 8-bin histogram in which each bin covers 45°; the value range of φ is [−90°, 90°] and it is counted with a 4-bin histogram in which each bin covers 45°;
after the statistics, the peak values of the two histograms are taken as the azimuth and elevation angles of the extreme point; after this computation every key point contains x, y, z coordinate information, scale information and direction information, i.e. (x, y, z, σ, θ, φ);
III, rotating the coordinate system of a key point p to the principal direction of that key point; since the principal direction (θ, φ) has already been computed, each point is rotated as
p_i′ = R(θ, φ) · p_i
where p_i is a point before rotation, p_i′ is the point after rotation, and R(θ, φ) is the rotation matrix determined by the principal direction;
the rotated points p_i′ are used to recompute the parameters mentioned above, including d, θ and φ; then the geometric relationship between the rotated key point p′ and each neighborhood point p_i′ is computed, namely the angle α between the vector p′p_i′ and the normal vector n at the key point p before rotation; it is calculated with the following formula, in which the numerator is the dot product of the two vectors and the denominator is the product of their lengths, so that a triple of data (θ, φ, α) is generated for the key point and every neighbor point in the neighborhood
α = arccos( (p′p_i′ · n) / (‖p′p_i′‖ · ‖n‖) );
IV, the three angles of all the triples are divided into bins to generate the feature descriptor.
4. The three-dimensional point cloud map fusion method based on visual correction as claimed in claim 1, wherein in step 4), IPFH feature matching is performed to determine the correspondences; the criterion is the distance between the feature vectors, and the following matching strategy is adopted: take the IPFH of a point A in the point cloud map point_src, find in the point cloud map point_tgt the two points B and C whose descriptors are nearest to the vector of A, and evaluate the following formula; if the condition is satisfied, B is selected as the matching point
‖A − B‖ / ‖A − C‖ < t, where t is a fixed ratio threshold
5. The three-dimensional point cloud map fusion method based on visual correction as claimed in claim 4, wherein in step 5), after the feature matching points have been found, the transformation matrix is calculated by singular value decomposition, and the point cloud map point_src is rotated according to this matrix so as to reduce the viewing-angle difference between the two point cloud maps and complete the viewing-angle correction.
CN201710989642.2A 2017-10-20 2017-10-20 Three-dimensional point cloud map fusion method based on visual correction Active CN107818598B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710989642.2A CN107818598B (en) 2017-10-20 2017-10-20 Three-dimensional point cloud map fusion method based on visual correction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710989642.2A CN107818598B (en) 2017-10-20 2017-10-20 Three-dimensional point cloud map fusion method based on visual correction

Publications (2)

Publication Number Publication Date
CN107818598A CN107818598A (en) 2018-03-20
CN107818598B (en) 2020-12-25

Family

ID=61606939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710989642.2A Active CN107818598B (en) 2017-10-20 2017-10-20 Three-dimensional point cloud map fusion method based on visual correction

Country Status (1)

Country Link
CN (1) CN107818598B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11830207B2 (en) * 2020-05-18 2023-11-28 Beijing Baidu Netcom Science Technology Co., Ltd. Method, apparatus, electronic device and readable storage medium for point cloud data processing

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211167A (en) * 2019-06-14 2019-09-06 北京百度网讯科技有限公司 Method and apparatus for handling point cloud data
CN110930495A (en) * 2019-11-22 2020-03-27 哈尔滨工业大学(深圳) Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium
CN111310818B (en) * 2020-02-10 2021-05-18 贝壳找房(北京)科技有限公司 Feature descriptor determining method and device and computer-readable storage medium
CN111429574B (en) * 2020-03-06 2022-07-15 上海交通大学 Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
CN112414417B (en) * 2020-11-17 2021-11-26 智邮开源通信研究院(北京)有限公司 Automatic driving map generation method and device, electronic equipment and readable storage medium
CN112902905A (en) * 2021-01-20 2021-06-04 西安电子科技大学 High-definition 3D scanning-based ground object spectrum testing method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105118059A (en) * 2015-08-19 2015-12-02 哈尔滨工程大学 Multi-scale coordinate axis angle feature point cloud fast registration method
CN106296693A (en) * 2016-08-12 2017-01-04 浙江工业大学 Based on 3D point cloud FPFH feature real-time three-dimensional space-location method
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10360469B2 (en) * 2015-01-15 2019-07-23 Samsung Electronics Co., Ltd. Registration method and apparatus for 3D image data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105118059A (en) * 2015-08-19 2015-12-02 哈尔滨工程大学 Multi-scale coordinate axis angle feature point cloud fast registration method
CN106296693A (en) * 2016-08-12 2017-01-04 浙江工业大学 Based on 3D point cloud FPFH feature real-time three-dimensional space-location method
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Aligning Point Cloud Views using Persistent Feature Histograms; Radu Bogdan Rusu et al.; 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems; 2008-10-14; pp. 3384-3391 *


Also Published As

Publication number Publication date
CN107818598A (en) 2018-03-20

Similar Documents

Publication Publication Date Title
CN107818598B (en) Three-dimensional point cloud map fusion method based on visual correction
CN107301654B (en) Multi-sensor high-precision instant positioning and mapping method
Choi et al. Voting-based pose estimation for robotic assembly using a 3D sensor
US8830229B2 (en) Recognition and pose determination of 3D objects in 3D scenes
EP2720171B1 (en) Recognition and pose determination of 3D objects in multimodal scenes
Micusik et al. Descriptor free visual indoor localization with line segments
CN107862735B (en) RGBD three-dimensional scene reconstruction method based on structural information
Yu et al. Robust robot pose estimation for challenging scenes with an RGB-D camera
CN104040590A (en) Method for estimating pose of object
CN107025449B (en) Oblique image straight line feature matching method constrained by local area with unchanged visual angle
CN114972459B (en) Point cloud registration method based on low-dimensional point cloud local feature descriptor
CN108305277B (en) Heterogeneous image matching method based on straight line segments
CN111145232A (en) Three-dimensional point cloud automatic registration method based on characteristic information change degree
CN104463953A (en) Three-dimensional reconstruction method based on inertial measurement unit and RGB-D sensor
CN111768447A (en) Monocular camera object pose estimation method and system based on template matching
Zhu et al. A review of 6d object pose estimation
Xu et al. Ring++: Roto-translation-invariant gram for global localization on a sparse scan map
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN117351078A (en) Target size and 6D gesture estimation method based on shape priori
CN114463396B (en) Point cloud registration method utilizing plane shape and topological graph voting
WO2023131203A1 (en) Semantic map updating method, path planning method, and related apparatuses
CN115423854B (en) Multi-view point cloud registration and point cloud fusion method based on multi-scale feature extraction
CN116416305B (en) Multi-instance pose estimation method based on optimized sampling five-dimensional point pair characteristics
Liu et al. Robust 3-d object recognition via view-specific constraint
CN117132630A (en) Point cloud registration method based on second-order spatial compatibility measurement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant