CN113838051B - Robot closed-loop detection method based on three-dimensional point cloud - Google Patents
- Publication number: CN113838051B
- Application number: CN202111410898.6A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06T3/067 — Geometric image transformations; reshaping or unfolding 3D tree structures onto 2D planes
- G06T2207/10028 — Image acquisition modality; range image; depth image; 3D point clouds
Abstract
The invention discloses a robot closed-loop detection method based on three-dimensional point cloud, which gives the robot the ability to recognize a scene it has reached before. Because three-dimensional point clouds are large in data volume and densely distributed, the point cloud is first down-sampled by voxel filtering, reducing the number of points while keeping the original distribution characteristics unchanged and improving the operating efficiency of the algorithm. The down-sampled point cloud is then processed with the PCA algorithm to determine the principal-axis direction, which overcomes the influence of the robot's heading-rotation uncertainty on closed-loop detection. Next, the PCA-processed point cloud is reduced in dimension: it is converted from a Cartesian coordinate system to a polar coordinate system and rasterized into a two-dimensional image. Finally, a Kdtree data structure accelerates the search for candidate frames, and, to reduce the influence of discretization and noise on the two-dimensional image, local column shifts are performed along the principal-axis direction, improving closed-loop detection efficiency while ensuring its accuracy.
Description
Technical Field
The invention belongs to the field of autonomous robot navigation, and in particular relates to a robot closed-loop detection method based on three-dimensional point cloud.
Background
Closed-loop detection, also called loop detection, refers to the robot's ability to recognize a scene it has reached before, so that the map can be closed; that is, during mapping and localization the robot matches the current frame's scan against the map already constructed. A successful loop detection reduces the accumulated pose error of the robot's map building, while a false detection severely corrupts the subsequent back-end optimization. Compared with visual closed-loop detection, three-dimensional point cloud closed-loop detection is more stable and is not affected by factors such as season, time, and illumination conditions. However, because point clouds lack texture and color information and contain a large number of points, laser closed-loop detection also has certain difficulties. Moreover, even if the robot returns to the same scene, the coordinate values of the observed three-dimensional point cloud in the laser sensor's coordinate system differ because of heading uncertainty, which adds further difficulty to closed-loop detection.
Disclosure of Invention
The invention aims to provide a robot closed-loop detection method based on three-dimensional point cloud that addresses the defects of the prior art. The original point cloud is down-sampled by voxel filtering, reducing the amount of point cloud to be processed while ensuring that the point cloud distribution characteristics remain unchanged. The down-sampled point cloud is then processed with the PCA algorithm to determine the principal-axis direction, and the point cloud is transformed according to that direction, ensuring that point clouds of the same scene have the same distribution. The dimension-reduced and rasterized point clouds are organized with a Kdtree data structure to accelerate the search for candidate frames. To avoid the influence of noise and discretization on the dimension-reduced data, local column shifts are performed along the principal-axis direction, and finally similarity detection is performed between the shifted image and the candidate frames.
The purpose of the invention is realized by the following technical scheme: a robot closed loop detection method based on three-dimensional point cloud specifically comprises the following steps:
(1) acquiring an original input point cloud, and down-sampling it by voxel filtering to obtain a down-sampled point cloud; the voxel filtering determines a three-dimensional voxel grid from the range of the input original point cloud, and one point is selected in each voxel of the grid as the representative of all points in that voxel;
(2) processing the point cloud obtained in the step (1) after being subjected to down-sampling through a PCA algorithm to determine the direction of a main shaft, and transforming the point cloud after being subjected to down-sampling according to the direction of the main shaft to obtain transformed point cloud data;
(3) projecting the transformed point cloud, converting the projected point cloud from a Cartesian coordinate system to a polar coordinate system, and rasterizing the point cloud subjected to coordinate conversion to obtain a point cloud grid occupation condition; generating a point cloud dimensionality reduction two-dimensional image and a vector ringKey according to the occupation condition of the point cloud grid;
(4) judging whether the accumulated frame number is greater than a threshold value; if the accumulated frame number is smaller than the threshold value, repeating the steps (1) to (3) until the accumulated frame number is larger than the threshold value;
(5) generating a data structure Kdtree according to the ringKey vector sets of all point cloud frames with the distance from the current frame number greater than the threshold;
(6) the current frame searches a candidate frame in a data structure Kdtree by using a vector ringKey of the current frame through a nearest neighbor searching algorithm;
(7) and carrying out local column movement on the two-dimensional image of the current frame in the main axis direction, comparing the similarity of the frame obtained after the local column movement with the candidate frame which is not subjected to the local column movement, and if the distance similarity is smaller than a closed loop detection threshold value, determining that the closed loop is formed.
Further, the step (2) includes the sub-steps of:
(2.1) processing the down-sampled point cloud obtained in step (1) with the PCA algorithm to determine the principal-axis direction;
(2.2) taking the down-sampled point cloud output in step (1) as the input sample set X = {x_1, x_2, …, x_m}, where the down-sampled point cloud has size m × 3, m is the number of points, and i denotes the i-th point; centering all input samples: x_i ← x_i − (1/m) Σ_{j=1}^{m} x_j;
(2.3) computing the covariance matrix of the centered samples, C = (1/m) X^T X;
(2.4) performing eigenvalue decomposition on the covariance matrix C;
(2.5) extracting the eigenvectors w_1, w_2, w_3 corresponding to the three largest eigenvalues to form the new transformation matrix W = (w_1, w_2, w_3);
(2.6) multiplying the down-sampled point cloud by the transformation matrix W to obtain the new transformed three-dimensional point cloud data Z = X W.
further, the step (3) includes the sub-steps of:
(3.1) performing two-dimensional projection on the transformed three-dimensional point cloud to obtain a two-dimensional coordinate point;
(3.2) converting the coordinate values of each coordinate point in the Cartesian coordinate system into coordinate values of a polar coordinate system according to the coordinate values of each coordinate point in the Cartesian coordinate system obtained by the two-dimensional projection in the step (3.1); rasterizing the point cloud after coordinate transformation to obtain a grid occupation condition;
let a point in the point cloud have Cartesian coordinate values (x, y) and polar coordinate values (ρ, θ), with ρ = √(x² + y²) and θ = arctan2(y, x); after rasterization each bin has size Δρ × Δθ. The coordinates of the point after rasterization are then (⌊ρ/Δρ⌋, ⌊θ/Δθ⌋).
(3.3) carrying out value taking on the grid bin according to the occupation condition of the point cloud grid to generate a point cloud dimension reduction two-dimensional image;
and (3.4) obtaining a vector ringKey according to the point cloud dimension reduction two-dimensional image.
Further, the step (7) comprises the sub-steps of:
(7.1) realizing similarity detection by summing and averaging the per-column similarities; each column of each frame image is a vector, and the similarity of corresponding columns of different images is measured by the cosine of the angle between the two vectors:

d(I^q, I^c) = (1/N) Σ_{j=1}^{N} (1 − (c_j^q · c_j^c) / (‖c_j^q‖ ‖c_j^c‖))

where d(I^q, I^c) is the distance between the two frame images, I^q is the current frame image, I^c is the candidate frame image, N is the number of columns of the image, c_j^q is the j-th column of the current frame, c_j^c is the j-th column of the candidate frame, and the superscripts q and c denote the current frame and the candidate frame respectively.
(7.2) selecting the minimum of the distance measurements over all local column shifts as the similarity measure between the two frames of point cloud; if it is smaller than the closed-loop detection threshold, the two frames are considered to form a closed loop:

D = min_n d(I^q_n, I^c)

where I^q_n is the current-frame image after a local shift of n columns, and D is the final distance measure between the current-frame and candidate-frame two-dimensional images, i.e. the distance similarity measure between the current-frame and candidate-frame point clouds.
The invention has the following beneficial effects:
(1) the method has the advantages that the input original point cloud is subjected to downsampling through voxel filtering, the number of the point clouds is reduced as much as possible under the condition that the point cloud distribution characteristics are not changed, computing resources are saved, and the algorithm operation efficiency is improved.
(2) The principal-axis direction is determined from the point cloud distribution characteristics by the PCA algorithm, and the down-sampled point cloud is transformed accordingly, which overcomes the influence of changes in the robot's heading on closed-loop detection.
(3) And converting the transformed point cloud from a Cartesian coordinate system to a polar coordinate system, rasterizing the point cloud in the polar coordinate system, and taking the value of each grid as the maximum value of the z value in all point cloud coordinate values of the grid. Therefore, a two-dimensional image of each frame of point cloud is generated, and possibility is provided for subsequent real-time similarity detection.
(4) Extracting a ringKey vector according to the approximate condition of point cloud distribution at different distances of each frame of point cloud two-dimensional image, comparing the ringKey of historical frames with enough distance from the current frame, and finding out candidate frames with close distance.
(5) To reduce the influence of sensor noise and rasterization on closed-loop detection, the two-dimensional image of the current frame undergoes local column shifts along the principal-axis direction and is then compared with the two-dimensional images of the candidate frames to find the candidate frame satisfying the closed-loop condition, i.e. forming a closed-loop relation, which improves the accuracy of the closed-loop detection algorithm.
Drawings
FIG. 1 is a general flowchart of a robot closed-loop detection method based on three-dimensional point cloud;
fig. 2 is a distribution comparison diagram before and after filtering of point cloud voxels, wherein (a) in fig. 2 is a point cloud amount schematic diagram before filtering, and (B) in fig. 2 is a point cloud amount schematic diagram after filtering;
FIG. 3 is a graph of laser observation data for the same scene at different times; fig. 3 (a) is a laser observation data map of a first frame point cloud, and fig. 3 (B) is a laser observation data map of a second frame point cloud;
fig. 4 is a point cloud distribution diagram after PCA processing is performed on laser observation data of the same scene at different times, wherein (a) in fig. 4 is a first frame point cloud schematic diagram after PCA processing, and (B) in fig. 4 is a second frame point cloud schematic diagram after PCA processing;
fig. 5 (a) is a point cloud laser observation data map processed by the PCA algorithm at a certain time, and fig. 5 (B) is a corresponding rasterized point cloud dimension reduction two-dimensional image;
fig. 6 is a comparison graph of results of the ScanContext closed-loop detection algorithm and the method of the present invention, wherein (a) in fig. 6 is a graph of the ScanContext closed-loop detection algorithm PR, and (B) in fig. 6 is a graph of the closed-loop detection algorithm PR of the present invention.
Detailed Description
Fig. 1 shows a flowchart of a robot closed-loop detection method based on three-dimensional point cloud, which includes the following steps:
(1) First, the original input point cloud is acquired and down-sampled by voxel filtering in order to accelerate point cloud processing and improve the operating efficiency of the algorithm. Voxel filtering determines a three-dimensional voxel grid from the range of the input original point cloud, and in each voxel one point is selected as the representative of all points in that voxel. As long as each voxel in the grid is small, this reduces the number of points while preserving the point cloud distribution characteristics. Fig. 2 shows the point cloud observed by the three-dimensional lidar of the surrounding environment at a certain time: fig. 2 (A) and fig. 2 (B) show the distribution and number of points before and after filtering respectively. Voxel filtering does not change the distribution of the point cloud but reduces the number of points, improving the efficiency of subsequent processing.
Here the size of each voxel in the three-dimensional voxel grid is 0.3 m × 0.3 m × 0.3 m, and all points in each voxel are approximated by the center of gravity of the point set in that voxel. To better compare the point cloud before and after filtering, fig. 2 shows the three-dimensional point cloud distribution after dimension reduction, i.e. with the z value of all point coordinates set to 0 m.
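The centroid-based voxel down-sampling described above can be sketched as follows. This is a minimal numpy illustration, not the patent's implementation; the function name is ours, and grouping is done by lexicographically sorted integer voxel indices:

```python
import numpy as np

def voxel_filter(points, voxel_size=0.3):
    """Down-sample a point cloud by replacing all points in each voxel
    with their centroid (center of gravity), as described above."""
    # Integer voxel index for every point (floor handles negatives correctly)
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel: 'inverse' maps each point to its unique voxel row
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.ravel()
    # Accumulate per-voxel sums, then divide by counts to get centroids
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.uniform(-10.0, 10.0, size=(50000, 3))
down = voxel_filter(cloud, 0.3)   # far fewer points, same spatial extent
```

Note that step (1) of the claims says "a point is selected" per voxel; the detailed description refines this to the centroid, which is what the sketch computes.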
(2) Processing the point cloud obtained in the step (1) after being subjected to down-sampling through a PCA algorithm to determine the direction of a main shaft, and transforming the point cloud after being subjected to down-sampling according to the direction of the main shaft to obtain transformed point cloud data; the method specifically comprises the following substeps:
(2.1) in order to accelerate the closed loop detection speed of the three-dimensional point cloud, processing the point cloud obtained in the step (1) after down sampling by a PCA algorithm to determine the direction of a main shaft;
the pca (principal Component analysis) algorithm, which is a principal Component analysis method, is one of the most widely used data dimension reduction algorithms. The principal idea of PCA is to map dimensional features onto k dimensions, which are completely new orthogonal features, also called principal components, and to reconstruct k-dimensional features on the basis of the original n-dimensional features. The task of PCA is to sequentially find a set of mutually orthogonal axes from the original space, the selection of new axes being strongly dependent on the data itself. The first new coordinate axis is selected to be the direction with the largest square difference in the original data, the second new coordinate axis is selected to be the plane which is orthogonal to the first coordinate axis and enables the square difference to be the largest, and the third axis is the plane which is orthogonal to the 1 st axis and the 2 nd axis and enables the square difference to be the largest. By analogy, n such coordinate axes can be obtained. The method uses the PCA algorithm to determine the main axis of the point cloud mainly according to the point cloud distribution characteristics and independently from the observation attitude of the sensor. And under the condition that the new point cloud main shaft is determined, converting the original point cloud coordinates. Therefore, the final coordinate value of the point cloud is only related to the point cloud distribution characteristics and is not related to the observation posture of the sensor, and the purpose of detecting the rotation invariance of the three-dimensional point cloud in a closed loop mode is achieved.
(2.2) taking the down-sampled point cloud output in step (1) as the input sample set X = {x_1, x_2, …, x_m}, where the down-sampled point cloud has size m × 3 and m is the number of points; centering (de-meaning) all input samples: x_i ← x_i − (1/m) Σ_{j=1}^{m} x_j;
(2.3) computing the covariance matrix of the centered samples, C = (1/m) X^T X;
(2.4) performing eigenvalue decomposition on the covariance matrix C;
(2.5) extracting the eigenvectors w_1, w_2, w_3 corresponding to the three largest eigenvalues to form the new transformation matrix W = (w_1, w_2, w_3);
(2.6) multiplying the down-sampled point cloud by the transformation matrix W to obtain the new transformed three-dimensional point cloud data Z = X W.
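Steps (2.2)–(2.6) amount to aligning the cloud with its principal axes. A minimal numpy sketch follows (function name is ours; `np.linalg.eigh` performs the eigendecomposition of the covariance matrix). The usage example rotates the same cloud about z to mimic a different heading and checks that the aligned coordinates agree up to axis sign:

```python
import numpy as np

def pca_align(points):
    """Center the cloud and rotate it onto its principal axes, so the
    result depends only on the point distribution, not sensor heading."""
    centered = points - points.mean(axis=0)      # step (2.2): centering
    cov = centered.T @ centered / len(points)    # step (2.3): covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # step (2.4): eigendecomposition
    W = eigvecs[:, ::-1]                         # step (2.5): descending eigenvalues
    return centered @ W                          # step (2.6): Z = X W

rng = np.random.default_rng(1)
cloud = rng.normal(size=(1000, 3)) * np.array([5.0, 2.0, 0.5])
theta = 0.8                                      # arbitrary heading change
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
a = pca_align(cloud)
b = pca_align(cloud @ R.T)
# Same scene, different heading -> same coordinates up to axis sign
assert np.allclose(np.abs(a), np.abs(b), atol=1e-8)
```

Eigenvectors are defined only up to sign, so implementations typically fix a sign convention; the sketch sidesteps this with the absolute-value comparison.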
FIG. 3 shows observations of the same scene under different robot poses, where the robot's headings in the two observations are opposite and its pose differs at the two moments. Fig. 4 shows the point clouds obtained by applying the PCA algorithm to the two observations, determining the principal-axis direction, and transforming the point clouds accordingly. The transformed point cloud is independent of the robot's observation heading and depends only on the point cloud distribution characteristics of the surrounding environment; since the two observed frames come from the same scene at different moments, their distribution characteristics are essentially the same, and the point cloud coordinate values obtained after the PCA transformation of the two frames are essentially the same.
(3) And projecting the transformed three-dimensional point cloud, converting the projected point cloud from a Cartesian coordinate system to a polar coordinate system, and rasterizing the point cloud after coordinate transformation.
(3.1) performing two-dimensional projection on the transformed three-dimensional point cloud to obtain a two-dimensional coordinate point;
and (3.2) converting the coordinate values of each coordinate point in the Cartesian coordinate system, which are obtained by the two-dimensional projection in the step (3.1), into coordinate values of a polar coordinate system, and rasterizing the point cloud after coordinate transformation according to the coordinate values of each coordinate point in the polar coordinate system.
Let a point in the point cloud have Cartesian coordinate values (x, y); its polar coordinate values are (ρ, θ), where ρ = √(x² + y²) is the radial coordinate and θ = arctan2(y, x) is the angular coordinate. After rasterization each bin has size Δρ × Δθ; the coordinates of the point after rasterization are then (⌊ρ/Δρ⌋, ⌊θ/Δθ⌋).
This yields the occupancy of the point cloud grid: a bin is occupied if at least one point falls into it, and unoccupied if no point falls into it.
And (3.3) after the current frame point cloud rasterization is finished, taking values of each bin after rasterization according to the point cloud raster occupation condition so as to achieve the purpose of generating a two-dimensional image by reducing the dimensions, wherein each bin takes the maximum value of the z values in all point cloud coordinate values falling into the current bin, and if no point falls into the bin, the value is 0.
And (3.4) obtaining a vector ringKey according to the point cloud dimension reduction two-dimensional image.
As shown in fig. 5, (A) is the point cloud laser observation data at a certain time after PCA processing, and (B) is the corresponding rasterized, dimension-reduced two-dimensional image; one frame of point cloud is transformed to the new coordinate system, rasterized, and finally reduced in dimension to form a two-dimensional image. The effective range is 80 m, and the rasterized two-dimensional image has size 120 × 40, corresponding to an angular bin of Δθ = 3° and a radial bin of Δρ = 2 m.
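Step (3) — projection, polar conversion, and rasterization into a max-z image — can be sketched as follows, using the 120 × 40 / 80 m figures above. The row/column layout (rows radial, columns angular) and the handling of negative z in empty bins are our assumptions:

```python
import numpy as np

def to_polar_image(points, n_theta=120, n_rho=40, max_range=80.0):
    """Rasterize a PCA-aligned cloud into a polar 2D image whose bins
    hold the maximum z of the points falling into them (0 if empty)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.hypot(x, y)                                  # radial coordinate
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)           # angular, in [0, 2*pi)
    keep = rho < max_range                                # discard out-of-range points
    j = (rho[keep] / (max_range / n_rho)).astype(int)     # radial bin index
    i = (theta[keep] / (2 * np.pi / n_theta)).astype(int) # angular bin index
    img = np.full((n_rho, n_theta), -np.inf)
    np.maximum.at(img, (j, i), z[keep])                   # per-bin max of z
    img[np.isinf(img)] = 0.0                              # empty bins take value 0
    return img

# Single point at range 10 m, 45 degrees, height 1.5 m
img = to_polar_image(np.array([[10.0 * np.cos(np.pi / 4),
                                10.0 * np.sin(np.pi / 4), 1.5]]))
```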
(4) obtaining accumulated frame numbers according to the frame numbers of all the three-dimensional point clouds, setting a threshold value in a self-defined mode, and judging whether the accumulated frame numbers are larger than the threshold value or not; if the accumulated frame number is smaller than the threshold value, repeating the steps (1) to (3) until the accumulated frame number is larger than the threshold value.
(5) And generating a data structure Kdtree according to the ringKey vector sets of all the point cloud frames with the distance from the current frame number greater than the threshold value.
(6) The current frame uses its own ringKey vector to search the Kdtree for nearest-neighbor candidate frames.
To accelerate closed-loop detection, historical frames that obviously do not satisfy the closed-loop condition can be discarded, leaving frames that satisfy certain basic conditions, called candidate frames, for further similarity detection. Candidate frames are screened mainly with the ringKey vector and the Kdtree data structure. When the two-dimensional image is generated from the three-dimensional point cloud, the number of non-empty bins in each row of the image is counted, i.e. the number of bins containing laser points. In this way each two-dimensional image is abstracted into a k-dimensional vector, where k is the number of rows of the image; this vector is the ringKey. As the number of point cloud frames grows, the accumulating ringKeys dynamically build a Kdtree, accelerating the candidate-frame search. Candidate frames are then found by using the current frame's ringKey vector to search the Kdtree for the nearest historical frames with the Knn nearest-neighbor search algorithm.
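The ringKey screening can be sketched as below. This is an illustration under stated assumptions: scipy's `cKDTree` stands in for the Kdtree, and the "history" of dimension-reduced images is synthetic random data:

```python
import numpy as np
from scipy.spatial import cKDTree

def ring_key(img):
    """Abstract a 2D image into a k-vector: per-row count of non-empty bins."""
    return (img != 0).sum(axis=1).astype(float)

# Hypothetical history of dimension-reduced images (40 rows x 120 columns),
# roughly 30% of bins occupied
rng = np.random.default_rng(2)
history = [rng.random((40, 120)) * (rng.random((40, 120)) > 0.7)
           for _ in range(50)]
keys = np.array([ring_key(im) for im in history])   # one 40-dim ringKey per frame

tree = cKDTree(keys)                  # Kdtree built over the ringKey vectors
query = ring_key(history[10])         # current frame's ringKey
dist, idx = tree.query(query, k=3)    # Knn search returns nearest candidate frames
```

In the method itself, only frames sufficiently distant in frame number from the current frame would enter the tree; the synthetic example skips that filter.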
(7) And carrying out local column movement on the two-dimensional image of the current frame in the main axis direction, comparing the similarity of the frame after the local column movement with the candidate frame without the local column movement, and if the distance similarity is smaller than a closed loop detection threshold value, determining that the closed loop is formed.
Through the preceding steps, a two-dimensional image is obtained for each frame of three-dimensional point cloud, converting closed-loop detection based on three-dimensional point clouds into two-dimensional image similarity detection. Closed-loop detection can thus be summarized as similarity detection between the current frame and all historical frames at a certain distance from the current frame.
At this point the three-dimensional point clouds corresponding to the two-dimensional images have been transformed along the principal-axis direction determined by the PCA algorithm, so in the same scene the point cloud distribution characteristics, independent of the robot's heading, are essentially the same and the two-dimensional images are highly similar. However, even in the same scene, the observation data acquired by the laser sensor are affected to some degree by noise, and the point cloud is affected by discretization in the polar coordinate system, so two-dimensional images acquired from observations of the same environment at different moments may be slightly translated along the column direction. The final closed-loop detection is therefore a similarity detection between the candidate-frame image and the current-frame image under local column shifts around the principal axis.
The method specifically comprises the following substeps:
(7.1) Similarity detection between images is realized by summing and averaging the per-column similarities. Each column of an image is a vector, and the similarity of corresponding columns of different images is measured by the cosine of the angle between the two vectors:

d(I^q, I^c) = (1/N) Σ_{j=1}^{N} (1 − (c_j^q · c_j^c) / (‖c_j^q‖ ‖c_j^c‖))

where d(I^q, I^c) is the distance between the two frame images, I^q is the current frame image, I^c is the candidate frame image, N is the number of columns of the image, c_j^q is the j-th column of the current frame, c_j^c is the j-th column of the candidate frame, and the superscripts q and c denote the current frame and the candidate frame respectively.
(7.2) Because the three-dimensional point clouds generating the two-dimensional images have been transformed by the PCA algorithm, only the candidate-frame image and locally column-shifted versions of the current-frame image need to be compared. The minimum of the distance measurements is then selected as the similarity measure between the two frames of point cloud; if it is smaller than the closed-loop detection threshold, the two frames are considered to form a closed loop:

D = min_n d(I^q_n, I^c)

where I^q_n is the current-frame image after a local shift of n columns, and D is the final distance measure between the current-frame and candidate-frame two-dimensional images, i.e. the distance similarity measure between the current-frame and candidate-frame point clouds.
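The column-shifted similarity of step (7) can be sketched as follows; this is a minimal numpy illustration in which the shift range and the zero-column handling are our assumptions:

```python
import numpy as np

def image_distance(img_q, img_c):
    """Mean (1 - cosine similarity) over corresponding columns."""
    num = (img_q * img_c).sum(axis=0)
    den = np.linalg.norm(img_q, axis=0) * np.linalg.norm(img_c, axis=0)
    # Treat all-zero column pairs as dissimilar (cosine 0)
    cos = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return 1.0 - cos.mean()

def loop_distance(img_q, img_c, max_shift=5):
    """Minimum image distance over local column shifts of the query image."""
    return min(image_distance(np.roll(img_q, s, axis=1), img_c)
               for s in range(-max_shift, max_shift + 1))

rng = np.random.default_rng(3)
img = rng.random((40, 120))
shifted = np.roll(img, 3, axis=1)   # simulate a small heading/discretization residual
# Local column shifting recovers near-zero distance for the same scene
assert loop_distance(shifted, img, max_shift=5) < 1e-6
assert image_distance(shifted, img) > loop_distance(shifted, img)
```

A frame would be declared a closed loop when `loop_distance` falls below the detection threshold.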
The proposed closed-loop detection algorithm was tested on sequence 00 of the KITTI dataset and compared with the ScanContext closed-loop detection algorithm, with the parameters shared by the two algorithms configured identically. Fig. 6 (A) is the PR curve of the ScanContext closed-loop detection result; the total running time of that algorithm is 1388.38 s. Fig. 6 (B) is the PR curve of the proposed closed-loop detection algorithm, with a running time of 1057.35 s. By comparison, the proposed algorithm improves operating efficiency by about 23% while matching the effect of the currently well-performing closed-loop detection algorithm.
In conclusion, the invention down-samples the input original point cloud by voxel filtering, reducing the number of points as much as possible while ensuring the distribution characteristics are unchanged, saving computing resources and improving the operating efficiency of the algorithm. It determines the principal-axis direction from the point cloud distribution characteristics with the PCA algorithm and transforms the down-sampled point cloud accordingly, overcoming the influence of changes in the robot's heading on closed-loop detection. It converts the transformed point cloud from a Cartesian coordinate system to a polar coordinate system and rasterizes it there, setting each grid value to the maximum z value among all points falling into that grid; a two-dimensional image of each frame of point cloud is thus generated, making subsequent real-time similarity detection possible. It extracts the ringKey vector from the approximate point cloud distribution at different ranges in each frame's two-dimensional image, compares the ringKeys of historical frames sufficiently distant from the current frame, and finds nearby candidate frames; the Kdtree and nearest-neighbor search algorithm greatly improve search efficiency.
To reduce the influence of sensor noise and rasterization on closed-loop detection, the method performs local column shifts of the current frame's two-dimensional image along the principal-axis direction and then compares it with the two-dimensional image of each candidate frame; a candidate frame satisfying the closed-loop condition forms a closed-loop relation, which improves the accuracy of the closed-loop detection algorithm.
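The voxel-filter down-sampling described above (step (1) of the method) can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's implementation: the patent only says one point per voxel is selected as the representative, and the choice of the voxel centroid here (as well as the function name and `voxel_size` parameter) is an assumption.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.4):
    """Keep one representative point per occupied voxel.
    Here the representative is the centroid of the voxel's points
    (an illustrative choice; the patent only requires one point per voxel)."""
    keys = np.floor(points / voxel_size).astype(int)       # voxel index of each point
    _, inv = np.unique(keys, axis=0, return_inverse=True)  # map points to voxel ids
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for d in range(3):                                     # centroid per voxel
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out
```

Since the representative preserves where the cloud has mass, the distribution characteristics survive while the point count drops sharply.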
Claims (4)
1. A robot closed-loop detection method based on a three-dimensional point cloud, characterized by comprising the following steps:
(1) acquiring an original input point cloud and down-sampling it by voxel filtering to obtain a down-sampled point cloud; the voxel filtering determines a three-dimensional voxel grid from the extent of the input original point cloud and selects one point in each voxel of the grid as the representative of all points in that voxel;
(2) processing the down-sampled point cloud obtained in step (1) with the PCA algorithm to determine the principal-axis direction, and transforming the down-sampled point cloud according to the principal-axis direction to obtain transformed point cloud data;
(3) projecting the transformed point cloud, converting the projection from the Cartesian coordinate system to a polar coordinate system, and rasterizing the coordinate-converted point cloud to obtain the grid occupancy of the point cloud; generating a dimension-reduced two-dimensional image of the point cloud and a ringKey vector from the grid occupancy;
(4) judging whether the accumulated number of frames is greater than a threshold; if it is less than the threshold, repeating steps (1) to (3) until it is greater than the threshold;
(5) generating a Kd-tree data structure from the ringKey vectors of all point cloud frames whose frame index differs from the current frame by more than the threshold;
(6) searching the Kd-tree for candidate frames with the ringKey vector of the current frame by a nearest-neighbor search algorithm;
(7) performing local column shifts of the current frame's two-dimensional image along the principal-axis direction and comparing the similarity of each shifted frame with the unshifted candidate frame; if the distance similarity is smaller than a closed-loop detection threshold, a closed loop is considered to be formed.
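Steps (4)-(6) amount to building a KD-tree over the ringKey vectors of sufficiently old frames and querying it with the current frame's ringKey. The following is a minimal sketch of that candidate search using scipy's `cKDTree`; the function name and parameters (`exclude_recent`, `k`) are illustrative choices, not from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_candidates(ring_keys, cur_idx, cur_key, exclude_recent=50, k=3):
    """Search for loop-closure candidates among frames at least
    `exclude_recent` frames older than the current frame (steps (4)-(6))."""
    old = ring_keys[: max(0, cur_idx - exclude_recent)]
    if len(old) == 0:
        return []                        # not enough history accumulated yet
    tree = cKDTree(np.asarray(old))      # KD-tree over historical ringKey vectors
    k = min(k, len(old))
    dists, idxs = tree.query(cur_key, k=k)
    return list(np.atleast_1d(idxs))     # indices of candidate frames
```

Excluding recent frames enforces the "far enough from the current frame" condition, so the robot does not "close a loop" against frames it just observed.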
2. The robot closed-loop detection method based on a three-dimensional point cloud according to claim 1, wherein step (2) comprises the following sub-steps:
(2.1) processing the down-sampled point cloud obtained in step (1) with the PCA algorithm to determine the principal-axis direction;
(2.2) taking the down-sampled point cloud output in step (1) as an input sample set X = {x_1, x_2, …, x_m}, where the down-sampled point cloud has size m×3, m is the number of points, and x_i denotes the i-th point; all input samples are centered:

x_i ← x_i − (1/m) Σ_{j=1}^{m} x_j
(2.3) calculating the covariance matrix Cov(X) of the centered input sample set:

Cov(X) = (1/m) XᵀX
(2.4) performing eigenvalue decomposition on the covariance matrix Cov(X):

Cov(X) = QΛQ⁻¹
(2.5) extracting the eigenvectors Q(1), Q(2), Q(3) corresponding to the three largest eigenvalues to form a new transformation matrix W:

W = [Q(1), Q(2), Q(3)]
(2.6) multiplying the down-sampled point cloud by the transformation matrix W to obtain the transformed three-dimensional point cloud data X′:

X′ = XW.
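Sub-steps (2.2)-(2.6) above can be sketched directly with numpy's eigendecomposition. This is a minimal illustration under the stated formulas (centering, covariance, eigendecomposition, projection onto the top three eigenvectors); the function name is an assumption.

```python
import numpy as np

def pca_align(points):
    """Rotate an (m, 3) point cloud so its axes follow the principal
    directions of the point distribution (sub-steps (2.2)-(2.6))."""
    X = points - points.mean(axis=0)     # (2.2) center the samples
    cov = X.T @ X / X.shape[0]           # (2.3) covariance matrix Cov(X)
    eigvals, Q = np.linalg.eigh(cov)     # (2.4) eigendecomposition (symmetric)
    order = np.argsort(eigvals)[::-1]    # sort eigenvalues in descending order
    W = Q[:, order]                      # (2.5) W = [Q(1), Q(2), Q(3)]
    return X @ W                         # (2.6) X' = XW
```

After this transform the cloud's largest-variance direction always lies along the first axis, which is what makes the descriptor invariant to the robot's heading.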
3. The robot closed-loop detection method based on a three-dimensional point cloud according to claim 1, wherein step (3) comprises the following sub-steps:
(3.1) performing a two-dimensional projection of the transformed three-dimensional point cloud to obtain two-dimensional coordinate points;
(3.2) converting the Cartesian coordinate values of each coordinate point obtained by the two-dimensional projection in step (3.1) into polar coordinate values; rasterizing the coordinate-transformed point cloud to obtain the grid occupancy;
Let the Cartesian coordinates of a point in the point cloud be p(x, y, z) and its polar coordinates be p_r = [r, θ], with each rasterized bin of size [gap, angle]; then:

r = √(x² + y²), θ = arctan(y/x)

The coordinates of the point after rasterization are:

p_s = [r/gap, θ/angle]
(3.3) assigning a value to each grid bin according to the grid occupancy of the point cloud to generate the dimension-reduced two-dimensional image of the point cloud;
(3.4) obtaining the ringKey vector from the dimension-reduced two-dimensional image of the point cloud.
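Step (3) can be sketched as follows: project to the x-y plane, convert to polar coordinates, bin by [gap, angle], keep the maximum height per bin (as described in the summary), and summarize each ring for the ringKey. Assumptions are flagged: the grid sizes (`num_rings`, `num_sectors`, `max_range`) are illustrative, and the per-ring occupancy ratio used for the ringKey is one common choice consistent with "the approximate point distribution at different distances", not necessarily the patent's exact definition.

```python
import numpy as np

def make_descriptor(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Rasterize an (m, 3) cloud into a (num_rings, num_sectors) polar grid
    where each bin holds the maximum z of the points falling into it,
    then derive a ringKey vector (one entry per ring)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2)                 # radial coordinate
    theta = np.arctan2(y, x) + np.pi         # angle mapped to [0, 2*pi]
    gap = max_range / num_rings              # radial bin size
    angle = 2 * np.pi / num_sectors          # angular bin size
    img = np.zeros((num_rings, num_sectors))
    ring = np.minimum((r / gap).astype(int), num_rings - 1)
    sector = np.minimum((theta / angle).astype(int), num_sectors - 1)
    for i, j, h in zip(ring, sector, z):
        img[i, j] = max(img[i, j], h)        # max height per bin
    ring_key = (img != 0).mean(axis=1)       # occupancy ratio of each ring
    return img, ring_key
```

Because the ringKey averages over sectors, it is rotation-invariant and cheap to compare, which is exactly why it can drive the Kd-tree candidate search of steps (5)-(6).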
4. The robot closed-loop detection method based on a three-dimensional point cloud according to claim 1, wherein step (7) comprises the following sub-steps:
(7.1) realizing similarity detection by computing the similarity of each pair of corresponding columns and averaging; each column of each frame image is a vector, and the similarity of corresponding columns of different images is measured by the cosine of the angle between the two vectors:

d(I_q, I_c) = (1/N_s) Σ_{j=1}^{N_s} (1 − (c_q^j · c_c^j) / (‖c_q^j‖ ‖c_c^j‖))

wherein d(I_q, I_c) is the distance between the two images, I_q is the current frame image, I_c is the candidate frame image, N_s is the number of columns of the image, c_q^j is the j-th column of the current frame, c_c^j is the j-th column of the candidate frame, q denotes the current frame, and c denotes the candidate frame;
(7.2) taking the minimum of the distance measurements over all local column shifts as the measure of similarity between the two frames of point cloud; if this minimum is smaller than the closed-loop detection threshold, the two frames are considered to form a closed loop:

D(I_q, I_c) = min_n d(I_q^n, I_c)

wherein I_q^n denotes the current frame's two-dimensional image after a column shift of n, and D(I_q, I_c) is the final distance measure between the current frame's and the candidate frame's two-dimensional images, i.e., the distance similarity between the current frame point cloud and the candidate frame point cloud.
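The two formulas of claim 4 translate directly to code: a mean column-wise cosine distance, minimized over local column shifts of the current frame. The sketch below assumes numpy; the shift range `max_shift` is an illustrative parameter, not a value from the patent.

```python
import numpy as np

def frame_distance(Iq, Ic):
    """Mean column-wise cosine distance d(I_q, I_c) between two
    descriptor images of identical shape (sub-step (7.1))."""
    num = (Iq * Ic).sum(axis=0)                              # column dot products
    den = np.linalg.norm(Iq, axis=0) * np.linalg.norm(Ic, axis=0)
    cos = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return (1.0 - cos).mean()

def loop_distance(Iq, Ic, max_shift=3):
    """D(I_q, I_c): minimum distance over local column shifts of the
    current frame, absorbing small heading and rasterization offsets
    (sub-step (7.2))."""
    return min(frame_distance(np.roll(Iq, s, axis=1), Ic)
               for s in range(-max_shift, max_shift + 1))
```

A closed loop is declared when `loop_distance` falls below the detection threshold; shifting only a few columns (rather than all, as in full ScanContext alignment) is what keeps the comparison cheap after the PCA pre-alignment of step (2).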
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111410898.6A CN113838051B (en) | 2021-11-25 | 2021-11-25 | Robot closed-loop detection method based on three-dimensional point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113838051A CN113838051A (en) | 2021-12-24 |
CN113838051B true CN113838051B (en) | 2022-04-01 |
Family
ID=78971767
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111410898.6A Active CN113838051B (en) | 2021-11-25 | 2021-11-25 | Robot closed-loop detection method based on three-dimensional point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113838051B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109242873A (en) * | 2018-08-22 | 2019-01-18 | 浙江大学 | A method of 360 degree of real-time three-dimensionals are carried out to object based on consumer level color depth camera and are rebuild |
US10634793B1 (en) * | 2018-12-24 | 2020-04-28 | Automotive Research & Testing Center | Lidar detection device of detecting close-distance obstacle and method thereof |
CN111929699A (en) * | 2020-07-21 | 2020-11-13 | 北京建筑大学 | Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110196044A (en) * | 2019-05-28 | 2019-09-03 | 广东亿嘉和科技有限公司 | It is a kind of based on GPS closed loop detection Intelligent Mobile Robot build drawing method |
Non-Patent Citations (2)
Title |
---|
"Pose-Graph SLAM Using Forward-Looking Sonar"; Jie Li; IEEE Robotics and Automation Letters; 2018-02-27; pp. 2330-2337 *
"Research on Simultaneous Localization and Mapping of Underwater Robots Based on Random Finite Sets"; Fan Yanfu; China Excellent Master's Theses Full-text Database, Engineering Science and Technology II; 2021-04-15; pp. 1-92 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113012203B (en) | High-precision multi-target tracking method under complex background | |
JP6216508B2 (en) | Method for recognition and pose determination of 3D objects in 3D scenes | |
JP5677798B2 (en) | 3D object recognition and position and orientation determination method in 3D scene | |
JP6681729B2 (en) | Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object | |
CN111583369A (en) | Laser SLAM method based on facial line angular point feature extraction | |
CN112907735B (en) | Flexible cable identification and three-dimensional reconstruction method based on point cloud | |
CN114677418B (en) | Registration method based on point cloud feature point extraction | |
JP5063776B2 (en) | Generalized statistical template matching based on geometric transformation | |
CN110930456A (en) | Three-dimensional identification and positioning method of sheet metal part based on PCL point cloud library | |
CN112651944B (en) | 3C component high-precision six-dimensional pose estimation method and system based on CAD model | |
CN113791400B (en) | Stair parameter autonomous detection method based on laser radar | |
CN111798453A (en) | Point cloud registration method and system for unmanned auxiliary positioning | |
CN111402330A (en) | Laser line key point extraction method based on plane target | |
CN111783722B (en) | Lane line extraction method of laser point cloud and electronic equipment | |
CN113168729A (en) | 3D shape matching method and device based on local reference coordinate system | |
CN109255815A (en) | A kind of object detection and recognition methods based on order spherical harmonic | |
CN117372489B (en) | Point cloud registration method and system for double-line structured light three-dimensional measurement | |
CN117629215A (en) | Chassis charging pile-returning method based on single-line laser radar point cloud registration | |
CN113838051B (en) | Robot closed-loop detection method based on three-dimensional point cloud | |
CN116952154A (en) | Method and system for measuring depth of mouth of detonating tube assembly based on machine vision | |
Shang et al. | Model-based tracking by classification in a tiny discrete pose space | |
JPH07146121A (en) | Recognition method and device for three dimensional position and attitude based on vision | |
CN113112471B (en) | Target detection method based on RI-HOG characteristics and rapid pyramid | |
CN114782689A (en) | Point cloud plane segmentation method based on multi-frame data fusion | |
JP2001143072A (en) | Object shape identifying apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||