CN109887028B - Unmanned vehicle auxiliary positioning method based on point cloud data registration - Google Patents

Unmanned vehicle auxiliary positioning method based on point cloud data registration

Info

Publication number
CN109887028B
CN109887028B
Authority
CN
China
Prior art keywords
point cloud
scale
registration
sphere
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910020386.5A
Other languages
Chinese (zh)
Other versions
CN109887028A (en)
Inventor
谢宗霞
王而川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201910020386.5A priority Critical patent/CN109887028B/en
Publication of CN109887028A publication Critical patent/CN109887028A/en
Application granted granted Critical
Publication of CN109887028B publication Critical patent/CN109887028B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides an unmanned vehicle auxiliary positioning method based on point cloud data registration, which comprises the following steps: the unmanned vehicle acquires a panoramic point cloud data map, namely large-scale global point cloud data, of a specified outdoor scene in advance; during driving, the unmanned vehicle scans the point cloud data of its current position, namely a small-scale local point cloud, with a laser radar, registers the small-scale local point cloud against the large-scale global point cloud using a point cloud data registration algorithm, and obtains the registration position coordinates of the small-scale local point cloud within the large-scale global point cloud together with the deflection and translation matrices; finally, the position of the unmanned vehicle in the large-scale global point cloud map is calculated from the registration position coordinates and the deflection and translation matrices. The invention uses only the laser point cloud data of the unmanned vehicle to register the current small-scale local point cloud against the large-scale global point cloud, thereby realizing positioning.

Description

Unmanned vehicle auxiliary positioning method based on point cloud data registration
Technical Field
The invention relates to unmanned vehicle positioning technology, in particular to an unmanned vehicle auxiliary positioning method based on laser point cloud data registration, which relies mainly on laser point cloud data to achieve real-time positioning, including in areas without GPS coverage.
Background
In recent years, with the rapid development of artificial intelligence, unmanned vehicles and automatic driving have attracted great attention and the industry has developed rapidly; at the same time, the positioning of unmanned vehicles has become particularly important. In scenarios where the GPS signal is weak or lost, how to achieve positioning by relying on the laser point cloud becomes very important.
Disclosure of Invention
Therefore, the invention aims to provide a method for positioning an unmanned vehicle based on a laser point cloud data registration algorithm. The positioning method can operate independently of sensors such as GPS, achieving accurate positioning by relying only on laser point cloud data.
The purpose of the invention is realized by the following technical scheme.
An unmanned vehicle auxiliary positioning method based on point cloud data registration comprises the following steps:
step one, the unmanned vehicle acquires a panoramic point cloud data map, namely large-scale global point cloud data, of a specified outdoor scene in advance;
step two, during driving, the unmanned vehicle scans the point cloud data of its current position, namely the small-scale local point cloud, with a laser radar;
step three, the small-scale local point cloud and the large-scale global point cloud are registered with a point cloud data registration algorithm, and the registration position coordinates, the deflection matrix and the translation matrix of the small-scale local point cloud within the large-scale global point cloud are obtained;
step four, the position of the unmanned vehicle in the large-scale global point cloud map is calculated from the registration position coordinates and the deflection and translation matrices.
The registration of the small-scale local point cloud and the large-scale global point cloud in the third step comprises the following steps:
coarse registration: a multi-scale sphere is used in place of traditional key points and local linear structures as the most basic unit for matching; the local three-dimensional geometry is encoded with a deep neural encoder; coarse registration yields the position coordinates in the large-scale global point cloud that may register with the small-scale local point cloud, together with the deflection and translation matrices;
fine registration: starting from the position coordinates and the deflection and translation matrices obtained by coarse registration, an ICP (Iterative Closest Point) fine registration algorithm is used to achieve accurate registration of the two point clouds.
In the coarse registration stage of point cloud registration, a multi-scale sphere replaces traditional key points and local linear structures as the most basic unit for matching; a deep neural encoder encodes the local three-dimensional geometry, and coarse registration then yields the position coordinates in the large-scale global point cloud that may register with the small-scale local point cloud, together with the deflection and translation matrices. The coarse registration process is as follows:
3.1 covering the small-scale local point cloud and the large-scale global point cloud with multi-scale random spheres;
3.2 selecting a normalized local coordinate system for each multi-scale sphere;
3.3 projecting the multi-scale sphere data onto a two-dimensional map;
3.4 performing significance detection and screening on the multi-scale spheres;
3.5 reducing the dimension of the multi-scale spheres with a deep neural network auto-encoder;
3.6 matching the relevant descriptors;
3.7 coarse registration with local search.
The process in step 3.1 of covering the small-scale local point cloud and the large-scale global point cloud with multi-scale random spheres is as follows (an illustrative sketch follows these sub-steps):
4.1 setting the radii of the multi-scale spheres according to the point cloud density: R1, R2 and R3;
4.2 randomly selecting a point P that does not yet belong to any multi-scale sphere;
4.3 with P as the center, all points within the sphere of radius R1 are defined as a new sphere;
4.4 cycling continuously; after the complete point cloud has been covered, selecting a new radius and repeating the covering step;
4.5 under the same radius, screening out spheres that intersect or are covered, and establishing a multi-scale sphere set with three radii.
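For illustration only, a minimal Python sketch of the covering in steps 4.1-4.5 is given below; it is an assumption of this description rather than the patented implementation, and the library choice (NumPy/SciPy), the placeholder radii and the omission of the screening of sub-step 4.5 are all choices of the sketch.

import numpy as np
from scipy.spatial import cKDTree

def random_sphere_cover(points, radii=(1.0, 2.0, 4.0), rng=None):
    # Cover an (N, 3) point cloud with random spheres at each radius.
    # The radii are placeholders; the patent derives them from point density.
    rng = np.random.default_rng() if rng is None else rng
    tree = cKDTree(points)
    cover = {}
    for r in radii:                                    # 4.1 / 4.4: one pass per radius
        uncovered = np.ones(len(points), dtype=bool)
        spheres = []
        while uncovered.any():
            p = rng.choice(np.flatnonzero(uncovered))  # 4.2: random point outside all spheres
            members = np.asarray(tree.query_ball_point(points[p], r))  # 4.3: new sphere
            spheres.append(members)
            uncovered[members] = False
        cover[r] = spheres      # 4.5 (screening of overlapping spheres) is omitted in this sketch
    return cover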
The process in step 3.2 of selecting a normalized local coordinate system for each sphere is as follows (a sketch follows these sub-steps):
5.1 taking the center of mass of the sphere as the origin, a coordinate system of the super-point set is established by applying singular value decomposition to the estimated covariance matrix of the points within the sphere;
5.2 for a surface-like point set the covariance has two large eigenvalues of similar size and one slightly smaller eigenvalue, the points being spread mainly along two directions; the z-axis is set to the third eigenvector;
5.3 the average height of the points in discrete sector slices of the sphere is calculated and inserted into a polar histogram; the x-axis is set to the direction of the sector with the maximum value.
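As a rough illustration of steps 5.1-5.3, the sketch below builds a normalized frame from the covariance eigenvectors and a polar height histogram; the number of sectors and the NumPy-based implementation are assumptions of this description, not requirements of the patent.

import numpy as np

def normalized_local_frame(sphere_pts, n_sectors=36):
    # 5.1: SVD of the covariance of the points inside the sphere, centroid as origin.
    centroid = sphere_pts.mean(axis=0)
    centered = sphere_pts - centroid
    _, _, vt = np.linalg.svd(np.cov(centered.T))
    z_axis = vt[2]                                   # 5.2: third (smallest-variance) eigenvector
    # 5.3: polar histogram of the mean height over angular sectors around the z-axis.
    heights = centered @ z_axis
    planar = centered - np.outer(heights, z_axis)
    ref, ref_perp = vt[0], np.cross(vt[2], vt[0])
    angles = np.arctan2(planar @ ref_perp, planar @ ref)
    bins = np.clip(((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int), 0, n_sectors - 1)
    mean_h = np.array([heights[bins == b].mean() if (bins == b).any() else -np.inf
                       for b in range(n_sectors)])
    theta = -np.pi + (mean_h.argmax() + 0.5) * 2 * np.pi / n_sectors
    x_axis = np.cos(theta) * ref + np.sin(theta) * ref_perp   # sector with the maximum value
    y_axis = np.cross(z_axis, x_axis)
    return centroid, np.stack([x_axis, y_axis, z_axis])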
The process in step 3.3 of projecting the multi-scale sphere data onto a two-dimensional map is as follows (a sketch follows these sub-steps):
6.1 converting the point cloud data within the sphere into a square discrete image with side length d;
6.2 projecting the Z-axis height of each point onto the corresponding pixel of the depth map;
6.3 scaling the sphere to a square image with side length d1;
6.4 removing the arc-shaped edge of the sphere in the depth map and scaling the image to a square depth map with side length d.
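The projection of steps 6.1-6.4 could be realized roughly as in the sketch below; the side length of 32 pixels, the max-height rule for pixels receiving several points and the cropping margin are assumptions made for illustration.

import numpy as np

def sphere_to_depth_map(local_pts, radius, d=32):
    # local_pts: points of one sphere, already expressed in its normalized local frame.
    # 6.1 / 6.3: map the x-y extent of the sphere onto a square d x d grid.
    ij = np.clip(((local_pts[:, :2] + radius) / (2 * radius) * d).astype(int), 0, d - 1)
    depth = np.full((d, d), np.nan)
    for (i, j), z in zip(ij, local_pts[:, 2]):
        # 6.2: project the Z-axis height of each point onto the corresponding pixel.
        depth[i, j] = z if np.isnan(depth[i, j]) else max(depth[i, j], z)
    # 6.4: crop the arc-shaped border so that only the inscribed square remains.
    margin = int(d * (1 - 1 / np.sqrt(2)) / 2)
    return depth[margin:d - margin, margin:d - margin]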
Step 3.4 performs significance detection and screening on the depth maps obtained from the multi-scale sphere mapping (a sketch follows these sub-steps):
7.1 density detection: spheres containing fewer than N points are filtered out;
7.2 geometric detection: super-points reflecting a flat surface, i.e. super-points of low height, are filtered out;
7.3 significance detection: each super-point depth map is reshaped into a single depth vector of length d x d;
Principal Component Analysis (PCA) is performed on the set of depth vectors, and super-point sets that can be represented using only the first three principal vectors are filtered out.
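A minimal sketch of the three screening tests in steps 7.1-7.3 follows; the thresholds min_points, min_height and resid_thresh are placeholders introduced here, since the patent only names N and "lower height".

import numpy as np

def filter_superpoints(depth_maps, point_counts, min_points=50, min_height=0.3, resid_thresh=1e-3):
    # depth_maps: list of d x d arrays (NaN = empty pixel); point_counts: points per sphere.
    vecs = np.stack([np.nan_to_num(m).ravel() for m in depth_maps])          # 7.3: d*d depth vectors
    keep = np.array(point_counts) >= min_points                              # 7.1: density test
    height_range = np.array([np.nanmax(m) - np.nanmin(m) for m in depth_maps])
    keep &= height_range >= min_height                                       # 7.2: drop flat, low spheres
    # PCA on the depth-vector set: spheres explained by the first three components are dropped.
    centered = vecs - vecs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    resid = centered - (centered @ vt[:3].T) @ vt[:3]
    keep &= np.linalg.norm(resid, axis=1) > resid_thresh * np.linalg.norm(centered, axis=1).clip(min=1e-9)
    return keep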
Step 3.5 reduces the dimension of each multi-scale sphere with a deep neural network auto-encoder (a sketch follows these sub-steps):
8.1 a deep learning neural network can achieve state-of-the-art image compression and is divided into an encoder and a decoder. The encoder starts at the input layer and connects to hidden layers whose dimensions are gradually reduced until the desired compact dimension is reached; the decoder starts from the compact representation of the data, and the hidden-layer dimension gradually increases until the output layer is the same size as the input layer.
8.2 the dimensions of the deep learning network are: 1032 (input), [1032,128] (L1), [128,10] (L2), [10,128] (L3), [128,1032] (L4), 1032 (output). This network structure is initially trained with a depth map set containing one hundred thousand super-point depth maps. Training stops when the output reflects the approximate geometric characteristics of the super-point sets.
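For illustration, an auto-encoder with exactly the stated layer sizes is sketched below in PyTorch; the framework, the dropout rate and the reconstruction loss are assumptions of this description (the embodiment itself describes a Caffe environment).

import torch
import torch.nn as nn

class DepthMapAutoEncoder(nn.Module):
    # 1032 -> 128 -> 10 -> 128 -> 1032, sigmoid activations, dropout on the input layer.
    def __init__(self, in_dim=1032, hidden=128, code_dim=10, p_drop=0.2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Dropout(p_drop),                          # DO applied to the input layer
            nn.Linear(in_dim, hidden), nn.Sigmoid(),     # L1
            nn.Linear(hidden, code_dim), nn.Sigmoid())   # L2 -> compact 10-d code
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, hidden), nn.Sigmoid(),   # L3
            nn.Linear(hidden, in_dim))                   # L4 -> reconstructed depth vector

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Training sketch: minimize the reconstruction error of the depth vectors, e.g.
# loss = nn.functional.mse_loss(model(batch)[0], batch)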
Step 3.6 matches the relevant descriptors:
and the vectors output by the deep learning network represent the characteristic vectors of the point cloud sphere, the next step is to select similar super point set groups for matching, the vector sets are used for matching, and the vector set of the small-scale local point cloud system is matched with the super point set of the large-scale global point cloud according to the calculated Euclidean distance.
Step 3.7 uses local search for coarse registration:
and matching the vector set of the small-scale local point cloud system with the super point set of the large-scale global point cloud, roughly registering by adopting local search, taking the registered point cloud with the Euclidean distance smaller than a certain threshold value from the vector machine of the small-scale local point cloud in the large-scale global point cloud as the point cloud to be registered, and recording the position coordinates, the translation matrix and the deflection matrix of the point cloud to be registered.
Advantageous effects
According to the unmanned vehicle auxiliary positioning method based on point cloud data registration, the unmanned vehicle first needs to drive in a specified outdoor scene in advance and obtain a global point cloud data map. During driving, the point cloud data of the vehicle's current position is scanned with the laser radar to obtain a local small-scale point cloud; registration is performed between the global point cloud data and the local small-scale point cloud data to obtain the registered translation and rotation matrices and the position of the small-scale point cloud within the large-scale point cloud, from which the positioning information of the unmanned vehicle is calculated. The point cloud registration of the invention uses a set of multi-scale spheres and deep auto-encoding to achieve coarse registration of outdoor point clouds; when the registration error is smaller than a certain threshold, fine registration of the point clouds is achieved with the ICP (Iterative Closest Point) algorithm. The invention can assist GNSS signals, achieves more accurate positioning of the unmanned vehicle, and has the advantages of real-time performance and a small amount of computation.
The technology can obtain real-time positioning information without a satellite positioning system such as GPS, relying only on laser point cloud data. As an auxiliary positioning technology, it also improves the accuracy and reliability of unmanned vehicle positioning.
Drawings
FIG. 1 is a schematic diagram of an assisted positioning scenario of the present invention;
FIG. 2 is a flow chart of the point cloud registration process of the present invention;
FIG. 3 is a diagram of a neural network for depth map coding in accordance with the present invention;
FIG. 4 is a point cloud depth map of the plane of the present invention.
Detailed description of the invention
In order to make the purpose, technical solution and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
According to the method, a point cloud registration technology is utilized, the position coordinates, the rotation matrix and the translation matrix of the small-scale point cloud in the large-scale point cloud are obtained through registration of the large-scale outdoor point cloud and the local small-scale point cloud, and accurate positioning information of the unmanned vehicle is further obtained through calculation. Before registration, the unmanned vehicle needs to acquire global large-scale point cloud data in an outdoor scene in advance and form a high-precision point cloud map. After the high-precision point cloud map is obtained, small-scale point cloud data around the unmanned vehicle can be collected in real time in the driving process of the unmanned vehicle, and then the position information, the translation matrix and the rotation matrix of the small-scale point cloud data in the large-scale global point cloud are solved through a registration algorithm, so that the positioning information of the unmanned vehicle is obtained.
The algorithm platform of the invention is the NVIDIA Drive PX2. A Caffe environment is set up on the platform, with CUDA and cuDNN used to accelerate processing and achieve real-time positioning. An unmanned vehicle auxiliary positioning method based on point cloud data registration comprises the following steps:
step one, the unmanned vehicle collects a panoramic point cloud data map, namely large-scale global point cloud data, of a specified outdoor scene in advance;
step two, during driving, the unmanned vehicle scans the point cloud data of its current position, namely the small-scale local point cloud, with a laser radar;
step three, the small-scale local point cloud and the large-scale global point cloud are registered with a point cloud data registration algorithm, and the registration position coordinates, the deflection matrix and the translation matrix of the small-scale local point cloud within the large-scale global point cloud are obtained;
step four, the position of the unmanned vehicle in the large-scale global point cloud map is calculated from the registration position coordinates and the deflection and translation matrices.
Referring to fig. 1 to 4, in the coarse registration stage of the point cloud registration, a multi-scale sphere replaces the conventional key points and local linear structures as the most basic unit for matching; a deep neural encoder encodes the local three-dimensional geometry, and coarse registration then yields the position coordinates in the large-scale global point cloud that may register with the small-scale local point cloud, together with the deflection and translation matrices. The coarse registration process is as follows:
the method comprises the following steps:
1. extracting super-points from the point cloud with a novel random sphere covering algorithm;
2. selecting a normalized local coordinate system for each super-point;
3. projecting the super-point data onto a two-dimensional map;
4. performing significance detection and screening on the super-points;
5. reducing the dimension with a deep neural network auto-encoder;
6. matching the relevant descriptors;
7. performing coarse registration with local search;
8. performing accurate adjustment with the iterative closest point (ICP) method.
The first step covers the small-scale local point cloud and the large-scale global point cloud with multi-scale random spheres: the radii of the multi-scale spheres are set according to the point cloud density as R1, R2 and R3; a point P that does not yet belong to any multi-scale sphere is selected at random; with P as the center, all points within the sphere of radius R1 are defined as a new sphere; this cycle continues, and after the complete point cloud has been covered, a new radius is selected and the covering step is repeated; under the same radius, spheres that intersect or are covered are screened out, and a multi-scale sphere set with three radii is established.
The second step selects a normalized local coordinate system for each super-point set. Taking the center of mass of the super-point set as the origin, a coordinate system of the super-point set is established by applying singular value decomposition to the estimated covariance matrix of the points within the set. For a surface-like point set the covariance has two large eigenvalues of similar size and one slightly smaller eigenvalue, with the points spread mainly along two directions; the z-axis is set to the third eigenvector. The average height of the points in discrete sector slices of the super-point set is calculated and inserted into a polar histogram, and the x-axis is set to the direction of the sector with the maximum value.
In the third step, the super-point sets could be paired once a local coordinate system has been established, but the result would be affected by point density and by random noise, so dimensionality reduction is performed first. The position data of the points are converted into a discrete image of size [d_im1, d_im1], the Z-axis height of each point is projected onto the corresponding pixel of the depth map, the super-point set is scaled to the d_im1 image, and the arc-shaped edge of the super-point set in the depth map is removed to scale the image to size [d_im2, d_im2]. The super-points are thus projected as a depth map. The next step is to apply maximum filtering and mean filtering to the image to reduce the influence of noise and of differing densities. The reconstructed image preserves the geometric characteristics and quality of the original super-point set well and also completely covers unknown sparse areas.
The fourth step performs significance detection and screening. In this step, irrelevant super-points are filtered out according to three criteria: density, geometry and level of significance. (1) Density detection is not only based on an absolute value but also on comparison: super-points with fewer than N_d points are filtered out, and a super-point set with relatively few points is also filtered out when compared with its K nearest super-point sets. (2) Geometric detection: super-points reflecting a flat surface, i.e. super-points of low height, are filtered out. (3) Significance detection: each super-point depth map is reshaped into a single "depth vector" of length d_im2^2.
The fifth step is the deep auto-encoder (DAE) dimension-reduction network. A DAE neural network can achieve state-of-the-art image compression, and here the technique is used to obtain a compact representation of the 2.5-dimensional super-point geometry in the depth map. The encoder and the decoder are the two most important components of the DAE neural network: the encoder starts at the input layer and connects to hidden layers whose dimensions are gradually reduced until the desired compact dimension is reached; the decoder starts from the compact representation of the data, and the hidden-layer dimension gradually increases until the output layer is the same size as the input layer.
The DAE network consists of four fully connected hidden layers; a sigmoid nonlinear activation function is applied between each pair of layers, and a Dropout (DO) algorithm is applied to the input layer. The fourth and first, and the third and second, hidden layers form mirror pairs of the same size. The dimensions of the structure are: 1032 (input), [1032,128] (L1), [128,10] (L2), [10,128] (L3), [128,1032] (L4), 1032 (output). This DAE network structure is initially trained with a depth map set containing one hundred thousand super-point depth maps.
In the DAE network of the invention, an input depth map is first reduced to a compact 10-dimensional representation and then reconstructed by the decoder to the original dimensions. Color represents height in the depth map: blue represents a height of 0 and red represents the greatest height. Although the reconstructed image is not exactly identical to the input image, it already reflects the approximate geometry of the super-point set.
After the SAF vector output for each super-point set by the DAE network has been obtained, similar super-point set groups are selected for pairing: according to the Euclidean distance between the measured SAF features, each super-point set in the local point cloud is paired with the K nearest super-point sets in the global point cloud, where K equals 3. When the distance from the current super-point set to the (i+1)-th nearest super-point set is significantly larger than to the i-th nearest, candidates (i+1) through K are screened out. The vectors of the reduced spheres are grouped to form a vector set, pairing is performed with this vector set, and the vector set of the small-scale local point cloud is paired with the super-point sets of the large-scale global point cloud according to the computed Euclidean distance.
The seventh step performs coarse registration by local search. Coarse registration uses the combination of multi-scale spheres and deep-learning auto-encoding: groups of 6 candidate pairs are selected iteratively, the corresponding transformation is computed for each group, and a consistency check is carried out by measuring the average physical distance between each transformed point of the local point cloud and its nearest super-point set in the global point cloud; roughly 10,000 candidates are tested at random. Because the local point cloud may not fully overlap the global one, the 5 best point cloud transformations (T1 ... T5) are recorded instead of only the single transformation that gives the best coarse registration result; after accurate registration, the best-performing one is kept as the final result. After the vector set of the small-scale local point cloud has been matched with the super-point sets of the large-scale global point cloud, coarse registration is performed by local search: the point clouds in the large-scale global point cloud whose Euclidean distance to the vector set of the small-scale local point cloud is smaller than a certain threshold are taken as the point clouds to be registered, and their position coordinates, translation matrices and deflection matrices are recorded.
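The consistency check described above could be scored as in the following sketch, which ranks candidate transforms by the mean distance between the transformed local points and their nearest global points and keeps the five best (T1 ... T5); the KD-tree and this particular score are assumptions of the sketch.

import numpy as np
from scipy.spatial import cKDTree

def rank_candidate_transforms(candidates, local_pts, global_pts, keep=5):
    # candidates: list of (R, t) pairs from the coarse matching stage.
    tree = cKDTree(global_pts)
    scores = []
    for R, t in candidates:
        moved = local_pts @ R.T + t
        nn_dist, _ = tree.query(moved)
        scores.append(nn_dist.mean())          # mean physical distance to nearest global point
    order = np.argsort(scores)[:keep]
    return [candidates[i] for i in order], [scores[i] for i in order]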
The eighth step performs fine adjustment with the Iterative Closest Point (ICP) method, initialized with the 5 point cloud transformations selected in the previous step; the registration with the lowest ICP loss is selected as the output point cloud transformation of the method. In the fine registration stage the ICP algorithm further optimizes the translation matrix and the deflection matrix of the coarse registration, giving a more accurately matched translation matrix and deflection matrix.
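As an illustration of this fine-registration step, the sketch below uses Open3D's point-to-point ICP as a stand-in for the ICP stage; the library, the correspondence distance and the use of the inlier RMSE as the loss are assumptions of this description.

import numpy as np
import open3d as o3d

def refine_with_icp(local_pts, global_pts, init_transforms, max_corr_dist=0.5):
    # Run ICP from each coarse 4x4 candidate transform (T1 ... T5) and keep the
    # result with the lowest inlier RMSE as the final registration.
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(local_pts))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(global_pts))
    best = None
    for T0 in init_transforms:
        res = o3d.pipelines.registration.registration_icp(
            src, dst, max_corr_dist, T0,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if best is None or res.inlier_rmse < best.inlier_rmse:
            best = res
    return best.transformation, best.inlier_rmse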
Fine adjustment is needed in this step because the coarse registration result is not necessarily the correct registration. In the ninth step the point cloud map is queried with the registration result to obtain the positioning information of the unmanned vehicle: from the translation matrix and the rotation matrix obtained by fine registration and the registered coordinates in the large-scale global point cloud, the position of the unmanned vehicle in the large-scale global point cloud map is calculated.
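Finally, a trivial sketch of how the vehicle position can be read off from the fine-registration rotation R and translation t, assuming the local scan is expressed in a lidar frame centred on the vehicle:

import numpy as np

def vehicle_position_in_map(R, t):
    # The origin of the local scan (the vehicle), mapped through the registration
    # transform, is the vehicle position in the global point cloud map; it equals t.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return (T @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]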
In the present example, real-time positioning of the unmanned vehicle is achieved without repeatedly surveying the outdoor environment. The unmanned vehicle in the example achieves automatic positioning on the north campus of Tianjin University; the positioning is correct and the accuracy is within 1 meter.
According to the unmanned vehicle auxiliary positioning method based on point cloud data registration, the unmanned vehicle first needs to drive in a specified outdoor scene in advance and obtain a global point cloud data map. During driving, the point cloud data of the vehicle's current position is scanned with the laser radar to obtain a local small-scale point cloud; registration is performed between the global point cloud data and the local small-scale point cloud data to obtain the registered translation and rotation matrices and the position of the small-scale point cloud within the large-scale point cloud, from which the positioning information of the unmanned vehicle is calculated. The technology can obtain real-time positioning information without a satellite positioning system such as GPS, relying only on laser point cloud data; as an auxiliary positioning technology it also improves the accuracy and reliability of unmanned vehicle positioning. Experiments on a campus show that the method meets real-time requirements and has high precision: the error can be controlled within 1 meter, and accurate positioning was achieved in multiple tests.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the scope of protection of the present invention.

Claims (9)

1. An unmanned vehicle auxiliary positioning method based on point cloud data registration is characterized by comprising the following steps:
step one, the unmanned vehicle collects a panoramic point cloud data map, namely large-scale global point cloud data, of a specified outdoor scene in advance;
step two, during driving, the unmanned vehicle scans the point cloud data of its current position, namely the small-scale local point cloud, with a laser radar;
step three, the small-scale local point cloud and the large-scale global point cloud are registered with a point cloud data registration algorithm, and the registration position coordinates, the deflection matrix and the translation matrix of the small-scale local point cloud within the large-scale global point cloud are obtained; wherein:
the registration of the small-scale local point cloud and the large-scale global point cloud in the third step comprises the following steps:
coarse registration: a multi-scale sphere is used in place of traditional key points and local linear structures as the most basic unit for matching; the local three-dimensional geometry is encoded with a deep neural encoder; coarse registration yields the position coordinates in the large-scale global point cloud that can register with the small-scale local point cloud, together with the deflection and translation matrices;
fine registration: on the basis of the position coordinates obtained by coarse registration and a deflection and translation matrix, an ICP fine registration algorithm is adopted to realize accurate registration of the two point cloud data;
and step four, calculating the position of the unmanned vehicle in the large-scale global point cloud map by means of the registration position coordinates and the deflection and translation matrix.
2. The unmanned vehicle-assisted positioning method based on point cloud data registration of claim 1, wherein in the coarse registration stage of the point cloud registration a multi-scale sphere replaces traditional key points and local linear structures as the most basic unit of matching; a deep neural encoder encodes the local three-dimensional geometry, and coarse registration then yields the position coordinates in the large-scale global point cloud that may register with the small-scale local point cloud, together with the deflection and translation matrices; the coarse registration process is as follows:
3.1, covering small-scale local point clouds and large-scale global point clouds by using multi-scale random spheres;
3.2 selecting a normalized local coordinate system for each multi-scale sphere;
3.3 projecting the multi-scale sphere data onto the two-dimensional graph;
3.4, carrying out significance detection and screening on the multi-scale spheres;
3.5, reducing the dimension of the multi-scale sphere by a deep neural network automatic encoder;
3.6 matching the related descriptors;
3.7 coarse registration with local search.
3. The unmanned vehicle-assisted positioning method based on point cloud data registration as claimed in claim 2, wherein the 3.1 step covers small-scale local point cloud and large-scale global point cloud processes with multi-scale random spheres:
4.1, setting the radius of the multi-scale sphere according to the point cloud density: r1, R2 and R3;
4.2 randomly selecting a point P which does not belong to any multi-scale sphere;
4.3 with P as the center, points in the sphere of R1 are all defined as a new sphere;
4.4, continuously circulating, selecting a new radius after covering complete point cloud data, and repeating the covering step again;
4.5 under the same radius, screening out crossed or covered spheres and establishing a multi-scale sphere set with three radii.
4. The method of unmanned vehicle-assisted positioning based on point cloud data registration of claim 2, wherein the 3.2 step selects a normalized local coordinate system process for each sphere:
5.1 establishing a coordinate system of a super-point set by using a singular value decomposition method for the pre-estimated covariance matrix of the points in the sphere range by taking the center of mass of the sphere as an origin;
5.2 for a surface-like point set the covariance has two large eigenvalues of similar size and one slightly smaller eigenvalue, the points being spread mainly along two directions; the z-axis is set to the third eigenvector;
5.3 the average height of the points in discrete sector slices of the sphere is calculated and inserted into a polar histogram; the x-axis is set to the direction of the sector with the maximum value.
5. The unmanned vehicle assisted localization method based on point cloud data registration of claim 2, wherein the 3.3 step of projecting the multi-scale sphere data onto the two-dimensional map comprises the following steps:
6.1, converting the point cloud data in the sphere into a square discrete image form with side length of d;
6.2 projecting the Z-axis height of each point onto the depth map of each corresponding pixel;
6.3 scaling the sphere to a square image with side length d1;
6.4 removing the arc-shaped edge of the sphere in the depth map scales the image into a square depth map with side length d.
6. The unmanned vehicle assisted localization method based on point cloud data registration of claim 2, wherein the 3.4 steps of saliency detection and screening process of the multi-scale sphere mapped depth map:
7.1 density detection: spheres containing fewer than N points are filtered out;
7.2 geometric detection: super-points reflecting a flat surface, i.e. super-points of low height, are filtered out;
7.3 significance detection: each super-point depth map is reshaped into a single depth vector of length d x d;
Principal Component Analysis (PCA) is performed on the set of depth vectors, and super-point sets that can be represented using only the first three principal vectors are filtered out.
7. The unmanned vehicle-assisted positioning method based on point cloud data registration of claim 2, wherein step 3.5 reduces the dimension of each multi-scale sphere with a deep neural network auto-encoder;
8.1 a deep learning neural network can achieve state-of-the-art image compression and is divided into an encoder and a decoder; the encoder starts at the input layer and connects to hidden layers whose dimensions are gradually reduced until the desired compact dimension is reached; the decoder starts from the compact representation of the data, and the hidden-layer dimension gradually increases until the output layer is the same size as the input layer;
8.2 the dimensions of the deep learning network are: 1032 (input), [1032,128] (L1), [128,10] (L2), [10,128] (L3), [128,1032] (L4), 1032 (output); the network structure is first initially trained with a depth map set containing hundreds of thousands of super-point depth maps; training stops when the output reflects the approximate geometric characteristics of the super-point sets.
8. The unmanned vehicle-assisted positioning method based on point cloud data registration of claim 2, wherein step 3.6 performs the matching of the relevant descriptors; the vectors output by the deep learning network represent the feature vectors of the point cloud spheres, the next step is to select similar super-point set groups for matching, the vector sets are used for matching, and the vector set of the small-scale local point cloud is matched with the super-point sets of the large-scale global point cloud according to the computed Euclidean distance.
9. The unmanned vehicle-assisted positioning method based on point cloud data registration of claim 2, wherein step 3.7 uses local search for coarse registration;
after the vector set of the small-scale local point cloud has been matched with the super-point sets of the large-scale global point cloud, coarse registration is performed by local search: the point clouds in the large-scale global point cloud whose Euclidean distance to the vector set of the small-scale local point cloud is smaller than a certain threshold are taken as the point clouds to be registered, and their position coordinates, translation matrices and deflection matrices are recorded.
CN201910020386.5A 2019-01-09 2019-01-09 Unmanned vehicle auxiliary positioning method based on point cloud data registration Active CN109887028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910020386.5A CN109887028B (en) 2019-01-09 2019-01-09 Unmanned vehicle auxiliary positioning method based on point cloud data registration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910020386.5A CN109887028B (en) 2019-01-09 2019-01-09 Unmanned vehicle auxiliary positioning method based on point cloud data registration

Publications (2)

Publication Number Publication Date
CN109887028A CN109887028A (en) 2019-06-14
CN109887028B true CN109887028B (en) 2023-02-03

Family

ID=66925725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910020386.5A Active CN109887028B (en) 2019-01-09 2019-01-09 Unmanned vehicle auxiliary positioning method based on point cloud data registration

Country Status (1)

Country Link
CN (1) CN109887028B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287873B (en) * 2019-06-25 2021-06-29 清华大学深圳研究生院 Non-cooperative target pose measurement method and system based on deep neural network and terminal equipment
CN112393735B (en) * 2019-08-15 2023-05-30 纳恩博(北京)科技有限公司 Positioning method and device, storage medium and electronic device
WO2021046829A1 (en) * 2019-09-12 2021-03-18 华为技术有限公司 Positioning method, device and system
CN110660103B (en) * 2019-09-17 2020-12-25 北京三快在线科技有限公司 Unmanned vehicle positioning method and device
CN111009002B (en) * 2019-10-16 2020-11-06 贝壳找房(北京)科技有限公司 Point cloud registration detection method and device, electronic equipment and storage medium
CN110827302A (en) * 2019-11-14 2020-02-21 中南大学 Point cloud target extraction method and device based on depth map convolutional network
CN111161412B (en) * 2019-12-06 2024-02-09 苏州艾吉威机器人有限公司 Three-dimensional laser mapping method and system
CN111220993B (en) * 2020-01-14 2020-07-28 长沙智能驾驶研究院有限公司 Target scene positioning method and device, computer equipment and storage medium
CN111553343B (en) * 2020-04-01 2023-04-25 青岛联合创智科技有限公司 Extraction method of laser point cloud characteristics
CN111812658B (en) * 2020-07-09 2021-11-02 北京京东乾石科技有限公司 Position determination method, device, system and computer readable storage medium
CN112835085B (en) * 2020-07-09 2022-04-12 北京京东乾石科技有限公司 Method and device for determining vehicle position
CN112835086B (en) * 2020-07-09 2022-01-28 北京京东乾石科技有限公司 Method and device for determining vehicle position
CN111968179B (en) * 2020-08-13 2022-07-19 厦门大学 Automatic driving vehicle positioning method for underground parking garage
CN112581515B (en) * 2020-11-13 2022-12-13 上海交通大学 Outdoor scene point cloud registration method based on graph neural network
CN112762824B (en) * 2020-12-24 2022-04-22 中南大学 Unmanned vehicle positioning method and system
CN113192197B (en) * 2021-05-24 2024-04-05 北京京东乾石科技有限公司 Global point cloud map construction method, device, equipment and storage medium
CN115248430B (en) * 2021-09-23 2023-08-25 上海仙途智能科技有限公司 Target object positioning method, device, terminal and medium
CN115235477A (en) * 2021-11-30 2022-10-25 上海仙途智能科技有限公司 Vehicle positioning inspection method and device, storage medium and equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3078935A1 (en) * 2015-04-10 2016-10-12 The European Atomic Energy Community (EURATOM), represented by the European Commission Method and device for real-time mapping and localization
EP3333538B1 (en) * 2016-12-07 2020-09-09 Hexagon Technology Center GmbH Scanner vis

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426165A (en) * 2013-06-28 2013-12-04 吴立新 Precise registration method of ground laser-point clouds and unmanned aerial vehicle image reconstruction point clouds
CN104463894A (en) * 2014-12-26 2015-03-25 山东理工大学 Overall registering method for global optimization of multi-view three-dimensional laser point clouds
CN106296693A (en) * 2016-08-12 2017-01-04 浙江工业大学 Based on 3D point cloud FPFH feature real-time three-dimensional space-location method
CN108226938A (en) * 2017-12-08 2018-06-29 华南理工大学 A kind of alignment system and method for AGV trolleies
CN109100730A (en) * 2018-05-18 2018-12-28 北京师范大学-香港浸会大学联合国际学院 A kind of fast run-up drawing method of more vehicle collaborations
CN108868268A (en) * 2018-06-05 2018-11-23 西安交通大学 Based on point to identity distance from the unmanned vehicle position and orientation estimation method being registrated with cross-correlation entropy

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Robust Vehicle Environment Reconstruction from Point Clouds For Irregularity Detection;Ahmed C.Sidiya;《2017 IEEE Intelligent Vehicles Symposium》;20170731;第1085-1092页 *
基于改进ICP算法的点云自动配准技术;钟莹;《控制工程》;20140131;第37-40页 *
基于高斯混合模型-地球移动;李亮;《华中科技大学学报(自然科学版)》;20171031;第65-69页 *

Also Published As

Publication number Publication date
CN109887028A (en) 2019-06-14

Similar Documents

Publication Publication Date Title
CN109887028B (en) Unmanned vehicle auxiliary positioning method based on point cloud data registration
Riegler et al. Octnetfusion: Learning depth fusion from data
CN110533722B (en) Robot rapid repositioning method and system based on visual dictionary
CN110009674B (en) Monocular image depth of field real-time calculation method based on unsupervised depth learning
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
CN110111345B (en) Attention network-based 3D point cloud segmentation method
CN111915677A (en) Ship pose estimation method based on three-dimensional point cloud characteristics
CN110930452A (en) Object pose estimation method based on self-supervision learning and template matching
CN116222577B (en) Closed loop detection method, training method, system, electronic equipment and storage medium
Nagy et al. Online targetless end-to-end camera-LiDAR self-calibration
CN111798453A (en) Point cloud registration method and system for unmanned auxiliary positioning
Huang et al. A coarse-to-fine algorithm for registration in 3D street-view cross-source point clouds
CN115272616A (en) Indoor scene three-dimensional reconstruction method, system, device and storage medium
CN113177593A (en) Fusion method of radar point cloud and image data in water traffic environment
CN112905828A (en) Image retriever, database and retrieval method combined with significant features
CN110363863B (en) Input data generation method and system of neural network
CN116704304A (en) Multi-mode fusion target detection method of mixed attention mechanism
Wang et al. Absolute depth measurement of objects based on monocular vision
Mharolkar et al. RGBDTCalibNet: End-to-end Online Extrinsic Calibration between a 3D LiDAR, an RGB Camera and a Thermal Camera
Hyeon et al. KR-Net: A dependable visual kidnap recovery network for indoor spaces
CN115267724B (en) Position re-identification method of mobile robot capable of estimating pose based on laser radar
Li et al. Feature point extraction and tracking based on a local adaptive threshold
CN116385292A (en) IMU-assisted LiDAR dynamic point cloud eliminating method
CN115601423A (en) Edge enhancement-based round hole pose measurement method in binocular vision scene
CN115984443A (en) Space satellite target image simulation method of visible light camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant