CN109887028A - Unmanned vehicle assisted positioning method based on point cloud data registration - Google Patents

Unmanned vehicle assisted positioning method based on point cloud data registration

Info

Publication number
CN109887028A
CN109887028A (application CN201910020386.5A; granted as CN109887028B)
Authority
CN
China
Prior art keywords
registration
point cloud
cloud
sphere
unmanned vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910020386.5A
Other languages
Chinese (zh)
Other versions
CN109887028B (en)
Inventor
谢宗霞
王而川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201910020386.5A priority Critical patent/CN109887028B/en
Publication of CN109887028A publication Critical patent/CN109887028A/en
Application granted granted Critical
Publication of CN109887028B publication Critical patent/CN109887028B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention provides an unmanned vehicle assisted positioning method based on point cloud data registration, comprising the steps of: the unmanned vehicle acquires in advance a panoramic point cloud data map of a specified outdoor scene, i.e. large-scale global point cloud data; while the unmanned vehicle is driving, the point cloud data of its current location, i.e. a small-scale local point cloud, is scanned by lidar, and a point cloud data registration algorithm registers the small-scale local point cloud against the large-scale global point cloud to obtain the registered position coordinates of the small-scale local point cloud in the large-scale global point cloud, together with the rotation (deflection) and translation matrices; finally, the location of the unmanned vehicle in the large-scale global point cloud map is computed from the registered position coordinates and the rotation and translation matrices. The present invention mainly uses the laser point cloud data of the unmanned vehicle to register the current small-scale local point cloud against the large-scale global point cloud, thereby achieving positioning.

Description

Unmanned vehicle assisted positioning method based on point cloud data registration
Technical field
The present invention relates to unmanned vehicle positioning technology, and in particular to an unmanned vehicle assisted positioning method based on laser point cloud data registration. It is mainly used to achieve real-time positioning that relies on laser point cloud data, so that positioning can still be achieved in areas without GPS coverage.
Background technique
In recent years, with the rapid development of artificial intelligence, unmanned vehicles and autonomous driving have attracted great attention and the industry has developed rapidly; at the same time, the positioning of unmanned vehicles has become particularly important. In scenes where the GPS signal is weak or absent, how to achieve positioning that relies on laser point clouds becomes extremely important.
Summary of the invention
To this end, the purpose of the present invention is to provide an unmanned vehicle positioning method based on a laser point cloud data registration algorithm. The positioning method of the present invention can dispense with sensors such as GPS and rely only on laser point cloud data to achieve precise positioning.
The purpose of the present invention is achieved by the following technical solution.
An unmanned vehicle assisted positioning method based on point cloud data registration, comprising the steps of:
Step 1, the unmanned vehicle acquires in advance a panoramic point cloud data map of a specified outdoor scene, i.e. large-scale global point cloud data;
Step 2, while the unmanned vehicle is driving, the point cloud data of its current location, i.e. a small-scale local point cloud, is scanned by lidar;
Step 3, using a point cloud data registration algorithm, the small-scale local point cloud is registered against the large-scale global point cloud, and the registered position coordinates of the small-scale local point cloud in the large-scale global point cloud, together with the rotation and translation matrices, are obtained;
Step 4, from the registered position coordinates and the rotation and translation matrices, the position of the unmanned vehicle in the large-scale global point cloud map is computed.
In step 3, the registration of the small-scale local point cloud against the large-scale global point cloud comprises:
Coarse registration: multi-scale spheres replace the traditional key points and local linear structures as the most basic matching units, and the local three-dimensional geometry is encoded by a deep neural encoder; coarse registration yields the position coordinates in the large-scale global point cloud that may register with the small-scale local point cloud, together with the rotation and translation matrices.
Fine registration: on the basis of the position coordinates and the rotation and translation matrices obtained by coarse registration, the ICP fine registration algorithm is used to achieve precise registration of the two point clouds.
In the coarse registration stage of the point cloud registration, multi-scale spheres replace the traditional key points and local linear structures as the most basic matching units; the local three-dimensional geometry is encoded by a deep neural encoder, and coarse registration then yields the position coordinates in the large-scale global point cloud that may register with the small-scale local point cloud, together with the rotation and translation matrices. The coarse registration process of the point cloud registration is as follows:
3.1 cover the small-scale local point cloud and the large-scale global point cloud with multi-scale random spheres;
3.2 select a normalized local coordinate system for each multi-scale sphere;
3.3 project the data of each multi-scale sphere onto a two-dimensional map;
3.4 perform saliency detection and screening on the multi-scale spheres;
3.5 reduce the dimensionality of each multi-scale sphere with a deep neural network autoencoder;
3.6 match the resulting descriptors;
3.7 perform coarse registration by local search.
The process of covering the small-scale local point cloud and the large-scale global point cloud with multi-scale random spheres in step 3.1 is as follows (a code sketch follows the list below):
4.1 set the radii of the multi-scale spheres, R1, R2, R3, according to the point cloud density;
4.2 randomly select a point P that does not yet belong to any multi-scale sphere;
4.3 define all points within radius R1 of P as a new sphere;
4.4 repeat continuously, and after the whole point cloud has been covered, select a new radius and repeat the covering step;
4.5 screen out intersecting or covered spheres of the same radius, and build the multi-scale sphere sets of the three radii.
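A minimal sketch of this random sphere covering in Python (numpy) is given below. The point cloud is assumed to be an (N, 3) array; the function name, the example radii, and the use of seeding only from uncovered points as an approximation of the screening in 4.5 are illustrative assumptions, not details stated in the patent.

```python
import numpy as np

def cover_with_spheres(points: np.ndarray, radii=(0.5, 1.0, 2.0)):
    """Cover a point cloud with random spheres at several radii (steps 4.1-4.5).

    points : (N, 3) point cloud
    radii  : assumed multi-scale radii R1, R2, R3 chosen from the point density
    Returns a dict mapping each radius to a list of point-index arrays, one per sphere.
    """
    spheres_per_radius = {}
    for r in radii:                                    # 4.4: repeat the covering for each radius
        uncovered = np.ones(len(points), dtype=bool)
        spheres = []
        while uncovered.any():
            # 4.2: random point P not yet belonging to any sphere of this radius
            p_idx = np.random.choice(np.flatnonzero(uncovered))
            # 4.3: every point within radius r of P is defined as a new sphere
            dist = np.linalg.norm(points - points[p_idx], axis=1)
            members = np.flatnonzero(dist <= r)
            spheres.append(members)
            # seeding only from uncovered points approximates the screening in 4.5
            uncovered[members] = False
        spheres_per_radius[r] = spheres
    return spheres_per_radius
```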
The process of selecting a normalized local coordinate system for each sphere in step 3.2 is as follows (see the sketch after this list):
5.1 taking the centroid of the sphere as the origin, estimate the covariance matrix of the points in the sphere and set up the coordinate system of the super-point set by singular value decomposition;
5.2 the surface eigenvalues comprise two eigenvectors of similar, larger magnitude along which the points are mainly concentrated; the z-axis is the third eigenvector;
5.3 compute the average height of the discrete mirror-image slices of the sphere and insert it into a polar-coordinate histogram; the x-axis is the direction of the maximum bin.
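A sketch of the local frame construction for one sphere, under the reading given above (z-axis along the smallest eigenvector, x-axis toward the largest bin of the polar histogram of slice heights), might look as follows; the helper name and the number of histogram bins are assumptions.

```python
import numpy as np

def local_frame(sphere_pts: np.ndarray, n_bins: int = 36):
    """Build a normalized local coordinate system for one sphere (steps 5.1-5.3)."""
    # 5.1: centroid as origin, SVD of the estimated covariance of the points
    centroid = sphere_pts.mean(axis=0)
    centered = sphere_pts - centroid
    u, s, _ = np.linalg.svd(np.cov(centered.T))
    # 5.2: the points spread mostly along the two largest eigenvectors;
    #      the third (smallest) eigenvector is taken as the z-axis
    z_axis = u[:, 2]
    # 5.3: average height per angular slice in polar coordinates;
    #      the x-axis points toward the slice with the largest average height
    heights = centered @ z_axis
    angles = np.arctan2(centered @ u[:, 1], centered @ u[:, 0])
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    slice_of = np.clip(np.digitize(angles, edges) - 1, 0, n_bins - 1)
    mean_h = np.array([heights[slice_of == b].mean() if np.any(slice_of == b) else -np.inf
                       for b in range(n_bins)])
    theta = 0.5 * (edges[mean_h.argmax()] + edges[mean_h.argmax() + 1])
    x_axis = np.cos(theta) * u[:, 0] + np.sin(theta) * u[:, 1]
    y_axis = np.cross(z_axis, x_axis)
    return centroid, np.stack([x_axis, y_axis, z_axis], axis=1)  # columns are the axes
```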
The process of projecting the data of each multi-scale sphere onto a two-dimensional map in step 3.3 is as follows (see the sketch after this list):
6.1 convert the point cloud data inside the sphere into a square discrete image of side length d;
6.2 project the Z-axis height of each point onto the depth map at its corresponding pixel;
6.3 scale the sphere to a square image of side length d1;
6.4 remove the curved edge of the sphere in the depth map by scaling the image to a square depth map of side length d.
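A numpy sketch of this projection (steps 6.1 to 6.4) follows; the side length d, the handling of multiple points per pixel, and the treatment of the curved edge are assumptions made for illustration.

```python
import numpy as np

def sphere_to_depth_map(local_pts: np.ndarray, radius: float, d: int = 32) -> np.ndarray:
    """Project points expressed in the sphere's local frame onto a d x d depth map.

    6.1: discretise the x-y footprint of the sphere into a d x d square grid
    6.2: write the z height of each point into its pixel, keeping the highest point
    6.3/6.4: using the sphere diameter as the image extent approximates the later
             rescaling that removes the curved edge of the sphere
    """
    depth = np.zeros((d, d), dtype=np.float32)
    # map x, y from [-radius, radius] to pixel indices [0, d - 1]
    ij = np.clip(((local_pts[:, :2] + radius) / (2 * radius) * (d - 1)).astype(int), 0, d - 1)
    for (i, j), z in zip(ij, local_pts[:, 2]):
        depth[j, i] = max(depth[j, i], z)
    return depth
```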
The process of saliency detection and screening on the depth maps of the multi-scale spheres in step 3.4 is as follows (see the sketch after this list):
7.1 density detection: spheres containing fewer than N points are filtered out;
7.2 geometric property detection: super-points that reflect flat surfaces, i.e. super-points of low height, are filtered out;
7.3 saliency detection: each super-point depth map is reshaped into a 'depth vector' of length d*d;
principal component analysis (PCA) is applied to the set of depth vectors, and super-point sets represented by only the first three principal vectors are filtered out.
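The three filters might be realised as in the sketch below. The numeric thresholds, and the reading of the PCA criterion as "discard super-points whose depth vector is explained almost entirely by the first three principal components", are assumptions; the patent states the criteria but not their parameter values.

```python
import numpy as np

def filter_superpoints(spheres, depth_maps, min_points=50, min_height=0.2):
    """Keep only salient super-points (steps 7.1-7.3).

    spheres    : list of point-index arrays, one per super-point
    depth_maps : list of d x d depth maps, one per super-point
    Returns the indices of the super-points that survive all three filters.
    """
    # 7.3: reshape every depth map into a "depth vector" of length d*d and run PCA
    vectors = np.stack([m.reshape(-1) for m in depth_maps]).astype(np.float64)
    centered = vectors - vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj3 = centered @ vt[:3].T                       # projection on the first three components
    keep = []
    for idx, (members, dmap) in enumerate(zip(spheres, depth_maps)):
        if len(members) < min_points:                 # 7.1: too few points
            continue
        if dmap.max() - dmap.min() < min_height:      # 7.2: essentially flat surface
            continue
        # 7.3 (assumed reading): non-salient if the first three principal components
        # already explain almost all of this depth vector
        total = np.sum(centered[idx] ** 2) + 1e-9
        if np.sum(proj3[idx] ** 2) / total > 0.99:
            continue
        keep.append(idx)
    return keep
```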
In step 3.5, the dimensionality of the multi-scale spheres is reduced by a deep neural network autoencoder (see the sketch after the following description):
8.1 deep learning neural networks can achieve state-of-the-art image compression and are divided into an encoder and a decoder. The encoder starts from the input layer and connects to hidden layers, gradually decreasing the dimension until the required compact dimension is reached; the decoder starts from the compact representation of the data and gradually increases the hidden layer dimension until the output layer is the same size as the input layer.
8.2 the dimensions of the deep learning network are as follows: 1032 (input), [1032, 128] (L1), [128, 10] (L2), [10, 128] (L3), [128, 1032] (L4), 1032 (output). The network structure is first pre-trained with depth maps of 100,000 super-point sets; training stops when the output structure reflects the approximate geometric properties of the super-point sets.
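The patent builds the encoder in a Caffe environment on the Drive PX2; purely as an illustration, an autoencoder with the stated layer sizes (1032-128-10-128-1032), sigmoid activations and dropout on the input can be written in PyTorch as below. The dropout rate, optimiser, learning rate and the random placeholder batch are assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class DepthMapAutoencoder(nn.Module):
    """Deep auto-encoder with the layer sizes given in step 8.2 (1032-128-10-128-1032)."""
    def __init__(self, drop: float = 0.2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Dropout(drop),                          # dropout applied to the input layer
            nn.Linear(1032, 128), nn.Sigmoid(),        # L1
            nn.Linear(128, 10), nn.Sigmoid(),          # L2: 10-dimensional compact code
        )
        self.decoder = nn.Sequential(
            nn.Linear(10, 128), nn.Sigmoid(),          # L3
            nn.Linear(128, 1032),                      # L4: back to the input size
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Illustrative pre-training loop (the patent pre-trains on 100,000 super-point depth maps)
model = DepthMapAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
depth_vectors = torch.rand(256, 1032)                  # placeholder batch of flattened depth maps
for _ in range(10):                                    # a few epochs, purely for illustration
    recon, _ = model(depth_vectors)
    loss = loss_fn(recon, depth_vectors)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```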
In step 3.6, the resulting descriptors are matched:
The vector output by the deep learning network represents the feature vector of a point cloud sphere; the next step is to select similar super-point sets to form candidate pairs for matching. Matching is performed on the vector sets: according to the computed Euclidean distances, the vector set of the small-scale local point cloud is matched against the super-point set of the large-scale global point cloud (a code sketch follows).
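Matching the encoded descriptors by Euclidean distance can be done with a k-d tree, for example with scipy as sketched below; the function name is illustrative and k is a free parameter (the detailed embodiment later uses K = 3).

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(local_desc: np.ndarray, global_desc: np.ndarray, k: int = 3):
    """Pair each local super-point descriptor with its k nearest global descriptors
    by Euclidean distance (step 3.6).

    local_desc  : (m, 10) codes of the small-scale local point cloud
    global_desc : (n, 10) codes of the large-scale global point cloud
    Returns (distances, indices), both of shape (m, k).
    """
    tree = cKDTree(global_desc)
    distances, indices = tree.query(local_desc, k=k)
    return distances, indices
```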
In step 3.7, coarse registration is performed by local search:
The vector set of the small-scale local point cloud and the super-point set of the large-scale global point cloud are paired and coarsely registered by local search; the region of the large-scale global point cloud whose Euclidean distance to the vectors of the small-scale local point cloud is below a certain threshold is taken as the point cloud to be registered, and its position coordinates, together with the translation and rotation matrices, are recorded (a code sketch follows).
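One plausible way to turn the thresholded matches into the recorded position coordinates and rotation and translation matrices is the standard SVD (Kabsch) construction over the matched super-point centroids, sketched below; the distance threshold and helper names are assumptions.

```python
import numpy as np

def filter_matches(nn_dist: np.ndarray, nn_idx: np.ndarray, threshold: float = 0.5):
    """Keep only super-point matches whose descriptor distance is below a threshold.

    nn_dist, nn_idx : distance and index of the nearest global descriptor per local
    super-point; the threshold value is an assumption, the patent leaves it open."""
    mask = nn_dist < threshold
    return np.flatnonzero(mask), nn_idx[mask]

def estimate_rigid_transform(local_centroids: np.ndarray, global_centroids: np.ndarray):
    """Least-squares rotation R and translation t mapping matched local super-point
    centroids onto their global counterparts (one candidate coarse transform)."""
    mu_l = local_centroids.mean(axis=0)
    mu_g = global_centroids.mean(axis=0)
    h = (local_centroids - mu_l).T @ (global_centroids - mu_g)
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:               # guard against reflections
        vt[-1, :] *= -1
        r = vt.T @ u.T
    t = mu_g - r @ mu_l
    return r, t
```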
Beneficial effects
In the unmanned vehicle assisted positioning method based on point cloud data registration provided by the invention, the unmanned vehicle first needs to drive through the specified outdoor scene in advance to obtain the global point cloud data map. While the unmanned vehicle is driving, a small-scale local point cloud is obtained by lidar scanning of the current location; by registering the global point cloud data against the local small-scale point cloud data, the translation and rotation matrices of the registration and the position of the small-scale point cloud within the large-scale point cloud are obtained, and the location of the unmanned vehicle is then computed. The point cloud registration of the invention uses sets of multi-scale spheres together with deep auto-encoding to achieve coarse registration of the outdoor point clouds; when the registration error falls below a certain threshold, fine registration of the point clouds is achieved with the ICP algorithm. The invention can assist GNSS signals to position the unmanned vehicle more accurately, and furthermore has the advantages of real-time operation and a small amount of computation.
Since this technique does not need global positioning systems such as GPS and relies only on laser point cloud data, location information can be obtained in real time. As an assisted positioning system, this technique also improves the accuracy and reliability of unmanned vehicle positioning.
Brief description of the drawings
Fig. 1 is a schematic diagram of the assisted positioning scene of the present invention;
Fig. 2 is a flow chart of the point cloud registration process of the present invention;
Fig. 3 is the neural network diagram of the depth map encoding of the present invention;
Fig. 4 is the point cloud depth map of a plane of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical solution and advantages of the present invention more clearly understood, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
The present invention uses point cloud registration technology: by registering the large-scale outdoor point cloud against the local small-scale point cloud, the position coordinates and the rotation and translation matrices of the small-scale point cloud within the large-scale point cloud are obtained, from which the precise position of the unmanned vehicle is further computed. Before registration, the unmanned vehicle needs to acquire the global large-scale point cloud data of the outdoor scene in advance and build a high-precision point cloud map. After the high-precision point cloud map has been obtained, the unmanned vehicle can, while driving, acquire the small-scale point cloud data around it in real time; the registration algorithm then solves for the position of the small-scale point cloud within the large-scale global point cloud, together with the translation and rotation matrices, thereby obtaining the location of the unmanned vehicle.
The algorithm platform of the invention is an NVIDIA Drive PX2. A Caffe environment is built on the platform, with CUDA and cuDNN accelerating the processing so that a real-time positioning effect is achieved. An unmanned vehicle assisted positioning method based on point cloud data registration comprises the steps of:
Step 1, the unmanned vehicle acquires in advance a panoramic point cloud data map of a specified outdoor scene, i.e. large-scale global point cloud data;
Step 2, while the unmanned vehicle is driving, the point cloud data of its current location, i.e. a small-scale local point cloud, is scanned by lidar;
Step 3, using a point cloud data registration algorithm, the small-scale local point cloud is registered against the large-scale global point cloud, and the registered position coordinates of the small-scale local point cloud in the large-scale global point cloud, together with the rotation and translation matrices, are obtained;
Step 4, from the registered position coordinates and the rotation and translation matrices, the position of the unmanned vehicle in the large-scale global point cloud map is computed.
Referring to Fig. 1 to Fig. 4, in the coarse registration stage of the point cloud registration, multi-scale spheres replace the traditional key points and local linear structures as the most basic matching units; the local three-dimensional geometry is encoded by a deep neural encoder, and coarse registration then yields the position coordinates in the large-scale global point cloud that may register with the small-scale local point cloud, together with the rotation and translation matrices. The coarse registration of the point cloud registration is realised by the following process, comprising the steps of:
1. divide super-points out of the point cloud with the new random-sphere covering algorithm;
2. select a normalized local coordinate system for each super-point;
3. project the super-point data onto a two-dimensional map;
4. perform saliency detection and screening on the super-points;
5. reduce the dimensionality with a deep neural network autoencoder;
6. match the resulting descriptors;
7. perform coarse registration by local search;
8. adjust precisely with the iterative closest point method.
In the first step, covering the small-scale local point cloud and the large-scale global point cloud with multi-scale random spheres comprises: setting the radii of the multi-scale spheres, R1, R2, R3, according to the point cloud density; randomly selecting a point P that does not yet belong to any multi-scale sphere; defining all points within radius R1 of P as a new sphere; repeating continuously, and after the whole point cloud has been covered, selecting a new radius and repeating the covering step; and screening out intersecting or covered spheres of the same radius to build the multi-scale sphere sets of the three radii.
In the second step, a normalized local coordinate system is selected for each super-point. Taking the centroid of the super-point set as the origin, the covariance matrix of the points within the super-point set is estimated and the coordinate system of the super-point set is set up by singular value decomposition. The surface eigenvalues comprise two eigenvectors of similar, larger magnitude along which the points are mainly concentrated; the z-axis is the third eigenvector. The average height of the discrete mirror-image slices of the super-point set is computed and inserted into a polar-coordinate histogram; the x-axis is the direction of the maximum bin.
In the third step, the super-point sets could be matched once the local coordinate systems have been established, but the result would be affected by point density and random noise, so the data is reduced in dimension. The positions of the contiguous points are converted into a discrete image of size [dim1, dim1]; the Z-axis height of each point is projected onto the depth map at its corresponding pixel; the super-point set is scaled to an image of size dim1; and, to remove the curved edge of the super-point set in the depth map, the image is scaled to size [dim2, dim2]. The super-points are thus projected to depth maps. Next, maximum filtering and mean filtering are applied to the image to reduce the influence of noise and differing densities (see the sketch below). The rebuilt image effectively preserves the geometric characteristics and quality of the original super-point set while still fully covering unknown sparse regions.
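The maximum and mean filtering of the depth maps mentioned above could be implemented, for example, with scipy.ndimage as in the short sketch below; the kernel sizes are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def smooth_depth_map(depth: np.ndarray, max_size: int = 3, mean_size: int = 3) -> np.ndarray:
    """Apply a maximum filter followed by a mean filter to a super-point depth map,
    reducing the influence of noise and varying point density."""
    filled = maximum_filter(depth, size=max_size)     # fill small gaps left by sparse sampling
    return uniform_filter(filled, size=mean_size)     # smooth the remaining noise
```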
The fourth step is saliency detection and screening. In this step, irrelevant super-points are filtered out according to three criteria: density, geometric property and saliency. (1) Density detection does not only compare absolute values: super-points with fewer than Nd points are filtered out, and super-point sets possessing relatively few points compared with their K nearest super-point sets are also filtered out. (2) Geometric property detection: super-points that reflect flat surfaces, i.e. super-points of low height, are filtered out. (3) Saliency detection: each super-point depth map is reshaped into a 'depth vector' of length dim2 × dim2.
In the fifth step, a deep auto-encoder (DAE) dimensionality reduction network is used. DAE neural networks can achieve state-of-the-art image compression, and here this technique is used to obtain a compact representation of the 2.5-dimensional super-point geometry in the depth maps. The encoder and the decoder are the two main components of the DAE neural network: the encoder starts from the input layer and connects to hidden layers, gradually decreasing the dimension until the required compact dimension is reached; the decoder starts from the compact representation of the data and gradually increases the hidden layer dimension until the output layer is the same size as the input layer.
The DAE network consists of four fully connected hidden layers, with a sigmoid non-linear activation function applied between the hidden layers and a dropout (DO) layer applied to the input layer. The fourth and first hidden layers, and the third and second hidden layers, each form mirror-symmetric pairs of identical size. The dimensions of the structure are: 1032 (input), [1032, 128] (L1), [128, 10] (L2), [10, 128] (L3), [128, 1032] (L4), 1032 (output). The DAE network structure is first pre-trained with depth maps of 100,000 super-point sets.
When the DAE network is applied, a depth map is first input and reduced to a 10-dimensional compact representation, which is then reconstructed to the original dimension by the decoder. Colour indicates the height in the depth map: blue represents height 0 and red represents the maximum height. Although the reconstructed image does not correspond exactly to the input image, it reflects the approximate geometric properties of the super-point set.
In the sixth step, after the SAF vector output by the DAE network has been obtained for each super-point set, similar super-point sets are selected to form candidate pairs for matching. According to the Euclidean distance between the SAF features, each super-point set of the local point cloud is matched with its K nearest super-point sets in the global point cloud, with K equal to 3 here. When the distance to the (i+1)-th nearest super-point set is significantly greater than that to the i-th nearest one, candidates i+1 to K are screened out. The vectors of the reduced-dimension spheres are paired into vector sets, and matching is performed on the vector sets: according to the computed Euclidean distances, the vector set of the small-scale local point cloud is matched against the super-point set of the large-scale global point cloud.
The seventh step is coarse registration by local search. Here the multi-scale spheres and deep-learning auto-encoding are combined for coarse registration: by measuring the physical average distance between the transformed points of the local point cloud and their nearest super-point sets in the global point cloud, 6 candidate pairs are iteratively selected, the transformation is computed and a consistency check is performed; 10,000 objects are tested at random. In non-overlapping regions of the local point cloud, the 5 best point cloud transformations (T1 ... T5) are recorded rather than only the single best-scoring coarse transformation; after the subsequent precise registration, the best-performing point cloud is kept as the final result. The vector set of the small-scale local point cloud and the super-point set of the large-scale global point cloud are paired and coarsely registered by local search, the region of the large-scale global point cloud whose Euclidean distance to the vectors of the small-scale local point cloud is below a certain threshold is taken as the point cloud to be registered, and its position coordinates, together with the translation and rotation matrices, are recorded.
In the eighth step, fine adjustment is performed with the iterative closest point (ICP) method, initialised by the 5 point cloud transformations chosen in the previous step; the output transformation with the smallest ICP loss is taken as the result of the method. The fine registration stage uses the ICP algorithm to further optimise the translation and rotation matrices of the coarse registration and obtain more accurate translation and rotation matrices (a minimal ICP sketch is given below).
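A minimal point-to-point ICP refinement, initialised from a coarse transform, is sketched below with numpy and scipy; the iteration count and convergence tolerance are assumptions, and a production system would operate on the registered sub-map rather than the full global cloud.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(local_pts, global_pts, r_init, t_init, iters=30, tol=1e-4):
    """Point-to-point ICP refinement of a coarse transform.

    local_pts  : (M, 3) small-scale local point cloud
    global_pts : (N, 3) points of the global (sub-)map around the coarse registration
    r_init, t_init : coarse rotation matrix and translation vector
    Returns the refined rotation, translation and the final mean residual.
    """
    tree = cKDTree(global_pts)
    r, t = r_init.copy(), t_init.copy()
    err = np.inf
    for _ in range(iters):
        moved = local_pts @ r.T + t                  # apply the current transform
        dist, idx = tree.query(moved)                # nearest global point per local point
        matched = global_pts[idx]
        # re-estimate the full rigid transform from the correspondences (Kabsch)
        mu_l, mu_m = local_pts.mean(axis=0), matched.mean(axis=0)
        h = (local_pts - mu_l).T @ (matched - mu_m)
        u, _, vt = np.linalg.svd(h)
        r = vt.T @ u.T
        if np.linalg.det(r) < 0:                     # guard against reflections
            vt[-1, :] *= -1
            r = vt.T @ u.T
        t = mu_m - r @ mu_l
        new_err = dist.mean()
        if abs(err - new_err) < tol:                 # stop when the residual stops improving
            err = new_err
            break
        err = new_err
    return r, t, err
```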
Fine adjustment is used in this step because the results of coarse registration are not necessarily all correct registration results. In the ninth step, the location of the unmanned vehicle is obtained from the registration result by querying the point cloud map: using the registration translation and rotation matrices obtained by fine registration and the coordinates of the registered point cloud in the large-scale global point cloud, the position of the unmanned vehicle in the large-scale global point cloud map is computed (a one-line sketch follows).
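Given the final rotation and translation matrices, the vehicle position in the map frame is simply the transformed sensor origin; a minimal sketch (assuming a zero lidar offset in the vehicle frame) follows.

```python
import numpy as np

def vehicle_position(r: np.ndarray, t: np.ndarray, sensor_offset=None) -> np.ndarray:
    """Position of the unmanned vehicle in the global point cloud map: the lidar
    origin (or a known sensor offset in the vehicle frame) mapped into the map
    frame with the registration rotation r and translation t."""
    if sensor_offset is None:
        sensor_offset = np.zeros(3)      # assume the lidar sits at the vehicle origin
    return r @ sensor_offset + t
```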
In this example, real-time positioning was achieved for the unmanned vehicle in a non-repeating outdoor environment. The unmanned vehicle in this example achieved automatic positioning on the campus of Tianjin University; the positioning was correct and the precision was within 1 metre.
In the unmanned vehicle assisted positioning method based on point cloud data registration provided by the invention, the unmanned vehicle first needs to drive through the specified outdoor scene in advance to obtain the global point cloud data map. While the unmanned vehicle is driving, a small-scale local point cloud is obtained by lidar scanning of the current location; by registering the global point cloud data against the local small-scale point cloud data, the translation and rotation matrices of the registration and the position of the small-scale point cloud within the large-scale point cloud are obtained, and the location of the unmanned vehicle is then computed. Since this technique does not need global positioning systems such as GPS and relies only on laser point cloud data, location information can be obtained in real time. As an assisted positioning system, this technique also improves the accuracy and reliability of unmanned vehicle positioning. Tests on the campus show that the invention meets the real-time requirement while achieving very high precision, with the error controlled within 1 metre; accurate positioning was achieved in repeated tests.
The above is only a preferred embodiment of the present invention and is not intended to limit the invention; any modifications, equivalent replacements and improvements made within the spirit and principles of the invention shall be included within the scope of protection of the present invention.

Claims (10)

1. An unmanned vehicle assisted positioning method based on point cloud data registration, characterized by comprising the steps of: step 1, the unmanned vehicle acquires in advance a panoramic point cloud data map of a specified outdoor scene, i.e. large-scale global point cloud data;
Step 2, while the unmanned vehicle is driving, the point cloud data of its current location, i.e. a small-scale local point cloud, is scanned by lidar;
Step 3, using a point cloud data registration algorithm, the small-scale local point cloud is registered against the large-scale global point cloud, and the registered position coordinates of the small-scale local point cloud in the large-scale global point cloud, together with the rotation and translation matrices, are obtained;
Step 4, from the registered position coordinates and the rotation and translation matrices, the position of the unmanned vehicle in the large-scale global point cloud map is computed.
2. The unmanned vehicle assisted positioning method based on point cloud data registration according to claim 1, characterized in that the registration of the small-scale local point cloud against the large-scale global point cloud in step 3 comprises:
Coarse registration: multi-scale spheres replace the traditional key points and local linear structures as the most basic matching units; the local three-dimensional geometry is encoded by a deep neural encoder; coarse registration yields the position coordinates in the large-scale global point cloud that may register with the small-scale local point cloud, together with the rotation and translation matrices;
Fine registration: on the basis of the position coordinates and the rotation and translation matrices obtained by coarse registration, the ICP fine registration algorithm is used to achieve precise registration of the two point clouds.
3. The unmanned vehicle assisted positioning method based on point cloud data registration according to claim 2, characterized in that in the coarse registration stage of the point cloud registration, multi-scale spheres replace the traditional key points and local linear structures as the most basic matching units; the local three-dimensional geometry is encoded by a deep neural encoder, and coarse registration then yields the position coordinates in the large-scale global point cloud that may register with the small-scale local point cloud, together with the rotation and translation matrices; the coarse registration process of the point cloud registration is as follows:
3.1 cover the small-scale local point cloud and the large-scale global point cloud with multi-scale random spheres;
3.2 select a normalized local coordinate system for each multi-scale sphere;
3.3 project the data of each multi-scale sphere onto a two-dimensional map;
3.4 perform saliency detection and screening on the multi-scale spheres;
3.5 reduce the dimensionality of each multi-scale sphere with a deep neural network autoencoder;
3.6 match the resulting descriptors;
3.7 perform coarse registration by local search.
4. The unmanned vehicle assisted positioning method based on point cloud data registration according to claim 3, characterized in that the process of covering the small-scale local point cloud and the large-scale global point cloud with multi-scale random spheres in step 3.1 is:
4.1 set the radii of the multi-scale spheres, R1, R2, R3, according to the point cloud density;
4.2 randomly select a point P that does not yet belong to any multi-scale sphere;
4.3 define all points within radius R1 of P as a new sphere;
4.4 repeat continuously, and after the whole point cloud has been covered, select a new radius and repeat the covering step;
4.5 screen out intersecting or covered spheres of the same radius, and build the multi-scale sphere sets of the three radii.
5. The unmanned vehicle assisted positioning method based on point cloud data registration according to claim 3, characterized in that the process of selecting a normalized local coordinate system for each sphere in step 3.2 is:
5.1 taking the centroid of the sphere as the origin, estimate the covariance matrix of the points in the sphere and set up the coordinate system of the super-point set by singular value decomposition;
5.2 the surface eigenvalues comprise two eigenvectors of similar, larger magnitude along which the points are mainly concentrated; the z-axis is the third eigenvector;
5.3 compute the average height of the discrete mirror-image slices of the sphere and insert it into a polar-coordinate histogram; the x-axis is the direction of the maximum bin.
6. The unmanned vehicle assisted positioning method based on point cloud data registration according to claim 3, characterized in that the process of projecting the data of each multi-scale sphere onto a two-dimensional map in step 3.3 is:
6.1 convert the point cloud data inside the sphere into a square discrete image of side length d;
6.2 project the Z-axis height of each point onto the depth map at its corresponding pixel;
6.3 scale the sphere to a square image of side length d1;
6.4 remove the curved edge of the sphere in the depth map by scaling the image to a square depth map of side length d.
7. The unmanned vehicle assisted positioning method based on point cloud data registration according to claim 3, characterized in that the process of saliency detection and screening on the depth maps of the multi-scale spheres in step 3.4 is:
7.1 density detection: spheres containing fewer than N points are filtered out;
7.2 geometric property detection: super-points that reflect flat surfaces, i.e. super-points of low height, are filtered out;
7.3 saliency detection: each super-point depth map is reshaped into a 'depth vector' of length d*d;
principal component analysis (PCA) is applied to the set of depth vectors, and super-point sets represented by only the first three principal vectors are filtered out.
8. The unmanned vehicle assisted positioning method based on point cloud data registration according to claim 3, characterized in that the process of reducing the dimensionality of the multi-scale spheres with the deep neural network autoencoder in step 3.5 is:
8.1 deep learning neural networks can achieve state-of-the-art image compression and are divided into an encoder and a decoder; the encoder starts from the input layer and connects to hidden layers, gradually decreasing the dimension until the required compact dimension is reached; the decoder starts from the compact representation of the data and gradually increases the hidden layer dimension until the output layer is the same size as the input layer;
8.2 the dimensions of the deep learning network are as follows: 1032 (input), [1032, 128] (L1), [128, 10] (L2), [10, 128] (L3), [128, 1032] (L4), 1032 (output); the network structure is first pre-trained with depth maps of 100,000 super-point sets; training stops when the output structure reflects the approximate geometric properties of the super-point sets.
9. The unmanned vehicle assisted positioning method based on point cloud data registration according to claim 3, characterized in that the process of matching the descriptors in step 3.6 is: the vector output by the deep learning network represents the feature vector of a point cloud sphere; similar super-point sets are then selected to form candidate pairs for matching; matching is performed on the vector sets, and according to the computed Euclidean distances the vector set of the small-scale local point cloud is matched against the super-point set of the large-scale global point cloud.
10. The unmanned vehicle assisted positioning method based on point cloud data registration according to claim 3, characterized in that in step 3.7 coarse registration is performed by local search:
the vector set of the small-scale local point cloud and the super-point set of the large-scale global point cloud are paired and coarsely registered by local search, the region of the large-scale global point cloud whose Euclidean distance to the vectors of the small-scale local point cloud is below a certain threshold is taken as the point cloud to be registered, and its position coordinates, together with the translation and rotation matrices, are recorded.
CN201910020386.5A 2019-01-09 2019-01-09 Unmanned vehicle auxiliary positioning method based on point cloud data registration Active CN109887028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910020386.5A CN109887028B (en) 2019-01-09 2019-01-09 Unmanned vehicle auxiliary positioning method based on point cloud data registration

Publications (2)

Publication Number Publication Date
CN109887028A true CN109887028A (en) 2019-06-14
CN109887028B CN109887028B (en) 2023-02-03

Family

ID=66925725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910020386.5A Active CN109887028B (en) 2019-01-09 2019-01-09 Unmanned vehicle auxiliary positioning method based on point cloud data registration

Country Status (1)

Country Link
CN (1) CN109887028B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426165A (en) * 2013-06-28 2013-12-04 吴立新 Precise registration method of ground laser-point clouds and unmanned aerial vehicle image reconstruction point clouds
CN104463894A (en) * 2014-12-26 2015-03-25 山东理工大学 Overall registering method for global optimization of multi-view three-dimensional laser point clouds
US20180075643A1 (en) * 2015-04-10 2018-03-15 The European Atomic Energy Community (Euratom), Represented By The European Commission Method and device for real-time mapping and localization
CN106296693A (en) * 2016-08-12 2017-01-04 浙江工业大学 Based on 3D point cloud FPFH feature real-time three-dimensional space-location method
US20180158200A1 (en) * 2016-12-07 2018-06-07 Hexagon Technology Center Gmbh Scanner vis
CN108226938A (en) * 2017-12-08 2018-06-29 华南理工大学 A kind of alignment system and method for AGV trolleies
CN109100730A (en) * 2018-05-18 2018-12-28 北京师范大学-香港浸会大学联合国际学院 A kind of fast run-up drawing method of more vehicle collaborations
CN108868268A (en) * 2018-06-05 2018-11-23 西安交通大学 Based on point to identity distance from the unmanned vehicle position and orientation estimation method being registrated with cross-correlation entropy

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
AHMED C.SIDIYA: "Robust Vehicle Environment Reconstruction from Point Clouds For Irregularity Detection", 《2017 IEEE INTELLIGENT VEHICLES SYMPOSIUM》 *
李亮: "基于高斯混合模型-地球移动", 《华中科技大学学报(自然科学版)》 *
赵夫群等: "基于局部特征的点云配准算法", 《图学学报》 *
邹敏: "基于ICP算法的多视点云配准方法", 《科技创新与生产力》 *
钟莹: "基于改进ICP算法的点云自动配准技术", 《控制工程》 *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287873A (en) * 2019-06-25 2019-09-27 清华大学深圳研究生院 Noncooperative target pose measuring method, system and terminal device based on deep neural network
CN110287873B (en) * 2019-06-25 2021-06-29 清华大学深圳研究生院 Non-cooperative target pose measurement method and system based on deep neural network and terminal equipment
CN112393735B (en) * 2019-08-15 2023-05-30 纳恩博(北京)科技有限公司 Positioning method and device, storage medium and electronic device
CN112393735A (en) * 2019-08-15 2021-02-23 纳恩博(北京)科技有限公司 Positioning method and device, storage medium and electronic device
WO2021046829A1 (en) * 2019-09-12 2021-03-18 华为技术有限公司 Positioning method, device and system
CN112639882A (en) * 2019-09-12 2021-04-09 华为技术有限公司 Positioning method, device and system
CN110660103A (en) * 2019-09-17 2020-01-07 北京三快在线科技有限公司 Unmanned vehicle positioning method and device
CN111009002A (en) * 2019-10-16 2020-04-14 贝壳技术有限公司 Point cloud registration detection method and device, electronic equipment and storage medium
CN110827302A (en) * 2019-11-14 2020-02-21 中南大学 Point cloud target extraction method and device based on depth map convolutional network
WO2021109167A1 (en) * 2019-12-06 2021-06-10 苏州艾吉威机器人有限公司 Three-dimensional laser mapping method and system
CN111220993A (en) * 2020-01-14 2020-06-02 长沙智能驾驶研究院有限公司 Target scene positioning method and device, computer equipment and storage medium
WO2021143778A1 (en) * 2020-01-14 2021-07-22 长沙智能驾驶研究院有限公司 Positioning method based on laser radar
CN111220993B (en) * 2020-01-14 2020-07-28 长沙智能驾驶研究院有限公司 Target scene positioning method and device, computer equipment and storage medium
CN111553343A (en) * 2020-04-01 2020-08-18 青岛联合创智科技有限公司 Method for extracting laser point cloud characteristics
CN111553343B (en) * 2020-04-01 2023-04-25 青岛联合创智科技有限公司 Extraction method of laser point cloud characteristics
CN111798453A (en) * 2020-07-06 2020-10-20 博康智能信息技术有限公司 Point cloud registration method and system for unmanned auxiliary positioning
CN112835085A (en) * 2020-07-09 2021-05-25 北京京东乾石科技有限公司 Method and device for determining vehicle position
CN112835086A (en) * 2020-07-09 2021-05-25 北京京东乾石科技有限公司 Method and device for determining vehicle position
CN111812658A (en) * 2020-07-09 2020-10-23 北京京东乾石科技有限公司 Position determination method, device, system and computer readable storage medium
WO2022007602A1 (en) * 2020-07-09 2022-01-13 北京京东乾石科技有限公司 Method and apparatus for determining location of vehicle
CN112835085B (en) * 2020-07-09 2022-04-12 北京京东乾石科技有限公司 Method and device for determining vehicle position
CN111812658B (en) * 2020-07-09 2021-11-02 北京京东乾石科技有限公司 Position determination method, device, system and computer readable storage medium
WO2022007504A1 (en) * 2020-07-09 2022-01-13 北京京东乾石科技有限公司 Location determination method, device, and system, and computer readable storage medium
CN112835086B (en) * 2020-07-09 2022-01-28 北京京东乾石科技有限公司 Method and device for determining vehicle position
CN111968179A (en) * 2020-08-13 2020-11-20 厦门大学 Method for positioning automatic driving vehicle in underground parking lot
CN111968179B (en) * 2020-08-13 2022-07-19 厦门大学 Automatic driving vehicle positioning method for underground parking garage
CN112581515A (en) * 2020-11-13 2021-03-30 上海交通大学 Outdoor scene point cloud registration method based on graph neural network
CN112762824B (en) * 2020-12-24 2022-04-22 中南大学 Unmanned vehicle positioning method and system
CN112762824A (en) * 2020-12-24 2021-05-07 中南大学 Unmanned vehicle positioning method and system
CN113192197A (en) * 2021-05-24 2021-07-30 北京京东乾石科技有限公司 Method, device, equipment and storage medium for constructing global point cloud map
CN113192197B (en) * 2021-05-24 2024-04-05 北京京东乾石科技有限公司 Global point cloud map construction method, device, equipment and storage medium
CN115248430A (en) * 2021-09-23 2022-10-28 上海仙途智能科技有限公司 Target object positioning method, device, terminal and medium
CN115248430B (en) * 2021-09-23 2023-08-25 上海仙途智能科技有限公司 Target object positioning method, device, terminal and medium
CN115235477A (en) * 2021-11-30 2022-10-25 上海仙途智能科技有限公司 Vehicle positioning inspection method and device, storage medium and equipment
CN117095033A (en) * 2023-07-25 2023-11-21 重庆邮电大学 Multi-mode point cloud registration method based on image and geometric information guidance
CN117095033B (en) * 2023-07-25 2024-05-24 重庆邮电大学 Multi-mode point cloud registration method based on image and geometric information guidance

Also Published As

Publication number Publication date
CN109887028B (en) 2023-02-03

Similar Documents

Publication Publication Date Title
CN109887028A (en) A kind of unmanned vehicle assisted location method based on cloud data registration
CN110533722B (en) Robot rapid repositioning method and system based on visual dictionary
Riegler et al. Octnetfusion: Learning depth fusion from data
CN110415342B (en) Three-dimensional point cloud reconstruction device and method based on multi-fusion sensor
Elbaz et al. 3D point cloud registration for localization using a deep neural network auto-encoder
Yang et al. Go-ICP: A globally optimal solution to 3D ICP point-set registration
Lukács et al. Faithful least-squares fitting of spheres, cylinders, cones and tori for reliable segmentation
CN106651942A (en) Three-dimensional rotation and motion detecting and rotation axis positioning method based on feature points
Yu et al. Robust robot pose estimation for challenging scenes with an RGB-D camera
Tao et al. A pipeline for 3-D object recognition based on local shape description in cluttered scenes
CN115937404A (en) Grating three-dimensional reconstruction system and method based on multi-view shadow segmentation
Jiang et al. Learned local features for structure from motion of uav images: A comparative evaluation
Bergamasco et al. A robust multi-camera 3D ellipse fitting for contactless measurements
CN114396877A (en) Intelligent three-dimensional displacement field and strain field measurement method oriented to material mechanical properties
Sánchez-Belenguer et al. RISE: A novel indoor visual place recogniser
Loaiza et al. Matching segments in stereoscopic vision
CN115761684A (en) AGV target recognition and attitude angle resolving method and system based on machine vision
CN115937520A (en) Point cloud moving target segmentation method based on semantic information guidance
Hyeon et al. KR-Net: A dependable visual kidnap recovery network for indoor spaces
CN115601423A (en) Edge enhancement-based round hole pose measurement method in binocular vision scene
Ramasinghe et al. Blended convolution and synthesis for efficient discrimination of 3D shapes
Dierenbach et al. Next-Best-View method based on consecutive evaluation of topological relations
Deng et al. Fusion Scan Context: A Global Descriptor Fusing Altitude, Intensity and Density for Place Recognition
Sales et al. 3D shape descriptor for objects recognition
Salih et al. Noise robustness analysis of point cloud descriptors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant