CN109934868B - Vehicle positioning method based on matching of three-dimensional point cloud and satellite map - Google Patents
- Publication number
- CN109934868B (application CN201910204082.4A)
- Authority
- CN
- China
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Landscapes
- Navigation (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention provides an autonomous vehicle positioning method based on matching a three-dimensional point cloud against a satellite map. This new positioning method takes environmental three-dimensional point cloud data acquired by an on-board sensor, combines it with the output of a vehicle dead-reckoning method, and uses a particle filter to localize the vehicle on a satellite map or aerial image, thereby obtaining the vehicle's global coordinates and heading angle; the result can serve as auxiliary positioning information for a GPS (global positioning system). For the particle weights of the particle filter, the method uses a neural network to compare the similarity between a grid image generated from the point cloud and a satellite image patch. It is compatible with a variety of sensors and dead-reckoning methods and has strong environmental adaptability.
Description
Technical Field
The invention relates to the technical field of automatic control, and in particular to an autonomous vehicle positioning method based on matching a three-dimensional point cloud with a satellite map.
Background
An unmanned vehicle is a mobile robot: a highly integrated intelligent system involving environment perception, positioning and navigation, task planning, machine learning, and related fields. In recent years, with the rise of technologies such as deep learning, unmanned-driving technology has advanced greatly and attracted wide attention.
Positioning is one of the key technologies of unmanned driving. A satellite positioning system can provide the vehicle's position, but owing to the urban canyon effect, its accuracy degrades, or positioning fails outright, when satellite signals are blocked or reflected by buildings. To increase the accuracy and reliability of the overall positioning system, a pre-built map can be used as a supplement to satellite positioning. Most mainstream methods construct such maps by having fleets of mapping vehicles traverse the area to be mapped, which demands considerable time, manpower, and material resources.
Disclosure of Invention
In view of this, the invention provides an autonomous vehicle positioning method based on matching of three-dimensional point cloud and satellite map, which can provide a new positioning method.
An autonomous vehicle positioning method for matching three-dimensional point cloud with a satellite map comprises the following steps:
step 1, pre-training a neural network, specifically: a satellite image patch and a grid map generated from a three-dimensional point cloud, both corresponding to the same area of the full map, are acquired as a pair of positive samples, whose expected value, namely the matching probability of the two, is 1; meanwhile, a satellite image patch and a point-cloud grid map corresponding to two different areas of the full map are acquired to form a pair of negative samples, whose expected value, namely the matching probability of the two, is 0; multiple positive pairs and negative pairs are acquired to train the neural network, whose output is the matching probability between a satellite image patch and a grid map;
step 2, initializing all particles of the particle filter to obtain initial values of each particle's position (x_i, y_i), heading ψ_i, and weight w_i;
step 3, updating the position, the course and the weight value of the particle according to the data of the dead reckoning module carried by the vehicle based on the position, the course and the initial weight value of the particle obtained in the step 2;
according to the updated position (x_i, y_i) of each particle, capturing from the map an image patch of the same size as the satellite image patches in step 1, with the particle's position as its geometric center; the angle between the captured patch's orientation and true north of the earth coordinate frame is the particle's heading angle ψ_i;
Step 4, constructing a top-down (bird's-eye-view) grid map from the three-dimensional point cloud obtained by the vehicle-mounted sensor;
step 5, pairing each image patch captured in step 3 with the grid map obtained in step 4 to form a sample pair, and inputting the pair into the neural network trained in step 1 to obtain the matching probability of the two; each particle corresponds to one matching probability;
for each particle, multiplying the corresponding matching probability by the normalized weight of the particle at the previous moment to obtain the weight of the particle at the current moment;
and step 6, according to the particle positions and headings updated in step 3 and the weights updated in step 5, computing the weighted sum of the positions and of the headings and averaging to obtain estimates of the vehicle's position and heading at the current moment, completing vehicle positioning.
Preferably, in step 2, the particle filter is initialized either by scattering particles randomly over the whole map, or by scattering them randomly near an initial vehicle position determined by satellite positioning or a similar method, to obtain initial values of each particle's position, heading, and weight on the map.
Preferably, in step 3, the dead reckoning module is a wheel type odometer, a visual odometer, a laser odometer or an inertial navigation system.
Preferably, when the dead reckoning module is a wheel odometer, the particle update is:

ψ_i ← ψ_i + Δψ
x_i ← x_i + Δs·cos ψ_i
y_i ← y_i + Δs·sin ψ_i

where Δs is the distance increment travelled by the vehicle from the previous moment to the current moment as measured by the odometer, and Δψ is the heading-angle increment.
Preferably, after the particle update, random noise is added to each particle according to a noise model of the odometer.
Preferably, after step 5 is performed and before step 6, the effective number of particles is calculated as

N_eff = 1 / Σ (w_i)², summed over i = 1 … n

where n denotes the number of particles. If N_eff is less than a set threshold T and w_max is greater than a set threshold P, the particles are resampled and step 6 is executed based on the resampled particles; otherwise, step 6 is executed directly. w_max denotes the maximum of all particle weights.
Preferably, the threshold T is set to 0.6n and the threshold P is set to 0.8.
Preferably, the particle resampling method is a structured sampling method.
The invention has the following beneficial effects:
the invention provides an autonomous vehicle positioning method based on matching of a three-dimensional point cloud and a satellite map, which can be used for positioning an autonomous vehicle based on matching of the three-dimensional point cloud and the satellite map, and is a new positioning method which utilizes environmental three-dimensional point cloud information obtained by a vehicle-mounted sensor, combines an output result of a vehicle dead reckoning method, utilizes a particle filter to realize positioning of the vehicle on the satellite map or an aerial image, further obtains a global coordinate and a course angle of the vehicle, can provide auxiliary positioning information for a GPS (global positioning system); the method compares the similarity of a grid image generated by point cloud and a satellite image block for the particle weight of the particle filter through a neural network, is compatible with various sensors and various dead reckoning methods, and has strong environmental adaptability.
Drawings
FIG. 1 is a flow chart of an autonomous vehicle positioning method based on matching of a three-dimensional point cloud and a satellite map.
Detailed Description
Addressing the shortcomings of the prior art, the invention observes that if on-board sensor data (e.g., from a lidar or camera) can be matched against a satellite map, the vehicle's position can be obtained conveniently without constructing a map in advance. The invention therefore provides an autonomous vehicle positioning method based on matching a three-dimensional point cloud with satellite or aerial imagery; the specific process, shown in Figure 1, is as follows:
Step 1, pre-train a neural network, specifically: a satellite image patch and a grid map generated from a three-dimensional point cloud, both corresponding to the same area of the full map, are acquired as a pair of positive samples, whose expected value, namely the matching probability of the two, is 1; satellite image patches and grid maps of multiple areas are acquired to obtain multiple such pairs. Meanwhile, a satellite image patch and a point-cloud grid map corresponding to two different areas of the full map are acquired to form a pair of negative samples, whose expected value, namely the matching probability of the two, is 0. The positive and negative pairs obtained in this way are used to train the neural network, whose output is the matching probability between a satellite image patch and a grid map.
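This sample construction can be sketched as follows (a minimal illustration; the helper name `make_training_pairs` and the choice of pairing each patch with the next area's grid for negatives are assumptions, not from the patent):

```python
import numpy as np

def make_training_pairs(patches, grids):
    """Build (satellite patch, point-cloud grid, label) training triples.

    patches[i] and grids[i] cover the same map area, so pairing them gives
    a positive sample (label 1.0); pairing a patch with a grid from a
    different area gives a negative sample (label 0.0).
    """
    pairs = []
    n = len(patches)
    for i in range(n):
        pairs.append((patches[i], grids[i], 1.0))   # same area -> positive
        j = (i + 1) % n                             # any other area
        pairs.append((patches[i], grids[j], 0.0))   # different areas -> negative
    return pairs
```

The network itself (e.g., any two-branch CNN ending in a sigmoid output) would then be trained on these triples with a binary cross-entropy loss; the patent does not fix a particular architecture.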
Step 2, initialize all particles of the particle filter, specifically: each particle of the particle filter stores a position (x_i, y_i), a heading ψ_i, and a weight w_i. Initialization may scatter particles randomly over the whole map, or randomly near an initial vehicle position determined by satellite positioning or a similar method, to obtain initial values of each particle's position, heading, and weight on the map;
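A minimal initialization sketch (function and parameter names are illustrative; each particle row stores x, y, heading ψ, and weight):

```python
import numpy as np

def init_particles(n, x_range, y_range, rng=None):
    """Scatter n particles uniformly over the map (or, alternatively, over
    a small region around a satellite-positioning fix); every particle
    starts with an equal weight 1/n."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(x_range[0], x_range[1], n)
    y = rng.uniform(y_range[0], y_range[1], n)
    psi = rng.uniform(-np.pi, np.pi, n)          # heading in radians
    w = np.full(n, 1.0 / n)                      # uniform initial weights
    return np.stack([x, y, psi, w], axis=1)      # shape (n, 4)
```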
Step 3, based on the particle positions, headings, and initial weights obtained in step 2, update each particle's position, heading, and weight according to the data of the dead-reckoning module carried by the vehicle. The dead-reckoning module may be a wheel odometer, a visual odometer, a laser odometer, an inertial navigation system, or another dead-reckoning device. Taking a wheel odometer as an example, the particles are updated as:

ψ_i ← ψ_i + Δψ
x_i ← x_i + Δs·cos ψ_i
y_i ← y_i + Δs·sin ψ_i

where Δs is the distance increment travelled by the vehicle from the previous moment to the current moment as measured by the odometer, and Δψ is the heading-angle increment. Random noise (e.g., Gaussian noise) is then added to each particle according to the odometer's noise model.
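The wheel-odometer propagation described above can be sketched as follows (a standard dead-reckoning form; the noise standard deviations `sigma_s` and `sigma_psi` are illustrative placeholders for the odometer noise model):

```python
import numpy as np

def propagate(particles, d_s, d_psi, sigma_s=0.05, sigma_psi=0.01, rng=None):
    """Dead-reckoning update: advance every particle by the measured travel
    increment d_s and heading increment d_psi, adding per-particle Gaussian
    noise in place of the odometer's noise model."""
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    ds = d_s + rng.normal(0.0, sigma_s, n)
    dpsi = d_psi + rng.normal(0.0, sigma_psi, n)
    particles[:, 2] += dpsi                          # heading psi
    particles[:, 0] += ds * np.cos(particles[:, 2])  # x
    particles[:, 1] += ds * np.sin(particles[:, 2])  # y
    return particles
```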
According to the updated position (x_i, y_i) of each particle, an image patch of the same size as the satellite image patches in step 1 is captured from the map with the particle's position as its geometric center; the angle between the captured patch's orientation and true north of the earth coordinate frame is the particle's heading angle ψ_i.
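One way to realize this per-particle crop (the function name and nearest-neighbour sampling are assumptions; a real implementation would more likely use an image library's rotate-and-crop routine):

```python
import numpy as np

def crop_at_pose(sat_map, cx, cy, psi, size):
    """Cut a size x size patch centred on map pixel (cx, cy), rotated so the
    patch axes make angle psi with the map's north/up direction."""
    half = size / 2.0
    c, s = np.cos(psi), np.sin(psi)
    out = np.zeros((size, size), dtype=sat_map.dtype)
    for v in range(size):
        for u in range(size):
            dx, dy = u - half, v - half             # patch-frame offsets
            mx = int(round(cx + c * dx - s * dy))   # rotate into map frame
            my = int(round(cy + s * dx + c * dy))
            if 0 <= my < sat_map.shape[0] and 0 <= mx < sat_map.shape[1]:
                out[v, u] = sat_map[my, mx]         # nearest-neighbour sample
    return out
```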
Step 4, construct a top-down grid map G_L = {G_H, G_I} from the three-dimensional point cloud obtained by the vehicle-mounted sensor. For each grid cell, G_H stores the maximum height value among all points falling within that cell, and G_I stores the corresponding reflection-intensity information (if the sensor is a lidar) or color information (if the sensor is a camera) of that highest point.
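A minimal sketch of this rasterisation (the cell size, grid extent, and point tuple layout (x, y, z, intensity) are assumptions):

```python
import numpy as np

def build_grid(points, cell=0.5, size=10):
    """Top-down grid: G_H keeps the maximum height of the points falling in
    each cell, G_I the intensity (or colour) attribute of that highest point."""
    g_h = np.full((size, size), -np.inf)   # max height per cell
    g_i = np.zeros((size, size))           # attribute of the highest point
    for x, y, z, intensity in points:
        u, v = int(x / cell), int(y / cell)
        if 0 <= u < size and 0 <= v < size and z > g_h[v, u]:
            g_h[v, u] = z
            g_i[v, u] = intensity
    return g_h, g_i
```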
Step 5, pair each image patch captured in step 3 with the grid map obtained in step 4 to form a sample pair, and input the pair into the neural network trained in step 1 to obtain the matching probability p_i of the two; each particle corresponds to one matching probability.

For each particle, its weight is updated using the corresponding matching probability p_i according to:

w_i^t = p_i · ŵ_i^{t−1}

where ŵ_i^{t−1} is the particle's normalized weight at the previous moment. The maximum value w_max of all particle weights is recorded, and the particle weights w_i^t are then normalized to obtain ŵ_i^t.
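The weight update reduces to an element-wise multiply followed by normalisation (sum-to-one is assumed here as one common reading of "normalized"; the patent records w_max before normalising, which this sketch mirrors):

```python
import numpy as np

def update_weights(prev_norm_w, match_prob):
    """w_i(t) = p_i * w_hat_i(t-1); record w_max before normalising, then
    renormalise so the weights sum to one."""
    w = np.asarray(match_prob, dtype=float) * np.asarray(prev_norm_w, dtype=float)
    w_max = float(w.max())          # kept for the resampling test against P
    w_norm = w / w.sum()
    return w_norm, w_max
```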
Step 6, to prevent the particle filter from degenerating (after several iterations, the weights of all but a few particles are almost 0), whether to resample must be decided from the weights of the particles. First, the effective number of particles is calculated:

N_eff = 1 / Σ (w_i)², summed over i = 1 … n

where n denotes the number of particles. If N_eff is less than a set threshold T and w_max is greater than a set threshold P, the number of effective (high-weight) particles at the current moment is small while at least one particle has a sufficiently high weight, so the particles are resampled; if resampling is not needed, step 7 is executed directly. The thresholds T and P are set empirically, typically T = 0.6n and P = 0.8. Resampling may use the structured (systematic) sampling method, which selects each particle with probability proportional to its weight's share of the total weight of all particles.
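The degeneracy test and structured (systematic) resampling can be sketched as follows (sum-normalised weights are assumed; `systematic_resample` follows the standard low-variance scheme):

```python
import numpy as np

def n_eff(w):
    """Effective particle count: N_eff = 1 / sum(w_i^2)."""
    return 1.0 / float(np.sum(np.square(w)))

def systematic_resample(particles, w, rng=None):
    """Structured resampling: one random offset, then n evenly spaced
    pointers into the cumulative weight distribution, so each particle is
    drawn roughly in proportion to its share of the total weight."""
    rng = rng or np.random.default_rng(0)
    n = len(w)
    positions = (rng.uniform() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    out = particles[idx].copy()
    out[:, 3] = 1.0 / n               # weights reset to uniform
    return out
```

With T = 0.6n and P = 0.8, resampling would fire only when n_eff(w) < T and w_max > P, as the text describes.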
Step 7, using the particle positions and headings updated in step 3 (after resampling, if it was performed) and the weights updated in step 5, obtain estimates of the vehicle's position and heading at the current moment by weighted summation and averaging (each particle's position is multiplied by its weight and the results are summed and averaged to give the position estimate; likewise, each particle's heading is multiplied by its weight and the results are summed and averaged to give the heading estimate), completing the positioning of the vehicle.
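The final estimate is then a weight-weighted mean over the particle set (here the heading is averaged via unit vectors to handle the ±π wrap, a standard refinement not spelled out in the patent):

```python
import numpy as np

def estimate_pose(particles):
    """Weighted mean of particle states; column layout (x, y, psi, w) with
    weights summing to one."""
    w = particles[:, 3]
    x = float(np.sum(w * particles[:, 0]))
    y = float(np.sum(w * particles[:, 1]))
    # circular mean of headings: average the unit vectors, then take atan2
    psi = float(np.arctan2(np.sum(w * np.sin(particles[:, 2])),
                           np.sum(w * np.cos(particles[:, 2]))))
    return x, y, psi
```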
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. An autonomous vehicle positioning method for matching a three-dimensional point cloud with a satellite map is characterized by comprising the following steps:
step 1, pre-training a neural network, specifically: a satellite image patch and a grid map generated from a three-dimensional point cloud, both corresponding to the same area of the full map, are acquired as a pair of positive samples, whose expected value, namely the matching probability of the two, is 1; meanwhile, a satellite image patch and a point-cloud grid map corresponding to two different areas of the full map are acquired to form a pair of negative samples, whose expected value, namely the matching probability of the two, is 0; multiple positive pairs and negative pairs are acquired to train the neural network, whose output is the matching probability between a satellite image patch and a grid map;
step 2, initializing all particles of the particle filter to obtain initial values of each particle's position (x_i, y_i), heading ψ_i, and weight w_i;
step 3, updating the position, the course and the weight value of the particle according to the data of the dead reckoning module carried by the vehicle based on the position, the course and the initial weight value of the particle obtained in the step 2;
according to the updated position (x_i, y_i) of each particle, capturing from the map an image patch of the same size as the satellite image patches in step 1, with the particle's position as its geometric center; the angle between the captured patch's orientation and true north of the earth coordinate frame is the particle's heading angle ψ_i;
Step 4, constructing a top-down (bird's-eye-view) grid map from the three-dimensional point cloud obtained by the vehicle-mounted sensor;
step 5, pairing each image patch captured in step 3 with the grid map obtained in step 4 to form a sample pair, and inputting the pair into the neural network trained in step 1 to obtain the matching probability of the two; each particle corresponds to one matching probability;
for each particle, multiplying the corresponding matching probability by the normalized weight of the particle at the previous moment to obtain the weight of the particle at the current moment;
and 6, according to the position and the heading of the particles after being updated in the step 3 and the weight after being updated in the step 5, respectively carrying out weighted summation on the position and the heading and then carrying out averaging to obtain estimated values of the position and the heading of the vehicle at the current moment so as to complete vehicle positioning.
2. The method as claimed in claim 1, wherein in step 2 the particle filter is initialized either by scattering particles randomly over the whole map to obtain initial values of each particle's position, heading, and weight on the map, or by scattering particles randomly near an initial vehicle position determined by a satellite positioning method to obtain each particle's initial position, heading, and weight on the map.
3. The method as claimed in claim 1, wherein the dead reckoning module in step 3 is a wheel type odometer, a visual odometer, a laser odometer or an inertial navigation system.
5. The method of claim 4, wherein after the updating of the particles, random noise is added to each particle according to a noise model of the odometer.
6. The method of claim 1, wherein after step 5 is performed and before step 6, the effective number of particles is calculated as N_eff = 1 / Σ (w_i)², summed over i = 1 … n, where n denotes the number of particles; if N_eff is less than a set threshold T and w_max is greater than a set threshold P, the particles are resampled and step 6 is executed based on the resampled particles; otherwise, step 6 is executed directly; w_max denotes the maximum of all particle weights.
7. The method as claimed in claim 6, wherein the threshold T is set to 0.6n and the threshold P is set to 0.8.
8. The method of claim 6, wherein the particle resampling method is a structured sampling method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910204082.4A CN109934868B (en) | 2019-03-18 | 2019-03-18 | Vehicle positioning method based on matching of three-dimensional point cloud and satellite map |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109934868A CN109934868A (en) | 2019-06-25 |
CN109934868B true CN109934868B (en) | 2021-04-06 |
Family
ID=66987518
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||