CN114518108B - Positioning map construction method and device - Google Patents


Info

Publication number
CN114518108B
CN114518108B (application CN202011298342.8A)
Authority
CN
China
Prior art keywords
module
map
laser
point cloud
positioning
Prior art date
Legal status
Active
Application number
CN202011298342.8A
Other languages
Chinese (zh)
Other versions
CN114518108A (en)
Inventor
朱永丰
田超
张昆帆
Current Assignee
Yutong Bus Co Ltd
Original Assignee
Yutong Bus Co Ltd
Priority date
Filing date
Publication date
Application filed by Yutong Bus Co Ltd
Priority to CN202011298342.8A
Publication of CN114518108A
Application granted
Publication of CN114518108B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation by using measurements of speed or acceleration
    • G01C21/12 - Navigation by speed or acceleration measurements executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20 - Instruments for performing navigational calculations
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)

Abstract

The invention belongs to the technical field of automatic driving, and particularly relates to a positioning map construction method and device. First, the sensing data of the multi-source sensing and measurement equipment at the current moment is acquired. Then, the different pose reckoning modules each compute, process and output a pose for the current moment. Next, the weight proportion of each pose reckoning module's result is adaptively assigned according to the factors that influence that module's reckoning accuracy. Finally, the results of the pose reckoning modules are normalized and weighted according to these proportions to obtain the fused position and attitude of the automatic driving vehicle at the current moment, and the laser frame data of that moment are projected into the map coordinate system according to the final position and attitude result to construct the map. By fusing the results of multiple pose reckoning modules, the method obtains a stable and accurate motion trajectory of the automatic driving vehicle across a variety of driving scenes and constructs a high-precision, highly robust laser positioning point cloud map.

Description

Positioning map construction method and device
Technical Field
The invention belongs to the technical field of automatic driving, and particularly relates to a positioning map construction method and device.
Background
A positioning map is the basis for realizing automatic driving: it helps an automatic driving vehicle perceive and position itself, identify the various elements of a traffic scene (such as road information), and detect obstacles, so that a suitable driving route and speed can be planned and obstacles avoided. Traditional electronic maps are mainly generated from satellite imagery and located with GPS, which achieves meter-level precision. Unmanned driving, however, requires finely defined maps with centimeter-level precision, for which traditional electronic maps are no longer adequate.
To construct a high-precision positioning map, data from multiple sensors are usually fused. For example, Chinese patent application publication No. CN109887057A discloses a method for generating a high-precision map that directly fuses the pose measured by a global navigation satellite system (GNSS)/inertial measurement unit (IMU), a camera pose, and a radar pose to obtain a more accurate pose estimate. However, no single positioning system performs well at all times, so accurate pose estimation cannot be guaranteed in every environment. In tunnels, underground garages, and other environments where signals are weak or absent, the GNSS positioning result is poor; if that poor GNSS result is still trusted, neither the pose estimate nor the accuracy of the map can be guaranteed.
Disclosure of Invention
The invention provides a positioning map construction method and device, which solve the problems of poor positioning performance and low map accuracy caused by directly fusing the detection results of multiple sensors in the prior art.
To solve this technical problem, the technical scheme of the invention is as follows:
the invention provides a positioning map construction method comprising the following steps:
1) Acquiring surrounding environment information of mobile equipment at a certain moment;
2) Acquiring the synchronized data of each pose reckoning module arranged on the mobile equipment and processing it accordingly to obtain each module's pose reckoning result for the mobile equipment at that moment; the pose reckoning modules comprise at least two of a GNSS integrated navigation positioning module, a wheel odometer reckoning module, a laser odometer reckoning module and a visual odometer reckoning module, and each pose reckoning result comprises a position and an attitude;
3) Determining the weight proportion of each pose reckoning module according to the factors that influence its reckoning accuracy at that moment, and normalizing the proportions; the influencing factors of the GNSS integrated navigation positioning module comprise the satellite count, channel ratio, dilution of precision and covariance ratio; those of the wheel odometer reckoning module comprise the recursion duration of the wheel odometer and the encoder accuracy; those of the laser odometer reckoning module comprise the recursion duration and feature point count of the laser odometer; and those of the visual odometer reckoning module comprise the recursion duration and inter-frame matching degree of the visual odometer;
4) Weighting and summing the pose reckoning results of the modules at that moment according to the normalized weight proportions, and taking the result as the pose reckoning result of the mobile equipment at that moment;
5) Constructing the map from the pose reckoning results of the mobile equipment at each moment together with the surrounding environment information.
The beneficial effects of this technical scheme are as follows: the method determines each pose reckoning module's weight proportion from the factors that influence its reckoning accuracy, so that modules whose output signals are stable and whose results are more accurate receive larger weights while the other modules receive smaller weights. The pose reckoning result of the mobile equipment is obtained by fusing the results of the several modules in these proportions, so the motion trajectory of the automatic driving vehicle is stable and accurate across driving scenes, including open road sections such as a closed campus and semi-enclosed or enclosed scenes such as long tunnels and underground parking lots. Combined with the environmental information around the automatic driving vehicle, a high-precision, highly robust laser positioning point cloud map can then be constructed.
Further, the weight proportion of the GNSS integrated navigation positioning module is: Weight_GNSS = (δ_satellite_count + δ_channel_ratio + δ_Dop + δ_Ratio) / 4, where δ_satellite_count is a coefficient related to the satellite count, δ_channel_ratio to the channel ratio, δ_Dop to the dilution of precision, and δ_Ratio to the covariance ratio;
the weight proportion of the wheel odometer reckoning module is: Weight_wheel_odometer = (δ_wheel_recursion_duration + δ_encoder_accuracy) / 2, where δ_wheel_recursion_duration is a coefficient related to the recursion duration of the wheel odometer reckoning module and δ_encoder_accuracy to the encoder accuracy;
the weight proportion of the laser odometer reckoning module is: Weight_laser_odometer = (δ_laser_recursion_duration + δ_feature_points) / 2, where δ_laser_recursion_duration is a coefficient related to the recursion duration of the laser odometer reckoning module and δ_feature_points to the feature point count;
the weight proportion of the visual odometer reckoning module is: Weight_visual_odometer = (δ_visual_recursion_duration + δ_interframe_matching) / 2, where δ_visual_recursion_duration is a coefficient related to the recursion duration of the visual odometer reckoning module and δ_interframe_matching to the inter-frame matching degree.
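For illustration, the following is a minimal Python sketch of these per-module weight proportions and their normalization. All function and variable names are illustrative, and the δ coefficients are assumed to have been evaluated already from the influencing factors listed above.

```python
def gnss_weight(d_satellite_count, d_channel_ratio, d_dop, d_ratio):
    # Weight_GNSS = (δ_satellite_count + δ_channel_ratio + δ_Dop + δ_Ratio) / 4
    return (d_satellite_count + d_channel_ratio + d_dop + d_ratio) / 4.0

def odometer_weight(d_recursion_duration, d_other):
    # Common form of the three odometer weights: the mean of the recursion
    # duration coefficient and the module's second coefficient (encoder
    # accuracy, feature point count, or inter-frame matching degree).
    return (d_recursion_duration + d_other) / 2.0

def normalize(weights):
    # Normalize so the weight proportions of the active modules sum to 1.
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

A caller would collect the un-normalized weights of whichever modules are active into a dict and pass it through normalize() before fusing the pose results.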
Further, the weight proportion of the GNSS integrated navigation positioning module is assigned piecewise according to the covariance Ratio of the output of the GNSS integrated navigation positioning and attitude determination module, where Ratio = Variance_suboptimal_solution / Variance_optimal_solution is calculated from the covariance of that module's output.
The coefficient δ_wheel_recursion_duration related to the recursion duration of the wheel odometer reckoning module is determined by C_1, the wheel odometer recursion duration constant coefficient, and Δt_1, the recursion time interval of the wheel odometer.
The coefficient δ_encoder_accuracy related to the encoder accuracy is:
δ_encoder_accuracy = C_2 * D_front_rear_wheelbase + C_3 * r_wheel_radius
where C_2 is the encoding precision constant coefficient, D_front_rear_wheelbase is the distance between the front and rear axles, C_3 is the encoder precision constant coefficient, and r_wheel_radius is the wheel radius of the four wheels.
Further, the coefficient δ_laser_recursion_duration related to the recursion duration of the laser odometer reckoning module is determined by C_4, the laser odometer recursion duration constant coefficient, and Δt_2, the recursion time interval of the laser odometer.
The coefficient δ_feature_points related to the feature points is determined by C_5, the feature point count constant coefficient, n_laser_point_cloud_total, the total number of laser frame point cloud points used for laser odometer reckoning, and n_matched_feature_points, the number of laser points matched between adjacent frames.
The coefficient δ_visual_recursion_duration related to the recursion duration of the visual odometer reckoning module is determined by C_6, the visual odometer recursion duration constant coefficient, and Δt_3, the recursion time interval of the visual odometer.
The coefficient δ_interframe_matching related to the inter-frame matching degree is determined by C_7, the visual odometer inter-frame matching degree constant coefficient, and the matched feature point positions x_i, i = 1, 2, …, n, where |x_i − x_{i+1}| is the Euclidean distance between matched visual feature points.
Further, in step 1), in order to obtain accurate surrounding environment information, the surrounding environment information includes laser point cloud data of the surroundings, or laser point cloud data of the surroundings together with visual image data.
Further, the pose reckoning modules include a GNSS integrated navigation positioning module, and the pose reckoning result of the GNSS integrated navigation positioning module at the initial moment is taken as the pose reckoning result of the mobile equipment at the initial moment.
Further, in order to reject ground points, in step 5), after the map is constructed, the method further includes filtering the ground point cloud from the map according to the elevation of the laser points in the map:
judging whether the elevation of a laser point is greater than an elevation threshold; if it is greater than or equal to the threshold, the point is a non-ground point and its data are retained; if it is smaller than the threshold, the point is a ground point and its data are filtered out.
Further, in step 5), after the map is constructed, the method further includes filtering dynamic objects from the map in order to remove them and improve map accuracy:
obtaining local maps of the same scene at different moments, dividing each local map into grids, and comparing the laser point cloud data of the same grid at the different moments; if the grid contains the target object at all moments, the target object is determined to be non-dynamic and is retained; if the grid contains the target object at at least one moment but not at another, the target object is determined to be dynamic and is filtered out.
Further, in step 5), in order to reduce the search range of the laser matching and positioning process, after the map is constructed, the method further includes rasterizing the map:
dividing the constructed map into n × m grids, filling the corresponding laser point cloud data into the corresponding grids, and storing each grid as a sub-map.
The invention also provides a positioning map construction device comprising a memory and a processor, the processor being configured to execute instructions stored in the memory to implement the positioning map construction method described above, achieving the same effects as the method.
Drawings
FIG. 1 is an overall flow chart of a positioning map construction method of the present invention;
FIG. 2 is a flow chart of the data synchronous acquisition of the present invention;
FIG. 3 is a flow chart of the present invention for constructing a multi-source fusion map;
FIG. 4 is a flow chart of the present invention for filtering out a ground point cloud;
FIG. 5 is a flow chart of the present invention for filtering dynamic objects;
FIG. 6 is a flow chart of the map tile process of the present invention;
fig. 7 is a block diagram of the positioning map construction apparatus of the present invention.
Detailed Description
While the automatic driving vehicle is moving, the invention uses the various sensors arranged on it to obtain, on the one hand, the pose reckoning result of the vehicle (comprising its position and attitude) and, on the other hand, the surrounding environment information, and constructs the laser point cloud map from these two kinds of information. Once the laser point cloud map has been obtained, the vehicle can use it for path planning when it is in automatic driving mode.
A positioning map construction method and a positioning map construction apparatus of the present invention will be described in detail with reference to the drawings and embodiments.
Method embodiment:
the invention is additionally provided with various sensor devices on the automatic driving vehicle, including GNSS integrated navigation devices, laser radars, binocular vision devices, encoders and the like. The method comprises the steps of utilizing a plurality of laser radars to realize sensing measurement of surrounding environment information of an automatic driving vehicle; the GNSS combined navigation equipment can be used for forming a GNSS combined navigation pose estimation module, the laser radar and the IMU equipment in the GNSS combined navigation can be used for forming a laser odometer pose estimation module, the encoder can be used for forming a wheel type odometer pose estimation module, the binocular vision equipment and the IMU equipment in the GNSS combined navigation can be used for forming a vision odometer pose estimation module, and pose estimation of an automatic driving vehicle can be realized. The GNSS integrated navigation device is composed of a GPS receiver and an inertial unit IMU, and the GPS receiver and the inertial unit IMU are combined to output the position and the attitude information of the vehicle. In addition, the laser odometer and the visual odometer need to be corrected and fused by combining IMU information in GNSS integrated navigation in the calculation process so as to output stable position and posture information.
With an automatic driving vehicle equipped in this way, the positioning map construction method of the invention can be implemented; the whole map construction process is described below with reference to fig. 1. Throughout the process, the automatic driving vehicle is in a non-automatic-driving mode.
Step one: acquire the vehicle body information of the automatic driving vehicle and the data collected by the various sensor devices while the vehicle moves. The vision camera provides pixel gray-scale information, RGB color information, environment texture information and the like; the laser radar provides spatial position information and line- and surface-feature environmental structure information; the wheel odometer provides three-axis acceleration, three-axis angular information and the like. The data are clock-synchronized and, after file encryption, stored offline. The specific flow, shown in fig. 2, includes:
1. Start the automatic driving vehicle from an area where the signal of the GNSS integrated navigation equipment is stable, and record the pose reckoning result of the GNSS integrated navigation positioning module at the initial moment, comprising the position and attitude of the vehicle, as the pose reckoning result of the automatic driving vehicle at the initial moment, completing the initialization.
2. Synchronously acquire the data of the different sensors through the clock synchronization system of the GNSS integrated navigation device. To improve storage efficiency and clock synchronization precision, the clock of the GNSS integrated navigation device is used as the trigger signal to clock-synchronize the data of the sensor devices; several non-blocking threads are started to store the data, maximizing the temporal soft synchronization of the data; finally the data are encrypted and stored in a proprietary format.
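As an illustration of the soft synchronization step, the sketch below matches each GNSS clock trigger to the nearest sensor timestamp within a tolerance. The nearest-neighbour strategy and all names are assumptions; the text only states that the GNSS clock is used as the trigger signal.

```python
import bisect

def soft_sync(trigger_times, sensor_stamps, tolerance):
    """For each GNSS trigger time, pick the sensor sample whose timestamp
    is nearest, provided it lies within `tolerance` seconds."""
    sensor_stamps = sorted(sensor_stamps)
    matches = {}
    for t in trigger_times:
        i = bisect.bisect_left(sensor_stamps, t)
        candidates = sensor_stamps[max(0, i - 1):i + 1]  # neighbours of t
        if not candidates:
            continue
        nearest = min(candidates, key=lambda s: abs(s - t))
        if abs(nearest - t) <= tolerance:
            matches[t] = nearest
    return matches
```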
Step two: load the synchronized data of each sensor and, according to the intrinsic and extrinsic calibration parameters among the laser radar devices, the GNSS antenna device, the binocular vision device and the other devices, map the visual texture information onto the laser point cloud data to obtain the surrounding environment information of the automatic driving vehicle. Then obtain stable trajectory position and attitude data of the vehicle according to the corresponding weight assignment strategy. Rotate, scale and translate the data into a unified geodetic coordinate system, complete the rapid stitching and fusion of all laser frames and visual data, and finally output and store the encrypted proprietary map file. The specific flow, shown in fig. 3, includes:
1. Synchronously read the laser frame data at time t according to the timestamps, and rotate, translate and scale them according to the intrinsic and extrinsic calibration parameters of each laser radar device, completing the single-frame stitching of the data of the multiple laser radars.
LidarFramOut_n = LidarFramIn_n * R_n + T_n (1)
LidarFramOut = LidarFramOut_1 + … + LidarFramOut_k + … + LidarFramOut_n (2)
where k = 1, 2, …, n is the laser radar sensor number; R_n is the rotation matrix and T_n the translation matrix of laser radar n; LidarFramIn_n is the laser point cloud data set of laser radar n before conversion; LidarFramOut_n is the converted data set of laser radar n; and LidarFramOut is the stitched data set after converting all the laser radars.
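A minimal NumPy sketch of formulas (1) and (2), treating points as row vectors; the calibration matrices R_n and T_n are assumed given.

```python
import numpy as np

def stitch_lidar_frames(frames, rotations, translations):
    """Formulas (1)-(2): transform each lidar's single-frame cloud into the
    common frame and concatenate the results.

    frames[n]       -- (N_n, 3) array, LidarFramIn_n
    rotations[n]    -- (3, 3) rotation matrix R_n from calibration
    translations[n] -- (3,) translation vector T_n from calibration
    """
    converted = [pts @ R + T                     # LidarFramOut_n, formula (1)
                 for pts, R, T in zip(frames, rotations, translations)]
    return np.vstack(converted)                  # LidarFramOut, formula (2)
```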
2. Determine the weight proportion of each pose reckoning module according to the factors that influence its reckoning accuracy at time t.
(1) GNSS integrated navigation positioning and attitude determination. The factors influencing the pose reckoning accuracy are the satellite count, channel ratio, dilution of precision and covariance ratio, with corresponding coefficients δ_satellite_count, δ_channel_ratio, δ_Dop and δ_Ratio, where δ_satellite_count is the coefficient related to the satellite count, δ_channel_ratio to the channel ratio, δ_Dop to the dilution of precision, and δ_Ratio to the covariance ratio.
From the covariance of the output of the GNSS integrated navigation positioning and attitude determination module, the covariance ratio is obtained as Ratio = Variance_suboptimal_solution / Variance_optimal_solution. There are three positioning states: fixed solution (Ratio ≥ 3), floating solution (1 ≤ Ratio < 3) and single-point solution (0 < Ratio < 1). The fixed solution has positioning accuracy within 3 cm, the highest accuracy and confidence, and receives the largest weight; the floating solution has positioning accuracy below 50 cm, the next highest accuracy and confidence, and the next largest weight; the single-point solution is generally accurate to between 5 and 10 meters, sometimes worse, is essentially unusable for positioning, has the worst confidence, and receives the smallest weight.
The weight of the output positioning and attitude determination result is assigned according to these three states, with the fixed solution receiving the largest weight, the floating solution the next largest, and the single-point solution the smallest (3).
(2) Wheel odometer positioning and attitude determination. The vehicle position is reckoned from the speed, acceleration and heading in the vehicle body information. The pose reckoning result of the wheel odometer reckoning module grows linearly with the recursion duration and the encoder accuracy of the wheel odometer, and the corresponding weight proportion is:
Weight_wheel_odometer = (δ_wheel_recursion_duration + δ_encoder_accuracy) / 2 (4)
where δ_wheel_recursion_duration is the coefficient related to the recursion duration of the wheel odometer reckoning module, determined by C_1, the wheel odometer recursion duration constant coefficient (taken as 0.0429 in this embodiment), and Δt_1, the recursion time interval of the wheel odometer; δ_encoder_accuracy is the coefficient related to the encoder accuracy, δ_encoder_accuracy = C_2 * D_front_rear_wheelbase + C_3 * r_wheel_radius, where C_2 is the encoding precision constant coefficient (taken as 0.00137 in this embodiment), D_front_rear_wheelbase is the distance between the front and rear axles, C_3 is the encoder precision constant coefficient (taken as 0.0264 in this embodiment), and r_wheel_radius is the wheel radius of the four wheels.
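The encoder-accuracy coefficient is the one coefficient whose expression is stated in full, so it can be written down directly with the embodiment's constants; the recursion-duration coefficient is taken as an opaque input here, since only its inputs (C_1 and Δt_1) are stated above.

```python
C2_ENCODING_PRECISION = 0.00137   # C_2, embodiment value
C3_ENCODER_PRECISION = 0.0264     # C_3, embodiment value

def encoder_accuracy_coefficient(wheelbase_m, wheel_radius_m):
    # δ_encoder_accuracy = C_2 * D_front_rear_wheelbase + C_3 * r_wheel_radius
    return (C2_ENCODING_PRECISION * wheelbase_m
            + C3_ENCODER_PRECISION * wheel_radius_m)

def wheel_odometer_weight(d_recursion_duration, wheelbase_m, wheel_radius_m):
    # Formula (4): the mean of the two coefficients.
    return (d_recursion_duration
            + encoder_accuracy_coefficient(wheelbase_m, wheel_radius_m)) / 2.0
```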
(3) Laser odometer positioning and attitude determination. The relative pose is reckoned by matching consecutive real-time laser frames, realizing relative positioning. The recursive pose reckoning result is linearly related to the recursion duration of the laser odometer and the number of environmental laser feature points, and the corresponding weight proportion is:
Weight_laser_odometer = (δ_laser_recursion_duration + δ_feature_points) / 2 (5)
where δ_laser_recursion_duration is the coefficient related to the recursion duration of the laser odometer reckoning module, determined by C_4, the laser odometer recursion duration constant coefficient (taken as 0.8285 in this embodiment), and Δt_2, the recursion time interval of the laser odometer; δ_feature_points is the coefficient related to the feature point count, determined by C_5, the feature point count constant coefficient (taken as 0.0726 in this embodiment), n_laser_point_cloud_total, the total number of laser frame point cloud points used for laser odometer reckoning, and n_matched_feature_points, the number of laser points matched between adjacent frames.
(4) Visual odometer positioning and attitude determination. Stereo image matching and reckoning of the vehicle pose are completed from the image information collected by the vision sensor. The recursive pose reckoning result is linearly related to the recursion duration and the inter-frame matching degree of the visual odometer, and the corresponding weight proportion is:
Weight_visual_odometer = (δ_visual_recursion_duration + δ_interframe_matching) / 2 (6)
where δ_visual_recursion_duration is the coefficient related to the recursion duration of the visual odometer reckoning module, determined by C_6, the visual odometer recursion duration constant coefficient (taken as 0.0942 in this embodiment), and Δt_3, the recursion time interval of the visual odometer; δ_interframe_matching is the coefficient related to the inter-frame matching degree, determined by C_7, the visual odometer inter-frame matching degree constant coefficient (taken as 0.1738 in this embodiment), and the matched feature point positions x_i, i = 1, 2, …, n, where |x_i − x_{i+1}| is the Euclidean distance between matched visual feature points.
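For the inter-frame matching degree, the quantity defined above is the Euclidean distance |x_i − x_{i+1}| between matched visual feature points; a hypothetical helper computing those distances might look as follows. How the distances enter δ_interframe_matching is not spelled out above, so only the distance computation is sketched.

```python
import math

def matched_feature_distances(points_prev, points_curr):
    """Euclidean distances |x_i - x_{i+1}| between matched visual feature
    points in consecutive frames; the pairing is assumed already established
    by the stereo/inter-frame matching step."""
    return [math.dist(p, q) for p, q in zip(points_prev, points_curr)]
```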
3. Normalize the weight proportions to obtain the normalized weight proportions. The specific formulas are:
Weight_Total = Weight_GNSS + Weight_wheel_odometer + Weight_laser_odometer + Weight_visual_odometer (7)
Weight_GNSS_1 = Weight_GNSS / Weight_Total (8)
Weight_wheel_odometer_1 = Weight_wheel_odometer / Weight_Total (9)
Weight_laser_odometer_1 = Weight_laser_odometer / Weight_Total (10)
Weight_visual_odometer_1 = Weight_visual_odometer / Weight_Total (11)
4. Weight and sum the pose reckoning results of the modules at time t according to the normalized weight proportions, and take the result as the pose reckoning result of the mobile equipment at time t, i.e. the position and attitude of the laser frame at time t.
For example, in an outdoor open road scene the GNSS integrated navigation data is a fixed solution, i.e. the GNSS integrated navigation signal is unobstructed and the positioning error is below ±0.05 m, so according to the reckoning result the GNSS integrated navigation position (x, y, z) and attitude (Roll, Pitch, Yaw) are selected as the position and attitude of the current laser frame. In a viaduct scene the GNSS integrated navigation data is a non-fixed solution, the weights of the laser odometer and the visual odometer increase, and the position and attitude obtained by fusing the laser odometer and the visual odometer are automatically taken as the pose of the current laser frame. Note that in both scenes the pose reckoning results of the remaining modules are effectively ignored because their weight proportions are approximately 0 (not exactly 0): in the first scene the weight proportion of the GNSS integrated navigation positioning module dominates, and in the second the weight proportions of the laser odometer and visual odometer reckoning modules dominate.
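Putting formulas (7)-(11) together with step 4, a minimal fusion sketch follows; representing the attitude as Euler angles and averaging them directly is an assumption that is only well behaved when the module estimates are close, as they are here by construction.

```python
import numpy as np

def fuse_poses(weights, poses):
    """Normalize the module weights (formulas (7)-(11)) and form the weighted
    sum of the module pose results at time t.

    weights -- {module name: un-normalized Weight value}
    poses   -- {module name: (x, y, z, roll, pitch, yaw)}
    """
    total = sum(weights.values())                 # Weight_Total, formula (7)
    fused = np.zeros(6)
    for name, pose in poses.items():
        fused += (weights[name] / total) * np.asarray(pose, dtype=float)
    return fused
```

In the open-road example above, the GNSS weight dominates after normalization, so the fused result is essentially the GNSS position (x, y, z) and attitude (Roll, Pitch, Yaw).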
5. Combine the laser frames with their positions and attitudes, rotate, scale and translate them into a unified geodetic coordinate system, and complete the rapid stitching and fusion of all laser frames and visual data to obtain the laser point cloud map data, called the multi-source fusion map; finally output and store it as an encrypted proprietary map file.
Step three: after the laser point cloud map (multi-source fusion map) is obtained, optimize and process it. First rapidly filter the ground point cloud and dynamic objects from the map data, then tile the map according to a fixed grid size, and finally output the high-precision positioning sub-map files and the map index file for vehicle positioning. The specific flow includes:
1. To improve the accuracy of laser map matching and positioning during later automatic driving, the laser point cloud map must be processed. The amount of ground data in the map is large and its self-similarity reduces the accuracy of laser map matching and positioning, so the ground point cloud must be removed quickly. The flow is shown in fig. 4.
Because the laser device is installed at a fixed position on the vehicle with its tangential scanning angle parallel to the ground, points below the elevation threshold H0 are first filtered out directly. Next the true ground point cloud is rapidly segmented by a ground fitting algorithm and removed from the point cloud map, completing the rapid filtering of the ground point cloud.
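A sketch of this two-stage ground removal: a fast elevation gate at H0, then removal of points close to a fitted ground plane. The text does not name the fitting algorithm, so the least-squares plane fit, the 20% ground-candidate fraction and the 0.1 m residual threshold below are assumptions.

```python
import numpy as np

def filter_ground(points, h0, residual_threshold=0.1):
    """points is an (N, 3) array in the map frame (z = elevation)."""
    points = points[points[:, 2] >= h0]       # fast gate: drop points below H0
    # Fit a plane z = a*x + b*y + c to the lowest 20% of the remaining points,
    # taken as ground candidates (the fraction is illustrative).
    low = points[points[:, 2] <= np.percentile(points[:, 2], 20)]
    A = np.c_[low[:, 0], low[:, 1], np.ones(len(low))]
    (a, b, c), *_ = np.linalg.lstsq(A, low[:, 2], rcond=None)
    residual = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    return points[residual > residual_threshold]   # keep non-ground points
```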
2. During data acquisition and map construction, occlusion by dynamic objects such as vehicles and pedestrians often occurs. This greatly reduces map accuracy and hinders later map-based positioning, so the dynamic objects must be filtered out. The flow is shown in fig. 5.
The same scene is acquired twice, yielding semantic point cloud maps A and B, on the assumption that the same dynamic object is very unlikely to appear at the same position in both acquisitions. The map data are divided into grids (a 1 cm grid is found to work best by default) and the laser point cloud data of the same grid at the two moments are compared. If one acquisition contains an object in a grid and the other does not, the object is considered dynamic, completing the rejection and filtering of dynamic objects such as pedestrians and vehicles from the acquired data; if the grid contains the object in both acquisitions, the object is considered non-dynamic and is retained.
For the retained data, the scene area is divided into grids and the grids are mapped to the semantic point cloud maps. The point counts of semantic point cloud maps A and B are traversed within each sub-grid: if the point clouds of both maps are empty in a sub-grid, the sub-grid is empty; if both are non-empty, then when the point count consistency of the two maps in the sub-grid is above 90% one of the laser semantic point cloud maps is selected and a new grid map is built to store its points, and when the consistency is at or below 90% the laser semantic map with fewer points is selected and a new grid map is built to store its points.
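A sketch of this two-pass comparison and merge; the 1 cm cell and the 90% threshold come from the text, while measuring point-count consistency as min/max of the two counts is an assumption.

```python
from collections import defaultdict

def grid_cells(points, cell=0.01):
    """Bucket (x, y, z) points into 1 cm ground-plane cells."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return cells

def remove_dynamic(map_a, map_b, cell=0.01, consistency=0.9):
    a, b = grid_cells(map_a, cell), grid_cells(map_b, cell)
    merged = []
    for key in a.keys() & b.keys():        # occupied in both passes: static
        pa, pb = a[key], b[key]
        ratio = min(len(pa), len(pb)) / max(len(pa), len(pb))
        # Above 90% consistency keep one map's points; otherwise the sparser.
        merged.extend(pa if ratio > consistency else min(pa, pb, key=len))
    return merged        # cells occupied in only one pass are dropped as dynamic
```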
3. The laser point cloud map is constructed to provide high-precision laser positioning for the automatic driving vehicle. If the map data of the whole scene were read into memory, laser matching and positioning would be slow, and the pose reckoning module could block or crash, degrading the reckoning effect and accuracy. The map is therefore tiled to reduce the search range of the laser matching and positioning process and improve positioning real-time performance and hence accuracy. The flow is shown in fig. 6.
First the laser point cloud data are rasterized according to the grid size Grid_Size, dividing the laser point cloud map data into n rows and m columns, i.e. Num = n × m grids. The corresponding point cloud data are then filled into the corresponding cells. Finally each grid is stored as an encrypted proprietary file, generating the positioning sub-map data; map reference data are generated at the same time, including the id of the associated high-precision sub-map, the ground coordinates of the lower-left and upper-right points of the circumscribed rectangle of the grid block, and the point cloud coordinates of the upper-left corner of the map data.
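A sketch of the tiling and index generation; field names and the in-memory layout are illustrative, since the patent stores the sub-maps and index as encrypted proprietary files.

```python
import numpy as np

def tile_map(points, grid_size):
    """Split the (N, 3) map into grid_size x grid_size tiles and build the
    index records (sub-map id, lower-left / upper-right ground coordinates)."""
    origin = points[:, :2].min(axis=0)
    keys = ((points[:, :2] - origin) // grid_size).astype(int)
    tiles, index = {}, []
    for tile_id, key in enumerate(sorted({tuple(k) for k in keys})):
        mask = (keys == key).all(axis=1)
        tiles[tile_id] = points[mask]                 # sub-map point cloud
        lower_left = origin + np.asarray(key) * grid_size
        index.append({"id": tile_id,
                      "lower_left": lower_left.tolist(),
                      "upper_right": (lower_left + grid_size).tolist()})
    return tiles, index
```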
This completes the construction of the whole laser point cloud map. First, the sensing data of the multi-source sensing and measurement equipment of the automatic driving vehicle at the current moment are acquired. Then the synchronized initial fusion data are output through the time synchronization module and the coordinate transformation module and fed to the different pose reckoning modules, which reckon and process a number of current poses. Next, the weight proportions of the pose reckoning results are adaptively assigned according to the factors influencing each module's accuracy. Finally, the module results are normalized and weighted according to these proportions to output the position and attitude of the automatic driving vehicle at the current moment; the laser frame data of that moment are rotated and translated according to the final position and attitude result and projected into the map coordinate system for map construction. By fusing the results of the multiple pose reckoning modules, the invention outputs high-precision pose data suitable for a variety of automatic driving scenes and constructs a high-precision, highly robust laser positioning point cloud map.
Device embodiment:
an embodiment of a positioning map construction device of the present invention, as shown in fig. 7, includes a memory, a processor, and an internal bus, where the processor and the memory complete communication and data interaction with each other through the internal bus. The memory includes at least one software functional module stored in the memory, and the processor executes various functional applications and data processing by running the software program and the module stored in the memory, so as to implement a positioning map construction method in the method embodiment of the present invention, that is, the steps of the scheme in the method embodiment are implemented by the program to instruct relevant hardware, and the program executes the steps including the method embodiment.
The processor may be a microprocessor MCU, a programmable logic device FPGA, or other processing device. The memory is used for storing a program, and the processor executes the program after receiving the execution instruction.
The memory can be any memory that stores information electrically, such as RAM or ROM; magnetically, such as a hard disk, floppy disk, magnetic tape, magnetic core memory, bubble memory or USB flash drive; or optically, such as a CD or DVD. Other types, such as quantum memory or graphene memory, are of course also possible.

Claims (8)

1. A positioning map construction method, characterized by comprising the following steps:
1) Acquiring surrounding environment information of mobile equipment at a certain moment;
2) Acquiring the synchronized data of each pose reckoning module arranged on the mobile equipment and processing it accordingly to obtain each module's pose reckoning result for the mobile equipment at that moment; the pose reckoning modules comprise at least two of a GNSS integrated navigation positioning module, a wheel odometer reckoning module, a laser odometer reckoning module and a visual odometer reckoning module, and each pose reckoning result comprises a position and an attitude;
3) Calculating, for each pose reckoning module, the coefficients related to the factors that influence its reckoning accuracy at that moment, averaging the coefficients of a module to obtain that module's weight proportion, thereby determining the weight proportion of every pose reckoning module, and normalizing the proportions;
the influencing factors of the GNSS integrated navigation positioning module comprise the satellite count, channel ratio, dilution of precision and covariance ratio; the covariance ratio output by the GNSS integrated navigation positioning module corresponds to a fixed solution, a floating solution or a single-point solution, and the weight proportions of the GNSS integrated navigation positioning module for these three positioning states decrease in that order;
the influencing factors of the wheel odometer reckoning module comprise the recursion duration of the wheel odometer reckoning module and the encoder accuracy; the coefficient δ_wheel_recursion_duration related to the recursion duration of the wheel odometer reckoning module is determined by C_1, the wheel odometer recursion duration constant coefficient, and Δt_1, the recursion time interval of the wheel odometer; the coefficient δ_encoder_accuracy related to the encoder accuracy is δ_encoder_accuracy = C_2 * D_front_rear_wheelbase + C_3 * r_wheel_radius, where C_2 is the encoding precision constant coefficient, D_front_rear_wheelbase is the distance between the front and rear axles, C_3 is the encoder precision constant coefficient, and r_wheel_radius is the wheel radius of the four wheels;
the influencing factors of the laser odometer reckoning module comprise the recursion duration and the feature point count of the laser odometer reckoning module; the coefficient δ_laser_recursion_duration related to the recursion duration of the laser odometer reckoning module is determined by C_4, the laser odometer recursion duration constant coefficient, and Δt_2, the recursion time interval of the laser odometer; the coefficient δ_feature_points related to the feature points is determined by C_5, the feature point count constant coefficient, n_laser_point_cloud_total, the total number of laser frame point cloud points used for laser odometer reckoning, and n_matched_feature_points, the number of laser points matched between adjacent frames;
the influencing factors of the visual odometer reckoning module comprise the recursion duration and the inter-frame matching degree of the visual odometer; the coefficient δ_visual_recursion_duration related to the recursion duration of the visual odometer reckoning module is determined by C_6, the visual odometer recursion duration constant coefficient, and Δt_3, the recursion time interval of the visual odometer; the coefficient δ_interframe_matching related to the inter-frame matching degree is determined by C_7, the visual odometer inter-frame matching degree constant coefficient, and the matched feature point positions x_i, i = 1, 2, …, n, where |x_i − x_{i+1}| is the Euclidean distance between matched visual feature points;
4) Weighting and summing the pose reckoning results of the modules at that moment according to the normalized weight proportions, and taking the result as the pose reckoning result of the mobile equipment at that moment;
5) Constructing the map from the pose reckoning results of the mobile equipment at each moment together with the surrounding environment information.
2. The positioning map construction method according to claim 1, wherein the weight proportion of the GNSS integrated navigation positioning module is assigned piecewise according to the covariance Ratio of the output of the GNSS integrated navigation positioning and attitude determination module, where Ratio = Variance_suboptimal_solution / Variance_optimal_solution is calculated from the covariance of that module's output.
3. The positioning map construction method according to claim 1 or 2, wherein in step 1) the surrounding environment information comprises laser point cloud data of the surroundings, or laser point cloud data of the surroundings together with visual image data.
4. The positioning map construction method according to claim 1, wherein the pose reckoning modules comprise a GNSS integrated navigation positioning module, and the pose reckoning result of the GNSS integrated navigation positioning module at the initial moment is taken as the pose reckoning result of the mobile equipment at the initial moment.
5. The positioning map construction method according to claim 3, wherein in step 5), after the map is constructed, the method further comprises filtering the ground point cloud from the map according to the elevation of the laser points in the map:
judging whether the elevation of a laser point is greater than an elevation threshold; if it is greater than or equal to the threshold, the point is a non-ground point and its data are retained; if it is smaller than the threshold, the point is a ground point and its data are filtered out.
6. The positioning map construction method according to claim 3, wherein in step 5), after the map is constructed, the method further comprises filtering dynamic objects from the map:
obtaining local maps of the same scene at different moments, dividing each local map into grids, and comparing the laser point cloud data of the same grid at the different moments; if the grid contains the target object at all moments, the target object is determined to be non-dynamic and is retained; if the grid contains the target object at at least one moment but not at another, the target object is determined to be dynamic and is filtered out.
7. The positioning map construction method according to claim 3, wherein in step 5), after the map is constructed, the method further comprises rasterizing the map:
dividing the constructed map into n × m grids, where n ≥ 1 and m ≥ 1, filling the corresponding laser point cloud data into the corresponding grids, and storing each grid as a sub-map.
8. A positioning map construction device, comprising a memory and a processor, the processor being configured to execute instructions stored in the memory so as to implement the positioning map construction method according to any one of claims 1 to 7.
CN202011298342.8A 2020-11-18 2020-11-18 Positioning map construction method and device Active CN114518108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011298342.8A CN114518108B (en) 2020-11-18 2020-11-18 Positioning map construction method and device


Publications (2)

Publication Number Publication Date
CN114518108A (en) 2022-05-20
CN114518108B (en) 2023-09-08

Family

ID=81594449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011298342.8A Active CN114518108B (en) 2020-11-18 2020-11-18 Positioning map construction method and device

Country Status (1)

Country Link
CN (1) CN114518108B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018067473A1 (en) * 2016-10-03 2018-04-12 Agjunction Llc Using optical sensors to resolve vehicle heading issues

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104854428A (en) * 2013-12-10 2015-08-19 深圳市大疆创新科技有限公司 Sensor fusion
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method
CN108345008A (en) * 2017-01-23 2018-07-31 郑州宇通客车股份有限公司 Target object detection method and point cloud data extraction method and device
CN108664841A (en) * 2017-03-27 2018-10-16 郑州宇通客车股份有限公司 Dynamic and static object recognition method and device based on laser point cloud
CN109697710A (en) * 2017-10-21 2019-04-30 江苏华扬信息科技有限公司 Filtering algorithm based on hierarchical grids
CN107918386A (en) * 2017-10-25 2018-04-17 北京汽车集团有限公司 Multi-sensor information fusion method, device and vehicle
CN108254758A (en) * 2017-12-25 2018-07-06 清华大学苏州汽车研究院(吴江) Three-dimensional road construction method based on multi-line laser radar and GPS
CN108181636A (en) * 2018-01-12 2018-06-19 中国矿业大学 Environmental modeling and map construction device and method for a petrochemical plant inspection robot
CN110243358A (en) * 2019-04-29 2019-09-17 武汉理工大学 Multi-source-fusion indoor and outdoor localization method and system for unmanned vehicles
CN110196044A (en) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 Mapping method for an intelligent mobile robot based on GPS closed-loop detection
CN110430534A (en) * 2019-09-12 2019-11-08 北京云迹科技有限公司 Positioning selection method and device, electronic equipment and storage medium
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual-inertial-laser data fusion
CN111739063A (en) * 2020-06-23 2020-10-02 郑州大学 Electric power inspection robot positioning method based on multi-sensor fusion
CN111812698A (en) * 2020-07-03 2020-10-23 北京图森未来科技有限公司 Positioning method, device, medium and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Zhihui et al., "Modeling and simulation of multi-sensor data fusion navigation and positioning based on a Kalman filter model", Digital Communication World, pp. 62-63 *

Also Published As

Publication number Publication date
CN114518108A (en) 2022-05-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: No. 6, Yutong Road, Guancheng Hui District, Zhengzhou, Henan 450061
Applicant after: Yutong Bus Co.,Ltd.
Address before: No.1, Shibali Heyu Road, Guancheng Hui District, Zhengzhou City, Henan Province
Applicant before: ZHENGZHOU YUTONG BUS Co.,Ltd.
GR01 Patent grant