CN110631593A - Multi-sensor fusion positioning method for automatic driving scene - Google Patents
- Publication number
- CN110631593A (application CN201911165058.0A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- information
- yaw
- particle
- lane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
- Navigation (AREA)
Abstract
The invention discloses a multi-sensor fusion positioning method for an automatic driving scene, which uses low-cost sensors and a vector map to achieve lane-level positioning through an improved particle filter algorithm. The method has a clear price advantage, which favors the popularization of automatic driving technology; it guarantees positioning accuracy, is convenient to use, outputs high-frequency positioning information with an adjustable frequency, and provides reference data for environment perception and vehicle body control.
Description
Technical Field
The invention belongs to the field of automatic driving, and particularly relates to a multi-sensor fusion positioning method for an automatic driving scene.
Background
High-precision positioning is an important research topic in the field of automatic driving. At present, lane-level positioning is realized by combining sensors such as high-precision integrated navigation units, multi-line lidar and cameras with a high-precision map, mainly through fusion positioning algorithms such as Kalman filtering, particle filtering and SLAM (simultaneous localization and mapping). The conventional Kalman filtering algorithm needs expensive high-precision RTK (real-time kinematic) and IMU (inertial measurement unit) equipment, and its positioning accuracy degrades where GPS signals are unstable, such as under viaducts and in tunnels. Existing particle filter algorithms must detect road signs with a deep learning module, match them against road features in a prior map and update the model parameters. SLAM algorithms are less robust on motorways where vehicles travel fast, and they also require a high-performance computing platform. The prior art therefore lacks a positioning method for automatic driving vehicles that is both economical and precise.
Disclosure of Invention
Purpose of the invention: aiming at the problems in the prior art, the invention provides a multi-sensor fusion positioning method for an automatic driving scene that is economical and positions the vehicle accurately.
Technical scheme: to achieve the above object, the present invention provides a multi-sensor fusion positioning method for an automatic driving scene, comprising the following steps:
step 1, a vehicle-mounted sensor collects the driving information of a vehicle in real time; the driving information of the vehicle comprises longitude and latitude of the vehicle, speed information of the vehicle, course information, a lane where the vehicle is located and a distance between the vehicle and a center line of the lane where the vehicle is located;
step 2, on a vector map, drawing a circle with the longitude and latitude of the vehicle acquired in step 1 as its center and the GPS positioning deviation as its radius, and setting a particle swarm within the circle according to a Gaussian distribution; the vector map comprises lane line, lane width and lane heading angle information;
step 3, adding Gaussian noise to the heading information and speed information acquired by the sensors and to the position information of each particle in the particle swarm set in step 2, and inputting them into a first constant turn rate and velocity (CTRV) model; the first CTRV model outputs the state information of each particle, which comprises the coordinates of the particle in the UTM coordinate system and its heading information;
step 4, setting the weight value of the particles which are not in the lane where the vehicle is located to be 0; respectively calculating the weight values of the remaining particle points;
step 5, obtaining the position information of the vehicle by a weighted average from the state information of each particle obtained in step 3 and the weight of each particle obtained in step 4.
Further, in step 1 a plurality of sensors are adopted, each with a different data source, so that low-cost sensors can be selected.
Further, the GPS positioning deviation λ in step 2 is obtained by calculation from η, θ, h, β, σ and μ, where η denotes the GPS positioning accuracy, θ is the number of received satellites, h is the horizontal dilution of precision, β takes a value in the range 0.55 to 0.65, σ is a stability coefficient, and μ is a horizontal precision coefficient. This effectively ensures the robustness of the whole method.
Further, the method for obtaining the weight value of the particle point in step 4 includes the following steps:
step 401, according to the formulas:

Δd_i = d_c - d_i^p
Δyaw_i = yaw_c + yaw_i^r - yaw_i^p

respectively calculating a position difference and a heading difference for each in-lane particle of the particle swarm; wherein Δd_i is the position difference of the i-th particle, d_c is the distance deviation of the current vehicle from the lane center line output by the Camera, d_i^p is the distance deviation of the i-th particle from the lane center line, Δyaw_i is the heading-angle difference of the i-th particle, yaw_c is the deviation of the current vehicle from the lane heading angle output by the Camera, yaw_i^r is the heading angle of the road in which the i-th particle is located, and yaw_i^p is the heading angle of the i-th particle.
Step 402, substituting the position difference value and the course difference value of each particle in the lane obtained in the step 401 into a probability density function, and obtaining the weight value of each particle point after normalizationw i ;
Wherein the content of the first and second substances,w i is as followsiThe weight of each of the particles is determined,σ d represents the variance of the distance deviation of the Camera detected vehicle from the lane center line,u d represents the mean of the range deviations of the Camera detected vehicle from the lane center line,σ yaw represents the variance of the deviation of the Camera detected vehicle from the heading angle of the lane,u yaw mean values representing the deviation of the Camera detected vehicle from the heading angle of the lane.
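A minimal Python sketch of steps 401 to 402. Since the patent's probability density function is not reproduced here, a product of two Gaussian densities is assumed, with σ_d, u_d, σ_yaw, u_yaw treated as the standard deviation and mean of the Camera measurement errors:

```python
import math

def particle_weights(deltas, sigma_d, u_d, sigma_yaw, u_yaw):
    """Gaussian-likelihood weights for the in-lane particles (step 402).

    deltas: list of (delta_d, delta_yaw) pairs per particle, from step 401.
    sigma_*/u_*: assumed standard deviation and mean of the Camera errors.
    Returns weights normalized to sum to 1.
    """
    def gauss(x, mu, sigma):
        # One-dimensional Gaussian probability density.
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    raw = [gauss(dd, u_d, sigma_d) * gauss(dy, u_yaw, sigma_yaw) for dd, dy in deltas]
    total = sum(raw)
    return [w / total for w in raw]
```

A particle whose position and heading differences are closest to the mean measurement error receives the largest weight.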
Further, the method also comprises a high-frequency module: the vehicle position information obtained in step 5, the real-time vehicle speed information and the vehicle heading information are input into the high-frequency module, which outputs the vehicle position information; the high-frequency module calculates the position of the vehicle with a constant turn rate and velocity model, so that high-frequency positioning information with an adjustable frequency can be output.
Further, the high frequency module operation includes the steps of:
step 601, inputting the position information of the vehicle obtained in step 5, the currently collected vehicle speed information and vehicle heading information into a second CTRV model to calculate the position information x_t, y_t, yaw_t of the vehicle at the next moment and output it, the second CTRV model being:

yaw_t = yaw_{t-1} + yaw'_{v,t-1} × Δt
x_t = x_{t-1} + v'_t × cos(yaw_t) × Δt
y_t = y_{t-1} + v'_t × sin(yaw_t) × Δt

where yaw_t is the heading angle of the vehicle at time t, yaw_{t-1} is the heading angle at time t-1, yaw'_{v,t-1} is the angular velocity of the vehicle heading angle output by the IMU at time t-1, Δt is the time difference, x_t and y_t are the abscissa and ordinate of the vehicle in the UTM coordinate system at time t, x_{t-1} and y_{t-1} are the abscissa and ordinate at time t-1, and v'_t is the speed of the vehicle output by the vehicle ODOM (odometer) at time t;
step 602, detecting whether new vehicle speed information and vehicle heading information have been collected; if so, executing step 603; if not, repeating step 602;
step 603, detecting whether step 5 has output new vehicle position information; if not, taking the position information x_t, y_t, yaw_t of the vehicle obtained in step 601 together with the newly collected vehicle speed and heading information as input data of the second CTRV model, calculating the position information x_t, y_t, yaw_t of the vehicle at the next moment, outputting it and then repeating step 602; if step 5 has output new vehicle position information, repeating steps 601 to 602.
Working principle: a positioning scheme for the automatic driving scene is provided in which low-cost sensors such as a GPS, an IMU and a Camera are combined with a vector map to realize lane-level positioning through an improved particle filter algorithm. The hardware and vector map data used in the scheme are common resources of automatic driving technology, so accurate lane-level positioning is achieved without expensive sensing equipment or computing platforms.
Beneficial effects: compared with the prior art, the invention makes the following notable advances:
1. The GPS and IMU adopted by the invention are low-cost sensing devices with an obvious price advantage, which favors the popularization of automatic driving technology.
2. The invention fuses the absolute position information of the GPS, a sensor necessary for an automatic driving vehicle, with the road perception information of the Camera, effectively guaranteeing the positioning accuracy.
3. The invention establishes, through the particle swarm, a constraint relation between the absolute GPS deviation and the lateral position within the lane, realizing lane-level positioning; the same level of positioning accuracy is reached without a deep-learning target-detection platform, making the method more convenient to use. Meanwhile, Gaussian noise is added to the whole particle swarm to simulate real conditions, so the input data of the algorithm carry the uncertainty of the system; after the particle weights are updated by the observation module, the output of the particle filter algorithm is more accurate.
4. The invention combines the positioning information output by the particle filter with a vehicle prediction model, so that high-frequency positioning information with an adjustable frequency can be output, providing reference data for environment perception and vehicle body control.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a schematic diagram of a particle swarm setting range on a vector map;
FIG. 3 is a schematic diagram of particle swarm arrangement on a vector map;
FIG. 4 is a schematic diagram of a particle swarm screened according to visual lane detection information;
fig. 5 is a diagram illustrating the positioning result.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the examples of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
the multi-sensor fusion positioning method for the automatic driving scene disclosed by the embodiment specifically comprises the following steps:
step 1, a GPS (global positioning system), an IMU (inertial measurement unit), a Camera and an ODOM (odometer) installed on the vehicle respectively acquire real-time information of the vehicle; the GPS is arranged on the roof at the rotation center of the vehicle, the IMU is arranged at the rotation center of the vehicle, and the Camera is arranged on the front windshield on the central axis of the vehicle. The GPS mainly collects the longitude and latitude of the vehicle position; an RAC-P1 GPS is adopted in this embodiment. The IMU collects the heading information of the vehicle, comprising the heading angle and its angular velocity; an MTI-30 IMU is adopted in this embodiment. The Camera collects lane detection information, comprising the specific lane where the vehicle is located, the distance of the vehicle from the lane center line and the deviation of the vehicle from the lane heading angle; an MV-UBS131GC Camera is adopted in this embodiment. The vehicle ODOM collects the speed information of the vehicle.
GPS output in this example — longitude: 118.974608, latitude: 31.708664, number of satellites: 12, horizontal dilution of precision: 0.66. UTM coordinates — east (x axis): 687116.344, north (y axis): 3509839.137. Positioning deviation: 1.8 m. Heading angle output by the IMU: 90°; IMU angular velocity: 0.1°/s. Vehicle speed: 5 m/s. Information detected by the Camera: distance from the lane center line 0.3 m, angle between the vehicle and the road 2°.
In step 2, the positioning deviation λ is obtained by calculation from η, θ, h, β, σ and μ, where η denotes the GPS positioning accuracy, θ is the number of received satellites, h is the horizontal dilution of precision, β takes a value in the range 0.55 to 0.65, σ is a stability coefficient, and μ is a horizontal precision coefficient.
Step 3, on the vector map, a circle is drawn with the longitude and latitude of the vehicle acquired in step 1 as its center and the GPS positioning deviation λ obtained in step 2 as its radius, and a particle swarm is set within the circle according to a Gaussian distribution. The vector map is in the UTM coordinate system and contains road information such as lane lines, lane widths and lane heading angles.
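The particle-swarm initialization of step 3 can be sketched as follows; tying the Gaussian spread to one third of the deviation circle's radius is an assumption of this sketch, since the patent only states that the particles follow a Gaussian distribution inside the circle:

```python
import math
import random

def init_particles(center_x, center_y, gps_dev, n=1000, seed=0):
    """Sample a particle swarm around the GPS fix (UTM coordinates).

    Particles follow a 2-D Gaussian centred on the fix; the standard
    deviation gps_dev/3 (an assumption) keeps ~99.7% of the samples
    inside the circle of radius gps_dev, and the rest are rejected.
    """
    rng = random.Random(seed)
    sigma = gps_dev / 3.0
    particles = []
    while len(particles) < n:
        x = rng.gauss(center_x, sigma)
        y = rng.gauss(center_y, sigma)
        # Keep only particles inside the GPS-deviation circle.
        if math.hypot(x - center_x, y - center_y) <= gps_dev:
            particles.append((x, y))
    return particles
```

With the example values above, `init_particles(687116.344, 3509839.137, 1.8)` scatters the swarm within 1.8 m of the GPS fix.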
In this embodiment only 5 particle points are set for ease of understanding; in practice the method sets more than 1000 particles in the particle swarm, and the more particles are set, the more accurate the obtained positioning. The initial position of each particle in this example is shown in Table 1:
Table 1
Step 4, Gaussian noise is added to the heading information output by the IMU, the speed information output by the vehicle ODOM (odometer) and the position information of each particle in the particle swarm set in step 3, and they are input into a first constant turn rate and velocity (CTRV) model; the first CTRV model outputs the state information of each particle, comprising the coordinates of the particle in the UTM coordinate system and its heading information. The first CTRV model is:
yaw^i_t = yaw^i_{t-1} + yaw'_{v,t-1} × Δt
x^i_t = x^i_{t-1} + v'_t × cos(yaw^i_t) × Δt
y^i_t = y^i_{t-1} + v'_t × sin(yaw^i_t) × Δt

where yaw^i_t is the heading angle of the i-th particle at time t, yaw^i_{t-1} is its heading angle at time t-1, yaw'_{v,t-1} is the angular velocity of the vehicle heading angle output by the IMU at time t-1 with Gaussian noise added, Δt is the time difference, x^i_t and y^i_t are the abscissa and ordinate of the i-th particle in the UTM coordinate system at time t, x^i_{t-1} and y^i_{t-1} are the abscissa and ordinate at time t-1, and v'_t is the speed of the vehicle output by the vehicle ODOM (odometer) at time t with Gaussian noise added. When the position information of the particles is used as the initial state of the CTRV model, Gaussian noise must be added to the abscissa and ordinate of each particle.
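A sketch of the first CTRV propagation with Gaussian noise. The noise magnitudes are illustrative assumptions; the patent adds position noise only when initializing the particle state, while this sketch adds it on every call for brevity:

```python
import math
import random

def propagate_particles(particles, v, yaw_rate, dt,
                        pos_noise=0.1, v_noise=0.05, yaw_rate_noise=0.01, seed=0):
    """Advance each particle (x, y, yaw) one step with the first CTRV model.

    v: ODOM speed, yaw_rate: IMU heading-angle angular velocity, dt: time step.
    Gaussian noise (magnitudes are assumptions) is added to position, speed
    and yaw rate so the swarm carries the system's uncertainty.
    """
    rng = random.Random(seed)
    out = []
    for x, y, yaw in particles:
        yr = yaw_rate + rng.gauss(0.0, yaw_rate_noise)   # noisy IMU yaw rate
        vv = v + rng.gauss(0.0, v_noise)                 # noisy ODOM speed
        x += rng.gauss(0.0, pos_noise)                   # noisy particle position
        y += rng.gauss(0.0, pos_noise)
        yaw_t = yaw + yr * dt
        out.append((x + vv * math.cos(yaw_t) * dt,
                    y + vv * math.sin(yaw_t) * dt,
                    yaw_t))
    return out
```

Setting all noise parameters to zero reduces the update to the deterministic CTRV equations above.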
In this embodiment, the state information of each particle output by the first CTRV model is shown in Table 2:
Table 2
Step 5, according to the lane detection information acquired by the vehicle Camera, the weight of every particle point not in the lane where the vehicle is located is set to 0, and the weights of the remaining particle points are calculated respectively. The weight of a particle point is calculated as follows:
step 501, according to the formulas:

Δd_i = d_c - d_i^p
Δyaw_i = yaw_c + yaw_i^r - yaw_i^p

respectively calculating a position difference and a heading difference for each in-lane particle of the particle swarm; wherein Δd_i is the position difference of the i-th particle, d_c is the distance deviation of the current vehicle from the lane center line output by the Camera, d_i^p is the distance deviation of the i-th particle from the lane center line, Δyaw_i is the heading-angle difference of the i-th particle, yaw_c is the deviation of the current vehicle from the lane heading angle output by the Camera, yaw_i^r is the heading angle of the road in which the i-th particle is located, and yaw_i^p is the heading angle of the i-th particle.
Step 502, substituting the position difference value and the course difference value of each particle in the lane obtained in the step 501 into a probability density function, and obtaining the weight value of each particle point after normalizationw i ;
Wherein the content of the first and second substances,w i is as followsiThe weight of each of the particles is determined,σ d represents the variance of the distance deviation of the Camera detected vehicle from the lane center line,u d represents the mean of the range deviations of the Camera detected vehicle from the lane center line,σ yaw represents CamThe era detects the variance of the deviation of the heading angle of the vehicle from the lane,u yaw mean values representing the deviation of the Camera detected vehicle from the heading angle of the lane.
In this embodiment, the position deviation, heading deviation and weight of each particle are shown in Table 3:
Table 3
Step 6, from the state information of each particle obtained in step 4 and the weight of each particle obtained in step 5, the position information of the vehicle is obtained by a weighted average. The obtained position information comprises the abscissa, ordinate and heading angle of the vehicle in the UTM coordinate system. The finally obtained position information of the current vehicle is:
x coordinate: 687116.539
y coordinate: 3509844.203
heading angle: 89.119°
speed: 4.947 m/s
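The weighted-average fusion of step 6 can be sketched as follows (note that naively averaging heading angles is only safe away from the ±180° wrap-around, a caveat the patent does not discuss):

```python
def fuse_pose(states, weights):
    """Weighted-average fusion of particle states (step 6).

    states: list of (x, y, yaw) tuples from the first CTRV model;
    weights: normalized particle weights summing to 1.
    Returns the fused vehicle pose (x, y, yaw).
    """
    x = sum(w * s[0] for s, w in zip(states, weights))
    y = sum(w * s[1] for s, w in zip(states, weights))
    yaw = sum(w * s[2] for s, w in zip(states, weights))
    return x, y, yaw
```

Because particles outside the vehicle's lane were given zero weight in step 5, they contribute nothing to the fused pose.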
Example 2:
The meaning of the symbols in each formula of this embodiment has been given in the summary of the invention and embodiment 1 and is therefore not repeated. As shown in fig. 1, the multi-sensor fusion positioning method for an automatic driving scene disclosed in this embodiment specifically comprises the following steps:
step 1, a GPS (global positioning system), an IMU (inertial measurement unit), a Camera and an ODOM (odometer) installed on the vehicle respectively acquire real-time information of the vehicle; the GPS is arranged on the roof at the rotation center of the vehicle, the IMU is arranged at the rotation center of the vehicle, and the Camera is arranged on the front windshield on the central axis of the vehicle. The GPS mainly collects the longitude and latitude of the vehicle position; an RAC-P1 GPS is adopted in this embodiment. The IMU collects the heading information of the vehicle, comprising the heading angle and its angular velocity; an MTI-30 IMU is adopted in this embodiment. The Camera collects lane detection information, comprising the specific lane where the vehicle is located, the distance of the vehicle from the lane center line and the deviation of the vehicle from the lane heading angle; an MV-UBS131GC Camera is adopted in this embodiment. The vehicle ODOM collects the speed information of the vehicle.
Step 3, on the vector map, a circle is drawn with the vehicle position acquired in step 1 as its center and the GPS positioning deviation λ obtained in step 2 as its radius. As shown in fig. 2, the three parallel broken lines and the two solid lines parallel to them are the lane lines of the vector map; the white rectangle is the automatic driving vehicle; the sector of dotted lines is the visual lane detection result, from which the lane where the vehicle is located can be obtained; the gray circular area is the positioning information of the GPS, and the circle radius is the GPS positioning deviation. A particle swarm is set within the circle according to a Gaussian distribution; as shown in fig. 3, the solid black dots are the placed particle points. The vector map contains road information such as lane lines, lane widths and lane heading angles.
Step 4, Gaussian noise is added to the heading information output by the IMU, the speed information output by the vehicle ODOM (odometer) and the position information of each particle in the particle swarm set in step 3, and they are input into a first constant turn rate and velocity (CTRV) model; the first CTRV model outputs the state information of each particle, comprising the coordinates of the particle in the UTM coordinate system and its heading information. The first CTRV model is:
yaw^i_t = yaw^i_{t-1} + yaw'_{v,t-1} × Δt
x^i_t = x^i_{t-1} + v'_t × cos(yaw^i_t) × Δt
y^i_t = y^i_{t-1} + v'_t × sin(yaw^i_t) × Δt

When the position information of the particles is used as the initial state of the CTRV model, Gaussian noise must be added to the abscissa and ordinate of each particle.
Step 5, according to the lane detection information collected by the vehicle Camera, the weight of every particle point not in the lane where the vehicle is located is set to 0; as shown in fig. 4, the particles outside the lane where the vehicle is located are removed. The weights of the remaining particle points are then calculated respectively, as follows:
step 501, respectively calculating the position difference and heading difference of each in-lane particle of the particle swarm:

Δd_i = d_c - d_i^p
Δyaw_i = yaw_c + yaw_i^r - yaw_i^p
step 502, substituting the position difference and heading difference of each in-lane particle obtained in step 501 into the probability density function and normalizing to obtain the weight w_i of each particle point;
Step 6, the position information of the vehicle is calculated by a weighted average from the state information of each particle obtained in step 4 and the weight of each particle obtained in step 5; it comprises the abscissa, ordinate and heading angle of the vehicle in the UTM coordinate system, as shown in fig. 5.
Step 7, the position information of the vehicle obtained in step 6, the speed information acquired by the vehicle ODOM and the heading information of the vehicle acquired by the IMU are input into the high-frequency module, which outputs the position information of the vehicle; the high-frequency module calculates the position of the vehicle with a CTRV model, specifically as follows:
step 701, inputting the position information of the vehicle obtained in step 6, the currently acquired vehicle speed information and vehicle heading information into a second CTRV model to calculate the position information x_t, y_t, yaw_t of the vehicle at the next moment and output it, the second CTRV model being:

yaw_t = yaw_{t-1} + yaw'_{v,t-1} × Δt
x_t = x_{t-1} + v'_t × cos(yaw_t) × Δt
y_t = y_{t-1} + v'_t × sin(yaw_t) × Δt

where yaw_t is the heading angle of the vehicle at time t (output of the second CTRV model); yaw_{t-1} is the heading angle at time t-1 (input); yaw'_{v,t-1} is the angular velocity of the vehicle heading angle output by the IMU at time t-1 (input); x_t and y_t are the abscissa and ordinate of the vehicle in the UTM coordinate system at time t (outputs); v'_t is the speed of the vehicle output by the vehicle ODOM (odometer) at time t (input); x_{t-1} and y_{t-1} are the abscissa and ordinate at time t-1 (inputs). When step 6 has produced new vehicle position information, x_{t-1} and y_{t-1} directly use the vehicle position obtained in step 6; if no new vehicle position information has been obtained, the coordinates output by the second CTRV model at the previous moment are used directly for the iteration.
Step 702, detecting whether new vehicle speed information and vehicle heading information are acquired; if the new vehicle speed information and the heading information of the vehicle are collected, step 703 is performed, and if the new vehicle speed information and the heading information of the vehicle are not collected, step 702 is performed.
Step 703 of detecting whether or not there is any output new position information of the vehicle in step 6, and if not, obtaining the position information of the vehicle using step 701x t ,y t ,yaw t Inputting the new vehicle speed information and the vehicle course information collected at the moment into a second CTRV operation model in combination with input data to calculate the position information of the vehicle at the next momentx t ,y t ,yaw t And outputs it, and then repeats step 702, and if the position information of the new vehicle is output in step 6, repeats step 701 ~ 702.
Since the Camera acquires lane detection information at 10 Hz while the IMU and ODOM sample at 50 Hz, adding the high-frequency module effectively raises the output frequency of the vehicle position information of the whole system.
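The 10 Hz correct / 50 Hz predict interplay of steps 701 to 703 can be sketched as a small class (the class name and method structure are illustrative, not from the patent):

```python
import math

class HighFrequencyModule:
    """Predict 50 Hz poses between 10 Hz particle-filter fixes (steps 701-703).

    Each new IMU/ODOM sample advances the pose with the second CTRV model;
    whenever a fresh particle-filter fix arrives, it replaces the predicted
    pose and prediction restarts from it. A sketch, not production code.
    """

    def __init__(self, x, y, yaw):
        self.x, self.y, self.yaw = x, y, yaw

    def correct(self, x, y, yaw):
        # New particle-filter output (step 703): restart prediction from it.
        self.x, self.y, self.yaw = x, y, yaw

    def predict(self, v, yaw_rate, dt):
        # Second CTRV model (step 701): advance one IMU/ODOM period dt.
        self.yaw += yaw_rate * dt
        self.x += v * math.cos(self.yaw) * dt
        self.y += v * math.sin(self.yaw) * dt
        return self.x, self.y, self.yaw
```

Calling `predict` at 50 Hz (dt = 0.02 s) between 10 Hz `correct` calls yields position output at the IMU/ODOM rate.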
Claims (6)
1. A multi-sensor fusion positioning method for an automatic driving scene is characterized by comprising the following steps:
step 1, a vehicle-mounted sensor collects the driving information of a vehicle in real time; the driving information of the vehicle comprises longitude and latitude of the vehicle, speed information of the vehicle, course information, a lane where the vehicle is located and a distance between the vehicle and a center line of the lane where the vehicle is located;
step 2, on a vector map, drawing a circle with the longitude and latitude of the vehicle acquired in step 1 as its center and the GPS positioning deviation as its radius, and setting a particle swarm within the circle according to a Gaussian distribution; the vector map comprises lane line, lane width and lane heading angle information;
step 3, adding Gaussian noise to the heading information and speed information acquired by the sensors and to the position information of each particle in the particle swarm set in step 2, and inputting them into a first constant turn rate and velocity (CTRV) model; the first CTRV model outputs the state information of each particle, which comprises the coordinates of the particle in the UTM coordinate system and its heading information;
step 4, setting the weight value of the particles which are not in the lane where the vehicle is located to be 0; respectively calculating the weight values of the remaining particle points;
and 5, obtaining the position information of the vehicle by a weighted average method according to the state information of each particle obtained in the step 3 and the weight value of each particle obtained in the step 4.
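The five steps of claim 1 can be sketched in Python as a single filter cycle. This is a hedged reconstruction, not the patented implementation: the particle count, noise magnitudes, heading spread, and the stubbed-out lane weighting of step 4 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrv_step(x, y, yaw, v, yaw_rate, dt):
    # Constant rotation rate and speed update: heading first, then position.
    yaw_new = yaw + yaw_rate * dt
    x_new = x + v * np.cos(yaw_new) * dt
    y_new = y + v * np.sin(yaw_new) * dt
    return x_new, y_new, yaw_new

def localize_once(gps_xy, gps_radius, v, yaw, yaw_rate, dt, n=500):
    # Step 2: scatter a Gaussian particle swarm around the GPS fix,
    # with the GPS positioning deviation as the spread.
    xs = rng.normal(gps_xy[0], gps_radius, n)
    ys = rng.normal(gps_xy[1], gps_radius, n)
    yaws = rng.normal(yaw, 0.05, n)  # heading spread: assumed value

    # Step 3: propagate every particle through the CTRV model with
    # Gaussian noise added to the speed and turn-rate inputs.
    v_noisy = v + rng.normal(0.0, 0.1, n)
    w_noisy = yaw_rate + rng.normal(0.0, 0.01, n)
    xs, ys, yaws = ctrv_step(xs, ys, yaws, v_noisy, w_noisy, dt)

    # Step 4 (stubbed): off-lane particles would get weight 0 and the rest
    # a camera-based Gaussian weight; uniform weights stand in here.
    weights = np.full(n, 1.0 / n)

    # Step 5: fused pose is the weighted average of the particle states.
    return float(weights @ xs), float(weights @ ys), float(weights @ yaws)
```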
2. The multi-sensor fusion localization method for autonomous driving scenarios of claim 1, characterized in that: in step 1, a plurality of sensors are adopted, each sensor having a different data source.
3. The multi-sensor fusion localization method for autonomous driving scenarios of claim 1, characterized in that: the GPS positioning deviation in step 2 is obtained by calculation with a formula in which λ is the GPS positioning deviation, η denotes the GPS positioning accuracy, θ is the number of received satellites, h is the horizontal precision factor, β takes values in the range 0.55 ~ 0.65, σ is the stability coefficient, and μ is the horizontal precision coefficient.
4. The multi-sensor fusion localization method for autonomous driving scenarios of claim 1, characterized in that: the method for obtaining the weight value of the particle point in the step 4 comprises the following steps:
step 401, according to the formula:
Δd_i = d_c − d_i^p;
Δyaw_i = yaw_c + yaw_i^r − yaw_i^p;
respectively calculating the position difference and the heading difference of each particle in the particle swarm within the lane; wherein Δd_i denotes the position difference of the i-th particle, d_c denotes the distance deviation of the current vehicle from the lane center line output by the Camera, d_i^p denotes the distance deviation of the i-th particle from the lane center line, Δyaw_i denotes the heading-angle difference of the i-th particle, yaw_c denotes the deviation of the current vehicle from the lane heading angle output by the Camera, yaw_i^r denotes the heading angle of the road where the i-th particle is located, and yaw_i^p denotes the heading angle of the i-th particle;
step 402, substituting the position difference and heading difference of each particle within the lane obtained in step 401 into a probability density function, and obtaining the weight w_i of each particle after normalization;
wherein w_i is the weight of the i-th particle, σ_d denotes the variance of the distance deviation of the Camera-detected vehicle from the lane center line, u_d denotes the mean of the distance deviation of the Camera-detected vehicle from the lane center line, σ_yaw denotes the variance of the deviation of the Camera-detected vehicle from the lane heading angle, and u_yaw denotes the mean of the deviation of the Camera-detected vehicle from the lane heading angle.
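The weight computation of steps 401 and 402 amounts to evaluating Gaussian probability densities on the two differences and normalizing. The probability density function itself is not reproduced in the text above, so the sketch below assumes the standard normal PDF with σ_d and σ_yaw treated as standard deviations and the errors treated as independent; these are assumptions, not confirmed by the claim.

```python
import math

def particle_weight(d_c, d_p, yaw_c, yaw_r, yaw_p,
                    sigma_d, u_d, sigma_yaw, u_yaw):
    """Unnormalized weight of one particle (claim 4); sigma_*/u_* are the
    camera error statistics named in the claim, here read as standard
    deviation and mean of a normal distribution (an assumption)."""
    delta_d = d_c - d_p                 # step 401: position difference
    delta_yaw = yaw_c + yaw_r - yaw_p   # step 401: heading difference
    # Step 402: Gaussian PDF of each difference, assumed independent.
    pd = math.exp(-((delta_d - u_d) ** 2) / (2 * sigma_d ** 2)) \
        / (math.sqrt(2 * math.pi) * sigma_d)
    pyaw = math.exp(-((delta_yaw - u_yaw) ** 2) / (2 * sigma_yaw ** 2)) \
        / (math.sqrt(2 * math.pi) * sigma_yaw)
    return pd * pyaw

def normalize(weights):
    # Scale the raw weights so they sum to 1.
    s = sum(weights)
    return [w / s for w in weights]
```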
5. The multi-sensor fusion localization method for autonomous driving scenarios of claim 1, characterized by further comprising a high-frequency module: the position information of the vehicle obtained in step 5, together with real-time vehicle speed information and vehicle heading information, is input into the high-frequency module, and the high-frequency module outputs the vehicle position information; the high-frequency module calculates the position information of the vehicle through a second constant rotation rate and speed model.
6. The multi-sensor fusion localization method for autonomous driving scenarios of claim 5, characterized in that: the high frequency module operation includes the steps of:
step 601, inputting the position information of the vehicle obtained in step 5, the currently collected vehicle speed information and the vehicle heading information into the second constant rotation rate and speed model to calculate and output the position information x_t, y_t, yaw_t of the vehicle at the next moment, wherein the second constant rotation rate and speed model is:
yaw_t = yaw_{t−1} + yaw'_{t−1} × Δt
x_t = x_{t−1} + v'_t × cos(yaw_t) × Δt
y_t = y_{t−1} + v'_t × sin(yaw_t) × Δt
in the formulas, yaw_t denotes the heading angle of the vehicle at time t, yaw_{t−1} denotes the heading angle of the vehicle at time t−1, yaw'_{t−1} denotes the angular velocity of the vehicle heading angle output by the IMU at time t−1, x_t denotes the abscissa of the vehicle in the UTM coordinate system at time t, x_{t−1} denotes the abscissa of the vehicle in the UTM coordinate system at time t−1, v'_t denotes the speed of the vehicle output by the vehicle odometer at time t, y_t denotes the ordinate of the vehicle in the UTM coordinate system at time t, and y_{t−1} denotes the ordinate of the vehicle in the UTM coordinate system at time t−1;
step 602, detecting whether new vehicle speed information and vehicle heading information have been collected; if so, executing step 603; if not, repeating step 602;
step 603, detecting whether step 5 has output new position information of the vehicle; if not, taking the position information x_t, y_t, yaw_t of the vehicle obtained in step 601, combining it with the newly collected vehicle speed information and vehicle heading information as input data, inputting the data into the second constant rotation rate and speed model to calculate the position information x_t, y_t, yaw_t of the vehicle at the next moment and outputting it, and then repeating step 602; if step 5 has output new position information of the vehicle, repeating steps 601 to 602.
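The three equations of the second constant rotation rate and speed model in step 601 map directly to code. A minimal sketch, using the claim's update order (heading first, then position with the updated heading); the function name is an assumption:

```python
import math

def ctrv_update(x_prev, y_prev, yaw_prev, v, yaw_rate, dt):
    """One step of the second constant rotation rate and speed model:
    yaw_t   = yaw_{t-1} + yaw'_{t-1} * dt      (IMU angular velocity)
    x_t     = x_{t-1} + v'_t * cos(yaw_t) * dt (odometer speed)
    y_t     = y_{t-1} + v'_t * sin(yaw_t) * dt
    Coordinates are in the UTM frame, as in the claim."""
    yaw_t = yaw_prev + yaw_rate * dt
    x_t = x_prev + v * math.cos(yaw_t) * dt
    y_t = y_prev + v * math.sin(yaw_t) * dt
    return x_t, y_t, yaw_t
```

Note the design choice implied by the equations: the position update uses the already-updated heading yaw_t, not the previous heading, which keeps the heading and position consistent within a single 50 Hz tick.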
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010168559.0A CN111307162B (en) | 2019-11-25 | 2019-11-25 | Multi-sensor fusion positioning method for automatic driving scene |
CN201911165058.0A CN110631593B (en) | 2019-11-25 | 2019-11-25 | Multi-sensor fusion positioning method for automatic driving scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911165058.0A CN110631593B (en) | 2019-11-25 | 2019-11-25 | Multi-sensor fusion positioning method for automatic driving scene |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010168559.0A Division CN111307162B (en) | 2019-11-25 | 2019-11-25 | Multi-sensor fusion positioning method for automatic driving scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110631593A true CN110631593A (en) | 2019-12-31 |
CN110631593B CN110631593B (en) | 2020-02-21 |
Family
ID=68979526
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911165058.0A Active CN110631593B (en) | 2019-11-25 | 2019-11-25 | Multi-sensor fusion positioning method for automatic driving scene |
CN202010168559.0A Active CN111307162B (en) | 2019-11-25 | 2019-11-25 | Multi-sensor fusion positioning method for automatic driving scene |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010168559.0A Active CN111307162B (en) | 2019-11-25 | 2019-11-25 | Multi-sensor fusion positioning method for automatic driving scene |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN110631593B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111586632A (en) * | 2020-05-06 | 2020-08-25 | 浙江大学 | Cooperative neighbor vehicle positioning method based on communication sensing asynchronous data fusion |
CN111752286A (en) * | 2020-03-09 | 2020-10-09 | 西南科技大学 | Automatic mooring method for small unmanned ship |
CN111813127A (en) * | 2020-07-28 | 2020-10-23 | 丹阳市安悦信息技术有限公司 | Automatic automobile transfer robot system of driving formula |
CN112505718A (en) * | 2020-11-10 | 2021-03-16 | 奥特酷智能科技(南京)有限公司 | Positioning method, system and computer readable medium for autonomous vehicle |
CN113188539A (en) * | 2021-04-27 | 2021-07-30 | 深圳亿嘉和科技研发有限公司 | Combined positioning method of inspection robot |
CN113494912A (en) * | 2020-03-20 | 2021-10-12 | Abb瑞士股份有限公司 | Position estimation of a vehicle based on virtual sensor responses |
CN114323033A (en) * | 2021-12-29 | 2022-04-12 | 北京百度网讯科技有限公司 | Positioning method and device based on lane lines and feature points and automatic driving vehicle |
CN116222588A (en) * | 2023-05-08 | 2023-06-06 | 睿羿科技(山东)有限公司 | Positioning method for integrating GPS and vehicle-mounted odometer |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018176000A1 (en) | 2017-03-23 | 2018-09-27 | DeepScale, Inc. | Data synthesis for autonomous control systems |
US11157441B2 (en) | 2017-07-24 | 2021-10-26 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting |
US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests |
US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit |
US10671349B2 (en) | 2017-07-24 | 2020-06-02 | Tesla, Inc. | Accelerated mathematical engine |
US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array |
US11215999B2 (en) | 2018-06-20 | 2022-01-04 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving |
US11361457B2 (en) | 2018-07-20 | 2022-06-14 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems |
US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices |
SG11202103493QA (en) | 2018-10-11 | 2021-05-28 | Tesla Inc | Systems and methods for training machine models with augmented data |
US11196678B2 (en) | 2018-10-25 | 2021-12-07 | Tesla, Inc. | QOS manager for system on a chip communications |
US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles |
US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
US11150664B2 (en) | 2019-02-01 | 2021-10-19 | Tesla, Inc. | Predicting three-dimensional features for autonomous driving |
US10997461B2 (en) | 2019-02-01 | 2021-05-04 | Tesla, Inc. | Generating ground truth for machine learning from time series elements |
US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target |
US10956755B2 (en) | 2019-02-19 | 2021-03-23 | Tesla, Inc. | Estimating object properties using visual image data |
CN112985427B (en) * | 2021-04-29 | 2021-07-30 | 腾讯科技(深圳)有限公司 | Lane tracking method and device for vehicle, computer equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103472459A (en) * | 2013-08-29 | 2013-12-25 | 镇江青思网络科技有限公司 | GPS (Global Positioning System)-pseudo-range-differential-based cooperative positioning method for vehicles |
CN105270410A (en) * | 2014-07-16 | 2016-01-27 | 通用汽车环球科技运作有限责任公司 | Accurate curvature estimation algorithm for path planning of autonomous driving vehicle |
CN105628033A (en) * | 2016-02-26 | 2016-06-01 | 广西鑫朗通信技术有限公司 | Map matching method based on road connection relationship |
CN107782321A (en) * | 2017-10-10 | 2018-03-09 | 武汉迈普时空导航科技有限公司 | A kind of view-based access control model and the Combinated navigation method of high-precision map lane line constraint |
CN108459618A (en) * | 2018-03-15 | 2018-08-28 | 河南大学 | A kind of flight control system and method that unmanned plane automatically launches mobile platform |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104076382B (en) * | 2014-07-22 | 2016-11-23 | 中国石油大学(华东) | A kind of vehicle seamless positioning method based on Multi-source Information Fusion |
CN108225341B (en) * | 2016-12-14 | 2021-06-18 | 法法汽车(中国)有限公司 | Vehicle positioning method |
CN106767853B (en) * | 2016-12-30 | 2020-01-21 | 中国科学院合肥物质科学研究院 | Unmanned vehicle high-precision positioning method based on multi-information fusion |
CN107161141B (en) * | 2017-03-08 | 2023-05-23 | 深圳市速腾聚创科技有限公司 | Unmanned automobile system and automobile |
CN109556615B (en) * | 2018-10-10 | 2022-10-04 | 吉林大学 | Driving map generation method based on multi-sensor fusion cognition of automatic driving |
US10373323B1 (en) * | 2019-01-29 | 2019-08-06 | StradVision, Inc. | Method and device for merging object detection information detected by each of object detectors corresponding to each camera nearby for the purpose of collaborative driving by using V2X-enabled applications, sensor fusion via multiple vehicles |
CN110440801B (en) * | 2019-07-08 | 2021-08-13 | 浙江吉利控股集团有限公司 | Positioning perception information acquisition method, device and system |
2019
- 2019-11-25 CN CN201911165058.0A patent/CN110631593B/en active Active
- 2019-11-25 CN CN202010168559.0A patent/CN111307162B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103472459A (en) * | 2013-08-29 | 2013-12-25 | 镇江青思网络科技有限公司 | GPS (Global Positioning System)-pseudo-range-differential-based cooperative positioning method for vehicles |
CN105270410A (en) * | 2014-07-16 | 2016-01-27 | 通用汽车环球科技运作有限责任公司 | Accurate curvature estimation algorithm for path planning of autonomous driving vehicle |
CN105628033A (en) * | 2016-02-26 | 2016-06-01 | 广西鑫朗通信技术有限公司 | Map matching method based on road connection relationship |
CN107782321A (en) * | 2017-10-10 | 2018-03-09 | 武汉迈普时空导航科技有限公司 | A kind of view-based access control model and the Combinated navigation method of high-precision map lane line constraint |
CN108459618A (en) * | 2018-03-15 | 2018-08-28 | 河南大学 | A kind of flight control system and method that unmanned plane automatically launches mobile platform |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111752286A (en) * | 2020-03-09 | 2020-10-09 | 西南科技大学 | Automatic mooring method for small unmanned ship |
CN111752286B (en) * | 2020-03-09 | 2022-03-25 | 西南科技大学 | Automatic mooring method for small unmanned ship |
CN113494912A (en) * | 2020-03-20 | 2021-10-12 | Abb瑞士股份有限公司 | Position estimation of a vehicle based on virtual sensor responses |
CN111586632A (en) * | 2020-05-06 | 2020-08-25 | 浙江大学 | Cooperative neighbor vehicle positioning method based on communication sensing asynchronous data fusion |
CN111586632B (en) * | 2020-05-06 | 2021-09-07 | 浙江大学 | Cooperative neighbor vehicle positioning method based on communication sensing asynchronous data fusion |
CN111813127A (en) * | 2020-07-28 | 2020-10-23 | 丹阳市安悦信息技术有限公司 | Automatic automobile transfer robot system of driving formula |
CN112505718A (en) * | 2020-11-10 | 2021-03-16 | 奥特酷智能科技(南京)有限公司 | Positioning method, system and computer readable medium for autonomous vehicle |
CN112505718B (en) * | 2020-11-10 | 2022-03-01 | 奥特酷智能科技(南京)有限公司 | Positioning method, system and computer readable medium for autonomous vehicle |
CN113188539A (en) * | 2021-04-27 | 2021-07-30 | 深圳亿嘉和科技研发有限公司 | Combined positioning method of inspection robot |
CN114323033A (en) * | 2021-12-29 | 2022-04-12 | 北京百度网讯科技有限公司 | Positioning method and device based on lane lines and feature points and automatic driving vehicle |
CN114323033B (en) * | 2021-12-29 | 2023-08-29 | 北京百度网讯科技有限公司 | Positioning method and equipment based on lane lines and feature points and automatic driving vehicle |
CN116222588A (en) * | 2023-05-08 | 2023-06-06 | 睿羿科技(山东)有限公司 | Positioning method for integrating GPS and vehicle-mounted odometer |
CN116222588B (en) * | 2023-05-08 | 2023-08-04 | 睿羿科技(山东)有限公司 | Positioning method for integrating GPS and vehicle-mounted odometer |
Also Published As
Publication number | Publication date |
---|---|
CN111307162B (en) | 2020-09-25 |
CN111307162A (en) | 2020-06-19 |
CN110631593B (en) | 2020-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110631593B (en) | Multi-sensor fusion positioning method for automatic driving scene | |
CN110160542B (en) | Method and device for positioning lane line, storage medium and electronic device | |
CN110307836B (en) | Accurate positioning method for welt cleaning of unmanned cleaning vehicle | |
JP5162849B2 (en) | Fixed point position recorder | |
CN108885106A (en) | It is controlled using the vehicle part of map | |
CN110208842A (en) | Vehicle high-precision locating method under a kind of car networking environment | |
US9618344B2 (en) | Digital map tracking apparatus and methods | |
CN108873038A (en) | Autonomous parking localization method and positioning system | |
CN110361008B (en) | Positioning method and device for automatic parking of underground garage | |
CN107247275B (en) | Urban GNSS vulnerability monitoring system and method based on bus | |
CN105675006B (en) | A kind of route deviation detection method | |
CN112904395B (en) | Mining vehicle positioning system and method | |
CN110057356B (en) | Method and device for positioning vehicles in tunnel | |
CN109696177B (en) | Device for compensating gyro sensing value, system having the same and method thereof | |
WO2013149149A1 (en) | Method to identify driven lane on map and improve vehicle position estimate | |
JP4596566B2 (en) | Self-vehicle information recognition device and self-vehicle information recognition method | |
CN113220013A (en) | Multi-rotor unmanned aerial vehicle tunnel hovering method and system | |
CN110940344B (en) | Low-cost sensor combination positioning method for automatic driving | |
US10895460B2 (en) | System and method for generating precise road lane map data | |
CN111426320A (en) | Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter | |
CN110018503B (en) | Vehicle positioning method and positioning system | |
CN114323003A (en) | Underground mine fusion positioning method based on UMB, IMU and laser radar | |
CN112525207B (en) | Unmanned vehicle positioning method based on vehicle pitch angle map matching | |
CN104535083A (en) | Distribution method of inertial-navigation positional accuracy testing ground | |
CN115542277B (en) | Radar normal calibration method, device, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CP02 | Change in the address of a patent holder | ||
CP02 | Change in the address of a patent holder |
Address after: 210012 Room 401-404, Building 5, Chuqiaocheng, No. 57 Andemen Street, Yuhuatai District, Nanjing, Jiangsu Province
Patentee after: AUTOCORE INTELLIGENT TECHNOLOGY (NANJING) Co.,Ltd.
Address before: 211800 Building 12-289, 29 Buyue Road, Qiaolin Street, Jiangbei New District, Pukou District, Nanjing, Jiangsu Province
Patentee before: AUTOCORE INTELLIGENT TECHNOLOGY (NANJING) Co.,Ltd.