CN106840179B: Intelligent vehicle positioning method based on multi-sensor information fusion (Google Patents)
Publication number: CN106840179B (application CN201710131878.2A; authority: CN, China)
Prior art keywords: intelligent vehicle, coordinate, information, moment, pose
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications

 G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
 G01C21/30: Map- or contour-matching
 G01C21/32: Structuring or formatting of map data
 G01C21/3415: Dynamic rerouting, e.g. recalculating the route when the user deviates from the calculated route or after detecting real-time traffic data or accidents
 G01S19/45: Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
Abstract
The invention discloses an intelligent vehicle positioning method based on multi-sensor information fusion, which effectively avoids the drift caused by relying on GPS positioning alone and thereby improves the positioning accuracy of the vehicle. The method processes lidar data and encoder data to obtain the position of the vehicle in a local map through simultaneous localization and mapping (SLAM), obtains the position of the vehicle in a geographic (latitude/longitude) coordinate system through GPS positioning, unifies the two positions in a geodetic coordinate system through coordinate conversion, and fuses the two position estimates with a Kalman filter to obtain an accurate estimate of the vehicle position. The invention realizes accurate global positioning of the vehicle based on the vehicle-mounted encoder, the laser radar (lidar) and the GPS.
Description
Technical Field
The invention relates to the field of intelligent vehicle positioning, in particular to an intelligent vehicle positioning method based on multi-sensor information fusion.
Background
One of the prerequisites for autonomous or assisted driving of an intelligent vehicle is accurate knowledge of the vehicle's position. In this field, the most widely used positioning methods are GPS alone or GPS fused with odometry. Positioning with a GPS device alone often drifts because of device error, and in some environments, such as tunnels and urban areas with tall buildings, the GPS signal weakens rapidly or is lost entirely. GPS-odometer fusion leans on the odometry when the GPS signal is weak, reducing the error caused by relying on GPS alone; but because the odometry accumulates error and drifts as the vehicle drives, odometric positioning is also inaccurate.
The present method uses simultaneous localization and mapping, correcting the odometry with lidar data to obtain a local position estimate of the intelligent vehicle, and combines this with GPS positioning through coordinate conversion and Kalman filtering to obtain an accurate position estimate. Compared with GPS-odometer fusion, the method completes the positioning of the intelligent vehicle more reliably and provides a guarantee for its autonomous or assisted driving.
Disclosure of Invention
The invention aims to provide an intelligent vehicle positioning method based on multi-sensor information fusion, so as to solve the prior-art problem of inaccurate GPS positioning in the field of autonomous or assisted driving of intelligent vehicles.
To achieve this purpose, the technical scheme adopted by the invention is as follows:
An intelligent vehicle positioning method based on multi-sensor information fusion, characterized by comprising the following steps:
(1) Using data collected by the photoelectric encoder and the lidar of the intelligent vehicle, obtain the position of the vehicle in a local map constructed by simultaneous localization and mapping (SLAM);
(2) Obtain the position of the vehicle in a geographic coordinate system through GPS positioning;
(3) Unify the position obtained in step (1), in the local map coordinate system, and the position obtained in step (2), in the geographic coordinate system, into a geodetic coordinate system through coordinate conversion;
(4) Based on the coordinate conversion of step (3), fuse the two positions with a Kalman filter to obtain an accurate position estimate of the intelligent vehicle.
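The four steps above can be sketched as a single fusion cycle. This is a minimal illustrative skeleton, not the patent's implementation; every name (`Pose`, `fuse_cycle`, the callback parameters) is a hypothetical placeholder:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Planar pose (x, y, heading); hypothetical helper type."""
    x: float
    y: float
    theta: float

def fuse_cycle(slam_pose_map, gps_fix, map_to_geodetic, gps_to_geodetic, kalman):
    """One positioning cycle covering steps (1)-(4) of the method.
    The three callbacks stand in for the coordinate conversions of
    step (3) and the Kalman filter of step (4)."""
    p_slam = map_to_geodetic(slam_pose_map)   # step (1) pose, unified per step (3)
    p_gps = gps_to_geodetic(gps_fix)          # step (2) fix, unified per step (3)
    return kalman(p_slam, p_gps)              # step (4): fused estimate
```

Here the SLAM pose plays the role of the filter's prediction and the GPS fix that of its observation, mirroring step (4).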
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that the specific process of step (1) is as follows:
(1.1) Establish the odometer coordinate system at vehicle start-up: the center of the vehicle body is the origin O_o; the X_o axis points in the forward direction of the vehicle; the Y_o axis is perpendicular to X_o and points to the left of the body; the Z_o axis points vertically upward. Since only motion of the intelligent vehicle in the horizontal plane is considered and there is no pitch or roll, the Z coordinate is always 0 and only the yaw angle changes, giving the pose model (x, y, θ) of the vehicle. The odometer information is computed from the photoelectric encoders on the left and right rear wheels as follows:
(1.1.1) At time 0, the pose (x_0, y_0, θ_0) of the intelligent vehicle in the odometer coordinate system is (0, 0, 0).
(1.1.2) Within one sampling period, the wheel displacement increment ΔS in the odometer coordinate system is obtained by the following formula (the equation is missing from the source text; the form below is reconstructed from the listed variables):
ΔS = π · D · ΔQ / (N · K)
where ΔQ is the pulse increment output by the photoelectric encoder disc during the sampling period T_s, D is the wheel diameter, N is the total number of gratings on the encoder disc, and K is the reduction ratio of the photoelectric encoder;
(1.1.3) Let the distance between the left and right wheels be ω, and let the left- and right-wheel displacement increments in one sampling period be ΔS_L and ΔS_R, with ΔD_k = (ΔS_L + ΔS_R)/2 and Δθ_k = (ΔS_R - ΔS_L)/ω (definitions implied by the source). The pose change of the vehicle from time k-1, S_{k-1} = (x_{k-1}, y_{k-1}, θ_{k-1}), to time k, S_k = (x_k, y_k, θ_k), is:
x_k = x_{k-1} + ΔD_k · cos(θ_{k-1} + Δθ_k)
y_k = y_{k-1} + ΔD_k · sin(θ_{k-1} + Δθ_k)
θ_k = θ_{k-1} + Δθ_k
(1.1.4) The odometer information of the intelligent vehicle at time k is obtained by accumulating the above pose changes;
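Steps (1.1.1) through (1.1.4) amount to differential-drive dead reckoning. A minimal sketch, assuming the reconstructed ΔS formula and the midpoint definitions of ΔD_k and Δθ_k above; the function names are illustrative:

```python
import math

def wheel_increment(dq, diameter, gratings, reduction):
    """Displacement of one wheel in a sampling period (step 1.1.2):
    dq pulses from the encoder disc, wheel diameter D, N gratings per
    revolution, reduction ratio K. Formula reconstructed, not verbatim."""
    return math.pi * diameter * dq / (gratings * reduction)

def update_pose(x, y, theta, ds_left, ds_right, track_width):
    """One dead-reckoning step (1.1.3): advance the pose by the wheel
    displacement increments; track_width is the wheel distance omega."""
    d_d = (ds_left + ds_right) / 2.0
    d_theta = (ds_right - ds_left) / track_width
    x += d_d * math.cos(theta + d_theta)
    y += d_d * math.sin(theta + d_theta)
    return x, y, theta + d_theta
```

Accumulating `update_pose` over successive sampling periods yields the odometer information of step (1.1.4).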
(1.2) Acquire and record the observation of the vehicle's surroundings at time k with the vehicle-mounted lidar;
(1.3) Establish a map coordinate system sharing the axes and origin of the odometer coordinate system, and perform mapping and localization with a SLAM algorithm in that coordinate system. With the odometer information of step (1.1) as the control input of the SLAM algorithm and the lidar scan of step (1.2) as its observation, a particle-filter-based SLAM algorithm yields an optimized pose of the vehicle in the map coordinate system; at the same time, the obstacle information around the vehicle at time k is added to the map according to that optimized pose.
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that, in step (1.3), when the particle filter is used to optimize the pose and map of the intelligent vehicle, the pose at the current moment is computed from the pose at the previous moment: given the pose x_{k-1} at time k-1, the odometer information u_k serving as the control at time k, and the observation z_k at time k, estimate the posterior probability of the current pose x_k;
In the particle filter, the posterior distribution is obtained with a sequential importance sampling algorithm and a sampling importance resampling algorithm: it is approximated by weighting particles sampled from a proposal distribution.
The method specifically comprises the following steps:
(1.3.1) Initialization: create N particles, each representing a candidate current pose of the vehicle; draw the initial samples and assign each an equal weight;
(1.3.2) Importance sampling: at time k, compute the pose distribution of the vehicle from the control information u_k and draw a new sample for each particle;
(1.3.3) Weighting: from the observation z_k at time k and each particle's weight at the previous moment, compute the particle's importance weight at time k, then normalize the weights;
(1.3.4) Resampling: duplicate particles with high importance weight, delete particles with low importance weight, and compute the posterior probability.
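Steps (1.3.1) through (1.3.4) are the standard sampling-importance-resampling loop. A generic sketch with placeholder motion and observation models (in the patent these come from the odometry and the lidar scan):

```python
import random

def sir_step(particles, weights, motion, likelihood, control, observation):
    """One SIR particle filter step: propagate (1.3.2), weight and
    normalize (1.3.3), resample back to uniform weights (1.3.4)."""
    # importance sampling: draw each particle's new pose from the motion model
    particles = [motion(p, control) for p in particles]
    # weight by the observation likelihood, then normalize
    weights = [w * likelihood(observation, p) for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resampling: heavy particles are duplicated, light ones dropped
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

The weighted particle set before resampling approximates the posterior over the current pose x_k.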
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that, in step (2), the raw latitude, longitude and altitude coordinates of the vehicle in the WGS84 geographic coordinate system are obtained through GPS positioning.
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that the coordinate conversion and unification of step (3) proceed as follows:
(3.1) Convert the raw latitude and longitude of the vehicle in the WGS84 coordinate system to a plane coordinate system with the universal Gaussian (Gauss-Krüger) projection, then translate into an East-North-Up geodetic coordinate system (rendered "northeast sky" in the source) whose origin is the vehicle-body center at the start position; take this coordinate system as the main coordinate system;
(3.2) Convert the map coordinate system of the vehicle's SLAM algorithm into the geodetic coordinate system, as follows: when the vehicle is started, obtain its longitude, latitude and heading angle θ_0 at the start moment from the GPS device, and obtain its geodetic coordinates (x_0, y_0) at that moment by the method of step (3.1). A point (x_m, y_m) in the map coordinate system then has geodetic coordinates (x_p, y_p) given by (the formula is missing from the source; the standard rotation-plus-translation form is):
x_p = x_0 + x_m · cos θ_0 - y_m · sin θ_0
y_p = y_0 + x_m · sin θ_0 + y_m · cos θ_0
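The transformation of step (3.2) is a plane rotation by the start heading followed by a translation to the start position. A sketch under that reconstruction (the function name is illustrative, and the heading is assumed counter-clockwise from the geodetic x axis):

```python
import math

def map_to_geodetic(x_m, y_m, x0, y0, theta0):
    """Map a SLAM map-frame point (x_m, y_m) into the geodetic frame,
    given the vehicle's start position (x0, y0) and heading theta0."""
    x_p = x0 + x_m * math.cos(theta0) - y_m * math.sin(theta0)
    y_p = y0 + x_m * math.sin(theta0) + y_m * math.cos(theta0)
    return x_p, y_p
```

The projection of step (3.1), from WGS84 latitude/longitude to plane coordinates, is assumed to have been applied already to obtain (x0, y0).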
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that, in the Kalman filtering model of step (4), the pose of the vehicle in the geodetic coordinate system obtained by the SLAM algorithm in step (1) is used as the predicted value, and the pose in the geodetic coordinate system obtained by GPS in step (2) is used as the observed value.
The state transition equation and observation equation of the state space are defined as:
X(k) = A·X(k-1) + W(k),
Z(k) = H·X(k) + V(k),
where X(k) is the pose state vector at time k, A is the process matrix, H is the measurement matrix, W(k) is the process noise with covariance Q, and V(k) is the measurement noise with covariance R. The Kalman filtering proceeds as follows:
The prediction equations are:
X(k|k-1) = A·X(k-1|k-1),
P(k|k-1) = A·P(k-1|k-1)·A' + Q,
The update equations are (the gain equation, absent from the source, is the standard form):
Kg(k) = P(k|k-1)·H' · (H·P(k|k-1)·H' + R)^(-1),
X(k|k) = X(k|k-1) + Kg(k)·(Z(k) - H·X(k|k-1)),
P(k|k) = (I - Kg(k)·H)·P(k|k-1),
where:
X(k|k-1) is the predicted pose state vector at time k;
X(k-1|k-1) is the optimal pose state at time k-1;
Kg(k) is the Kalman gain at time k;
X(k|k) is the optimal pose state estimate at time k;
Q is the covariance of the system process noise W(k);
R is the covariance of the system measurement noise V(k).
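For a single axis with A = H = 1, the equations above reduce to a scalar predict/update cycle. A minimal sketch, illustrative only (the patent's state is the full pose vector):

```python
def kalman_fuse(x_prev, p_prev, z_gps, q, r):
    """One scalar Kalman cycle: x_prev is the previous fused estimate,
    p_prev its covariance, z_gps the GPS observation, q the process
    noise covariance Q, r the measurement noise covariance R."""
    # prediction: X(k|k-1) = A X(k-1|k-1), P(k|k-1) = A P A' + Q, with A = 1
    x_pred = x_prev
    p_pred = p_prev + q
    # update with the standard gain, H = 1
    kg = p_pred / (p_pred + r)
    x_new = x_pred + kg * (z_gps - x_pred)
    p_new = (1.0 - kg) * p_pred
    return x_new, p_new
```

Raising `r` when the GPS reports poor positioning quality implements the adaptive measurement noise described below.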
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that both the process noise W(k) and the measurement noise V(k) are Gaussian white noise, and the measurement noise V(k) is adaptively adjusted according to the positioning quality reported by the GPS device.
The method uses simultaneous localization and mapping, correcting the odometry with lidar data to obtain a locally optimized position estimate of the intelligent vehicle, and combines this with GPS positioning through coordinate conversion and Kalman filtering to obtain an accurate position estimate. Compared with GPS-odometer fusion, the method completes the positioning of the intelligent vehicle more reliably and provides a guarantee for its autonomous or assisted driving.
Compared with the prior art, the invention has the following beneficial effects:
Compared with existing intelligent vehicle positioning methods, this method, fusing lidar, encoder and GPS information, adds one sensor (the lidar) but uses an advanced simultaneous localization and mapping method and an information fusion method, effectively improving the positioning accuracy of the intelligent vehicle and providing a guarantee for its autonomous or assisted driving.
Drawings
FIG. 1 is a flow chart of the intelligent vehicle positioning method based on multi-sensor information fusion according to the present invention.
FIG. 2 is a block diagram of the simultaneous localization and mapping method used in the present invention.
FIG. 3 is a diagram of the sensor layout of the intelligent vehicle of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, an intelligent vehicle positioning method based on multi-sensor information fusion comprises the following steps:
(1) Using data collected by the photoelectric encoder and the lidar of the intelligent vehicle, obtain the position of the vehicle in a local map constructed by simultaneous localization and mapping (SLAM);
(2) Obtain the position of the vehicle in a geographic coordinate system through GPS positioning;
(3) Unify the position obtained in step (1), in the local map coordinate system, and the position obtained in step (2), in the geographic coordinate system, into a geodetic coordinate system through coordinate conversion;
(4) Based on the coordinate conversion of step (3), fuse the two positions with a Kalman filter to obtain an accurate position estimate of the intelligent vehicle.
For the simultaneous localization and mapping method of step (1), the specific flow is shown in fig. 2; as shown in fig. 3, the lidar is mounted at the center of the front of the vehicle and the encoders on the axles of the left and right rear wheels. The algorithm follows steps (1.1) through (1.3) as described above.
Steps (2) through (4), including the coordinate conversion and the Kalman filtering model, are carried out as described above, with the measurement noise V(k) adaptively adjusted according to the positioning quality of the GPS device.
Claims (4)
1. An intelligent vehicle positioning method based on multi-sensor information fusion, characterized by comprising the following steps:
(1) Using data collected by the photoelectric encoder and the lidar of the intelligent vehicle, obtain the position of the vehicle in a local map constructed by the simultaneous localization and mapping method SLAM;
The specific process of step (1) is as follows:
(1.1) Establish the odometer coordinate system at vehicle start-up: the center of the vehicle body is the origin O_o; the X_o axis points in the forward direction of the vehicle; the Y_o axis is perpendicular to X_o and points to the left of the body; the Z_o axis points vertically upward. Only motion of the intelligent vehicle in the horizontal plane is considered, with no pitch or roll, so the Z coordinate is always 0 and only the yaw angle changes, giving the pose model (x, y, θ) of the vehicle. The odometer information is computed from the photoelectric encoders on the left and right rear wheels as follows:
(1.1.1) At time 0, the pose (x_0, y_0, θ_0) of the intelligent vehicle in the odometer coordinate system is (0, 0, 0);
(1.1.2) Within one sampling period, the wheel displacement increment ΔS in the odometer coordinate system is obtained by the following formula (the equation is missing from the source text; the form below is reconstructed from the listed variables):
ΔS = π · D · ΔQ / (N · K)
where ΔQ is the pulse increment output by the photoelectric encoder disc during the sampling period T_s, D is the wheel diameter, N is the total number of gratings on the encoder disc, and K is the reduction ratio of the photoelectric encoder;
(1.1.3) Let the distance between the left and right wheels be ω, and let the left- and right-wheel displacement increments in one sampling period be ΔS_L and ΔS_R, with ΔD_k = (ΔS_L + ΔS_R)/2 and Δθ_k = (ΔS_R - ΔS_L)/ω (definitions implied by the source). The pose change of the vehicle from time k-1, S_{k-1} = (x_{k-1}, y_{k-1}, θ_{k-1}), to time k, S_k = (x_k, y_k, θ_k), is:
x_k = x_{k-1} + ΔD_k · cos(θ_{k-1} + Δθ_k),
y_k = y_{k-1} + ΔD_k · sin(θ_{k-1} + Δθ_k),
θ_k = θ_{k-1} + Δθ_k,
(1.1.4) The odometer information of the intelligent vehicle at time k is obtained by accumulating the above pose changes;
(1.2) Acquire and record the observation of the vehicle's surroundings at time k with the vehicle-mounted lidar;
(1.3) Establish a local map coordinate system sharing the axes and origin of the odometer coordinate system, and perform mapping and localization with a SLAM algorithm in that coordinate system; with the odometer information of step (1.1) as the control input of the SLAM algorithm and the lidar scan of step (1.2) as its observation, a particle-filter-based SLAM algorithm yields an optimized pose of the vehicle in the map coordinate system, and the obstacle information around the vehicle at time k is added to the local map according to that optimized pose;
When the particle filter is used for optimizing to obtain a more optimal pose and a map of the intelligent vehicle, the pose of the intelligent vehicle at the current moment is calculated according to the pose at the previous moment; knowing the pose x of the intelligent vehicle at the moment k1_{k1}odometer information u, which is control information at time k_{k}Observation information z at time k_{k}Estimating x of the current pose of the smart car_{k}a posterior probability of (d);
in the particle filter, a sequential importance sampling algorithm and a sampling importance resampling algorithm are adopted to obtain posterior probability distribution, the posterior probability distribution is approximately obtained by weighting the sampling particles of the reference distribution,
The method specifically comprises the following steps:
(1.3.1) initializing the system: set N particles, each particle representing a hypothesis of the current pose of the intelligent vehicle; sample the particles and compute their initial weights;
(1.3.2) importance sampling: at time k, according to the control information u_k of the intelligent vehicle at time k, compute the pose distribution of the intelligent vehicle at time k and draw a sample for each particle;
(1.3.3) weighting: from the observation information z_k at time k and each particle's weight at the previous time, compute the importance weight of each particle at time k and normalize the weights;
(1.3.4) resampling: duplicate particles with high importance weight, delete particles with low importance weight, and compute the posterior probability;
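The SIR loop of steps (1.3.1)–(1.3.4) can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the Gaussian motion and observation models, the noise parameters, and the use of a direct pose observation in place of the radar scan match are all assumptions made for the example.

```python
import numpy as np

def particle_filter_step(particles, weights, u, z, motion_noise, meas_noise):
    """One SIR cycle over pose particles [x, y, theta]."""
    n = len(particles)
    # (1.3.2) importance sampling: propagate every particle through the
    # motion model driven by the odometer control u, with process noise
    particles = particles + u + np.random.randn(n, 3) * motion_noise
    # (1.3.3) weight each particle by the observation likelihood of z
    # (a Gaussian stand-in for the radar scan match) and normalize
    err = np.linalg.norm(particles - z, axis=1)
    weights = weights * np.exp(-0.5 * (err / meas_noise) ** 2)
    weights = weights / weights.sum()
    # (1.3.4) resampling: duplicate high-weight particles, drop low-weight ones
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)
    # posterior pose estimate: mean of the resampled particle set
    return particles, weights, particles.mean(axis=0)
```

With particles initialized at the origin and a control input of one metre forward, the estimate converges near the commanded pose.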
(2) Acquiring the position information of the intelligent vehicle in a WGS84 coordinate system through the GPS positioning of the intelligent vehicle;
(3) Unifying the position information of the intelligent vehicle obtained in step (1), expressed in the local map coordinate system, and that obtained in step (2), expressed in the WGS84 coordinate system, into a single geodetic coordinate system through coordinate conversion;
(4) Fusing the position information of the intelligent vehicle obtained in the step (1) and the position information of the intelligent vehicle obtained in the step (2) by adopting a Kalman filtering technology based on the coordinate conversion process in the step (3) to obtain accurate position estimation of the intelligent vehicle;
In the Kalman filtering model, the pose of the intelligent vehicle in the geodetic coordinate system obtained by the SLAM algorithm of step (1) is used as the predicted value, and the pose of the intelligent vehicle in the geodetic coordinate system obtained through the GPS in step (2) is used as the observed value;
the state-space transition equation and observation equation are defined as follows:
X(k) = A·X(k−1) + W(k),
Z(k) = H·X(k) + V(k),
wherein X(k) is the pose state vector at time k, A is the process matrix, H is the measurement matrix, W(k) is the process noise with covariance Q, and V(k) is the measurement noise with covariance R; the process of Kalman filtering is as follows:
the prediction equation set is:
X(k|k−1) = A·X(k−1|k−1),
P(k|k−1) = A·P(k−1|k−1)·A′ + Q,
the update equation set is:
Kg(k) = P(k|k−1)·H′·(H·P(k|k−1)·H′ + R)^(−1),
X(k|k) = X(k|k−1) + Kg(k)·(Z(k) − H·X(k|k−1)),
P(k|k) = (I − Kg(k)·H)·P(k|k−1),
wherein:
X(k|k−1) is the predicted pose state vector at time k;
X(k−1|k−1) is the optimal pose state at time k−1;
Kg(k) is the Kalman gain at time k;
X(k|k) is the optimal pose state estimate at time k;
Q is the covariance of the system process noise W (k);
R is the covariance of the system measurement noise V (k).
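The predict/update cycle above translates directly into code. A minimal NumPy sketch, with illustrative matrix shapes; the Kalman gain follows the standard form Kg(k) = P(k|k−1)·H′·(H·P(k|k−1)·H′ + R)^(−1):

```python
import numpy as np

def kalman_fuse(x_prev, P_prev, z, A, H, Q, R):
    """One predict/update cycle fusing the SLAM prediction with a GPS observation z."""
    # prediction: X(k|k-1) = A X(k-1|k-1),  P(k|k-1) = A P(k-1|k-1) A' + Q
    x_pred = A @ x_prev
    P_pred = A @ P_prev @ A.T + Q
    # Kalman gain: Kg(k) = P(k|k-1) H' (H P(k|k-1) H' + R)^-1
    Kg = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # update: correct the prediction by the observation residual
    x_est = x_pred + Kg @ (z - H @ x_pred)
    P_est = (np.eye(len(x_prev)) - Kg @ H) @ P_pred
    return x_est, P_est
```

With a large prior covariance and a precise observation, the estimate is pulled close to the GPS measurement, as expected from the gain formula.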
2. The intelligent vehicle positioning method based on multisensor information fusion according to claim 1, characterized in that: in step (2), the original latitude, longitude and altitude coordinates of the intelligent vehicle in the WGS84 geographic coordinate system are obtained through GPS positioning of the intelligent vehicle.
3. The intelligent vehicle positioning method based on multisensor information fusion according to claim 1, characterized in that: the coordinate conversion unification process in the step (3) is as follows:
(3.1) converting the original latitude and longitude of the intelligent vehicle in the WGS84 coordinate system into a plane coordinate system by means of a general Gaussian projection, then into a geodetic coordinate system whose origin is the vehicle-body center at the initial position of the intelligent vehicle, and taking this coordinate system as the main coordinate system;
(3.2) converting the map coordinate system of the SLAM algorithm of the intelligent vehicle into the geodetic coordinate system, the specific conversion method being as follows: when the intelligent vehicle is started, the longitude, latitude and heading angle of the intelligent vehicle at the starting moment are obtained through the GPS equipment, and the coordinate (x_0, y_0) of the intelligent vehicle in the geodetic coordinate system at that moment is obtained by the method of step (3.1); then the coordinate (x_p, y_p) in the geodetic coordinate system of a point (x_m, y_m) in the map coordinate system is given by:
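Steps (3.1) and (3.2) can be sketched together. The patent specifies a general Gaussian (Gauss–Krüger) projection and gives the map-to-geodetic formula only as an image; the sketch below substitutes a simple local tangent-plane approximation for the projection and a standard 2D rigid-body transform for the missing formula. Both substitutions, and the names R_EARTH and theta0, are assumptions made for illustration.

```python
import math

R_EARTH = 6378137.0  # WGS84 semi-major axis, metres

def wgs84_to_local(lat_deg, lon_deg, lat0_deg, lon0_deg):
    """Step (3.1), simplified: project (lat, lon) to metres east/north
    of the start point (lat0, lon0) on a local tangent plane."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(lat0_deg), math.radians(lon0_deg)
    x = R_EARTH * (lon - lon0) * math.cos(lat0)  # east
    y = R_EARTH * (lat - lat0)                   # north
    return x, y

def map_to_geodetic(x_m, y_m, x0, y0, theta0):
    """Step (3.2), assumed form: rotate the map-frame point by the start
    heading theta0, then translate by the start position (x0, y0)."""
    x_p = x0 + x_m * math.cos(theta0) - y_m * math.sin(theta0)
    y_p = y0 + x_m * math.sin(theta0) + y_m * math.cos(theta0)
    return x_p, y_p
```

The tangent-plane approximation is adequate only within a few kilometres of the origin, which matches a local map anchored at the vehicle's start position.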
4. The intelligent vehicle positioning method based on multisensor information fusion according to claim 1, characterized in that: the state noise W(k) and the measurement noise V(k) are both Gaussian white noise, wherein the measurement noise V(k) is adaptively adjusted according to the positioning quality of the GPS equipment.
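One way claim 4's adaptation could work is to scale the measurement covariance R with the GPS receiver's reported fix quality. The scaling rule and the hdop/fix_quality inputs below are purely illustrative assumptions; the patent states only that V(k) is adapted to GPS positioning quality.

```python
def adaptive_meas_cov(base_r, hdop, fix_quality):
    """Scale the GPS measurement-noise covariance with reported fix quality."""
    if fix_quality == 0:       # no fix: make R huge so GPS is effectively ignored
        return base_r * 1e6
    if fix_quality >= 4:       # RTK-grade fix: trust GPS strongly
        return base_r * 0.01 * hdop
    return base_r * hdop ** 2  # ordinary fix: grow R with dilution of precision
```

A larger R makes the Kalman update lean on the SLAM prediction; a smaller R makes it lean on the GPS observation.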
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title

CN201710131878.2A CN106840179B (en)  2017-03-07  2017-03-07  Intelligent vehicle positioning method based on multisensor information fusion
Publications (2)
Publication Number  Publication Date 

CN106840179A CN106840179A (en)  20170613 
CN106840179B true CN106840179B (en)  20191210 
Family
ID=59137497
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

CN201710131878.2A Active CN106840179B (en)  2017-03-07  2017-03-07  Intelligent vehicle positioning method based on multisensor information fusion
Country Status (1)
Country  Link 

CN (1)  CN106840179B (en) 
Families Citing this family (16)
Publication number  Priority date  Publication date  Assignee  Title 

CN107478214A (en) *  2017-07-24  2017-12-15  杨华军  A kind of indoor orientation method and system based on Multisensor Fusion
CN107577646A (en) *  2017-08-23  2018-01-12  上海莫斐信息技术有限公司  A kind of high-precision track operation method and system
CN108168560A (en) *  2017-12-27  2018-06-15  沈阳智远弘业机器人有限公司  A kind of complex navigation control method for omnidirectional AGV
CN108332750A (en) *  2018-01-05  2018-07-27  深圳市功夫机器人有限公司  Robot localization method and terminal device
CN108398705A (en) *  2018-03-06  2018-08-14  广州小马智行科技有限公司  Ground drawing generating method, device and vehicle positioning method, device
CN110160545B (en) *  2018-03-15  2020-05-05  北京航空航天大学  Enhanced positioning system and method for laser radar and GPS
CN108759833A (en) *  2018-04-25  2018-11-06  中国科学院合肥物质科学研究院  A kind of intelligent vehicle localization method based on priori map
CN109031373B (en) *  2018-06-08  2020-12-18  北京航天光华电子技术有限公司  Navigation system and method for transformer substation inspection robot
CN109061703A (en)  2018-06-11  2018-12-21  百度在线网络技术（北京）有限公司  Method, apparatus, equipment and computer readable storage medium used for positioning
CN109059906B (en) *  2018-06-26  2020-09-29  上海西井信息科技有限公司  Vehicle positioning method and device, electronic equipment and storage medium
CN108958266A (en) *  2018-08-09  2018-12-07  北京智行者科技有限公司  A kind of map datum acquisition methods
CN109270545B (en) *  2018-10-23  2020-08-11  百度在线网络技术（北京）有限公司  Positioning true value verification method, device, equipment and storage medium
CN109671120A (en) *  2018-11-08  2019-04-23  南京华捷艾米软件科技有限公司  A kind of monocular SLAM initial method and system based on wheel type encoder
CN109733384B (en) *  2018-12-25  2021-04-30  科大讯飞股份有限公司  Parking path setting method and system
CN109859613A (en) *  2019-01-18  2019-06-07  驭势（上海）汽车科技有限公司  A kind of high-precision cartography method and mobile unit based on projected coordinate system
CN110187375A (en) *  2019-06-27  2019-08-30  武汉中海庭数据技术有限公司  A kind of method and device improving positioning accuracy based on SLAM positioning result
Family Cites Families (3)
Publication number  Priority date  Publication date  Assignee  Title 

CN101846734B (en) *  2009-03-26  2012-08-22  中国农业大学  Agricultural machinery navigation and position method and system and agricultural machinery industrial personal computer
CN101625572B (en) *  2009-08-10  2011-05-11  浙江大学  FastSLAM algorithm based on improved resampling method and particle selection
CN105973145B (en) *  2016-05-19  2021-02-05  深圳市速腾聚创科技有限公司  Mobile three-dimensional laser scanning system and mobile three-dimensional laser scanning method

2017
 2017-03-07 CN CN201710131878.2A patent/CN106840179B/en active Active
Also Published As
Publication number  Publication date 

CN106840179A (en)  2017-06-13
Similar Documents
Publication  Publication Date  Title 

CN106908775B (en)  A kind of unmanned vehicle real-time location method based on laser reflection intensity  
Vivacqua et al.  Selflocalization based on visual lane marking maps: An accurate lowcost approach for autonomous driving  
CN105206108B (en)  A kind of vehicle collision prewarning method based on electronic map  
CN104121905B (en)  Course angle obtaining method based on inertial sensor  
Gu et al.  GNSS/onboard inertial sensor integration with the aid of 3D building map for lanelevel vehicle selflocalization in urban canyon  
Rose et al.  An integrated vehicle navigation system utilizing lanedetection and lateral position estimation systems in difficult environments for GPS  
CN107246876B (en)  Method and system for autonomous positioning and map construction of unmanned automobile  
JP6432116B2 (en)  Vehicle position specifying device, vehicle control system, vehicle position specifying method, and vehicle position specifying program  
Georgy et al.  Enhanced MEMSIMU/odometer/GPS integration using mixture particle filter  
CN101476894B (en)  Vehicle-mounted SINS/GPS combined navigation system performance reinforcement method  
US7522091B2 (en)  Road curvature estimation system  
US8452535B2 (en)  Systems and methods for precise sub-lane vehicle positioning  
Rauch et al.  Car2X-based perception in a high-level fusion architecture for cooperative perception systems  
Bonnifait et al.  Data fusion of four ABS sensors and GPS for an enhanced localization of car-like vehicles  
Bevly et al.  The use of GPS based velocity measurements for measurement of sideslip and wheel slip  
CN105588563A (en)  Joint calibration method of binocular camera and inertial navigation unit in automatic driving  
CN102928816B (en)  High-reliability integrated positioning method for vehicles in tunnel environment  
Huang et al.  A low-order DGPS-based vehicle positioning system under urban environment  
CN108073170B (en)  Automated collaborative driving control for autonomous vehicles  
US7096116B2 (en)  Vehicle behavior detector, in-vehicle processing system, detection information calibrator, and in-vehicle processor  
CN1090314C (en)  Movement detector  
US8510044B2 (en)  Position sensing device and method  
CN104764457A (en)  Urban environment composition method for unmanned vehicles  
CN104833360B (en)  A kind of conversion method of two-dimensional coordinate to three-dimensional coordinate  
CN104635233B (en)  Objects in front state estimation and sorting technique based on vehiclemounted millimeter wave radar 
Legal Events
Date  Code  Title  Description 

PB01  Publication  
SE01  Entry into force of request for substantive examination  
GR01  Patent grant 