CN106840179B - Intelligent vehicle positioning method based on multi-sensor information fusion - Google Patents

Intelligent vehicle positioning method based on multi-sensor information fusion Download PDF

Info

Publication number
CN106840179B
CN106840179B (application CN201710131878.2A)
Authority
CN
China
Prior art keywords
intelligent vehicle
coordinate
information
moment
pose
Prior art date
Legal status
Active
Application number
CN201710131878.2A
Other languages
Chinese (zh)
Other versions
CN106840179A (en)
Inventor
祝辉
邓炯
余彪
梁华为
Current Assignee
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN201710131878.2A priority Critical patent/CN106840179B/en
Publication of CN106840179A publication Critical patent/CN106840179A/en
Application granted granted Critical
Publication of CN106840179B publication Critical patent/CN106840179B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • G01C21/32Structuring or formatting of map data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3407Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement

Abstract

The invention discloses an intelligent vehicle positioning method based on multi-sensor information fusion, which avoids the drift caused by relying on GPS positioning alone and thereby improves the positioning accuracy of the vehicle. The method processes lidar data and encoder data to obtain the vehicle's position in a local map via simultaneous localization and mapping (SLAM), obtains the vehicle's latitude-longitude position via GPS, unifies the two positions in a geodetic coordinate system through coordinate conversion, and fuses the two position estimates with a Kalman filter to obtain an accurate estimate of the vehicle's position. The invention achieves accurate global positioning of the vehicle using the vehicle-mounted encoder, the lidar, and the GPS.

Description

intelligent vehicle positioning method based on multi-sensor information fusion
Technical Field
The invention relates to the field of intelligent vehicle positioning methods, in particular to an intelligent vehicle positioning method based on multi-sensor information fusion.
Background
A prerequisite for autonomous or assisted driving is that the intelligent vehicle accurately knows its own position. In this field, the most common positioning methods rely on GPS alone or on fusing GPS with an odometer. GPS positioning on its own often drifts because of device error, and in settings such as tunnels and urban areas with tall buildings the GPS signal weakens rapidly or is lost altogether. Fusing GPS with an odometer shifts the position estimate toward dead reckoning when the GPS signal is weak, reducing GPS-induced error, but the odometer accumulates error and drifts as the vehicle drives, so dead-reckoned positions are also inaccurate.
The method uses simultaneous localization and mapping, correcting the odometry data with lidar data to obtain a local position estimate of the intelligent vehicle, then combines GPS positioning through coordinate conversion and Kalman filtering to obtain an accurate position estimate of the intelligent vehicle. Compared with GPS-odometer fusion alone, the method positions the intelligent vehicle more reliably and supports its autonomous or assisted driving.
Disclosure of Invention
The invention aims to provide an intelligent vehicle positioning method based on multi-sensor information fusion, to solve the inaccuracy of GPS positioning in the autonomous and assisted driving of intelligent vehicles in the prior art.
To achieve this purpose, the technical scheme adopted by the invention is as follows:
An intelligent vehicle positioning method based on multi-sensor information fusion, characterized in that it comprises the following steps:
(1) Using data collected by the photoelectric encoder and lidar of the intelligent vehicle, obtain the position information of the intelligent vehicle in a local map constructed by simultaneous localization and mapping (SLAM);
(2) Obtain the position information of the intelligent vehicle in a geographic coordinate system through GPS positioning of the intelligent vehicle;
(3) Through coordinate conversion, unify the position information obtained in steps (1) and (2), expressed in the local map coordinate system and the geographic coordinate system respectively, into a geodetic coordinate system;
(4) Based on the coordinate conversion in step (3), fuse the position information obtained in steps (1) and (2) with a Kalman filtering technique to obtain an accurate position estimate of the intelligent vehicle.
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that the specific process of step (1) is as follows:
(1.1) With the center of the intelligent vehicle body at start-up as the origin Oo, the vehicle's forward direction as the Xo axis, the direction perpendicular to Xo pointing to the left of the vehicle body as the Yo axis, and the vertical direction as the Zo axis, establish the odometer coordinate system. Because only the motion of the intelligent vehicle in the horizontal plane is considered, with no pitch or roll, the Z coordinate is always 0 and only the yaw angle changes, giving the pose model (x, y, θ) of the intelligent vehicle. The odometer information of the intelligent vehicle is calculated from the photoelectric encoders of the left and right rear wheels, as follows:
(1.1.1) At time 0, the pose (x0, y0, θ0) of the intelligent vehicle in the odometer coordinate system is (0, 0, 0).
(1.1.2) Within a unit sampling time, the increment ΔS of a wheel's position in the odometer coordinate system is obtained by:
ΔS = πD·ΔQ/(N·K)
where ΔQ is the pulse increment output by the photoelectric encoder disc, D is the wheel diameter, Ts is the sampling time, N is the total number of gratings on the encoder disc, and K is the reduction ratio of the photoelectric encoder;
(1.1.3) With ω the distance between the left and right wheels, and ΔSL and ΔSR the position increments of the left and right wheels within a unit sampling time, define ΔDk=(ΔSR+ΔSL)/2 and Δθk=(ΔSR−ΔSL)/ω; then the pose change of the vehicle from time k-1, Sk-1=(xk-1, yk-1, θk-1), to time k, Sk=(xk, yk, θk), is:
xk=xk-1+ΔDk cos(θk-1+Δθk)
yk=yk-1+ΔDk sin(θk-1+Δθk)
θk=θk-1+Δθk
(1.1.4) Accumulating in this way yields the odometer information of the intelligent vehicle at time k;
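The odometry recursion of steps (1.1.1) to (1.1.4) can be sketched as a short program. The relations ΔDk=(ΔSR+ΔSL)/2 and Δθk=(ΔSR−ΔSL)/ω are the standard differential-drive assumptions, and all numeric parameter values below are illustrative, not taken from the patent.

```python
import math

def wheel_increment(delta_q, d, n, k):
    """Distance travelled by one wheel for delta_q encoder pulses:
    wheel circumference (pi * d) times the fraction of a revolution
    given the grating count n and reduction ratio k."""
    return math.pi * d * delta_q / (n * k)

def update_pose(pose, ds_l, ds_r, omega):
    """Advance pose (x, y, theta) by one sampling period, following
    x_k = x_{k-1} + dD_k * cos(theta_{k-1} + dTheta_k), etc."""
    x, y, theta = pose
    d_dist = (ds_l + ds_r) / 2.0        # dD_k: mean of wheel increments
    d_theta = (ds_r - ds_l) / omega     # dTheta_k: differential over track width
    x += d_dist * math.cos(theta + d_theta)
    y += d_dist * math.sin(theta + d_theta)
    theta += d_theta
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)                  # step (1.1.1): start at the origin
for _ in range(10):                     # straight-line motion: equal wheel increments
    pose = update_pose(pose, 0.1, 0.1, omega=0.5)
print(pose)                             # equal wheels keep theta at 0
```

With equal left and right increments the heading never changes, so the pose advances 0.1 m per step along the Xo axis.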
(1.2) Acquire and record observation information of the intelligent vehicle's surrounding environment at time k through the vehicle-mounted lidar;
(1.3) Establish a map coordinate system sharing the same axes and origin as the odometer coordinate system, and perform mapping and localization under it with a SLAM algorithm. With the odometer information obtained in step (1.1) as the control input of the SLAM algorithm and the radar scans obtained in step (1.2) as its observation information, a particle-filter-based SLAM algorithm yields an optimized pose of the intelligent vehicle in the map coordinate system; at the same time, the map is built from the optimized pose and the obstacle information around the intelligent vehicle at time k.
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that: in step (1.3), when the particle filter is used to optimize the pose and map of the intelligent vehicle, the pose at the current moment is computed from the pose at the previous moment. Given the pose xk-1 of the intelligent vehicle at time k-1, the odometer information uk as the control information at time k, and the observation information zk at time k, estimate the posterior probability of the current pose xk;
In the particle filter, a sequential importance sampling algorithm and a sampling importance resampling algorithm are adopted to obtain the posterior probability distribution, which is approximated by weighting particles sampled from a reference distribution.
The method specifically comprises the following steps:
(1.3.1) Initialize the system with N particles, each representing a current pose of the intelligent vehicle; sample the particles and compute their weights;
(1.3.2) Importance sampling: at time k, compute the pose distribution of the intelligent vehicle at time k from its control information uk, and sample each particle;
(1.3.3) From the observation information zk at time k and each particle's weight at the previous moment, compute the particle's importance weight at time k and normalize the weights;
(1.3.4) Resampling: duplicate particles with high importance weight, delete particles with low importance weight, and compute the posterior probability.
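Steps (1.3.1) to (1.3.4) can be illustrated with a toy one-dimensional sequential importance sampling and resampling loop. The Gaussian motion and measurement models and their standard deviations are illustrative assumptions, not the patent's actual SLAM models.

```python
import random, math

random.seed(0)
N = 500
particles = [0.0] * N                 # (1.3.1) init: all particles at the start pose
weights = [1.0 / N] * N

def step(particles, weights, u_k, z_k, motion_std=0.2, meas_std=0.5):
    # (1.3.2) importance sampling: propagate each particle with control u_k
    particles = [p + u_k + random.gauss(0.0, motion_std) for p in particles]
    # (1.3.3) weight each particle by the likelihood of observation z_k, normalize
    weights = [w * math.exp(-0.5 * ((z_k - p) / meas_std) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # (1.3.4) resample: draw particles in proportion to importance weight
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

true_pos = 0.0
for _ in range(20):                   # constant control of +1 per step
    true_pos += 1.0
    z = true_pos + random.gauss(0.0, 0.5)
    particles, weights = step(particles, weights, 1.0, z)

estimate = sum(particles) / len(particles)
print(round(estimate, 1))             # posterior mean tracks the true position
```

The posterior mean of the particle set stays close to the true position even though each individual observation is noisy, which is the role the particle filter plays in refining the odometry-predicted pose.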
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that: in step (2), the raw latitude, longitude, and altitude coordinates of the intelligent vehicle in the WGS-84 geographic coordinate system are obtained through GPS positioning of the intelligent vehicle.
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that the coordinate conversion and unification process in step (3) is as follows:
(3.1) Using a Gaussian projection, convert the raw latitude and longitude of the intelligent vehicle in the WGS-84 coordinate system to a plane coordinate system, namely an East-North-Up (ENU) geodetic coordinate system whose origin is the vehicle body center at the initial position of the intelligent vehicle, and take this as the main coordinate system;
(3.2) Convert the map coordinate system of the intelligent vehicle's SLAM algorithm into the geodetic coordinate system as follows: when the intelligent vehicle starts, obtain its longitude, latitude, and heading angle at the start-up moment from the GPS device, and obtain its coordinate (x0, y0) in the geodetic coordinate system at that moment by the method of step (3.1); then a point (xm, ym) in the map coordinate system has coordinates (xp, yp) in the geodetic coordinate system given by:
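The formula body for the map-to-geodetic conversion appears in the original only as an image. A minimal sketch under the usual assumption of a rigid 2-D transform, rotating by the start-up heading theta0 (assumed here to be measured counterclockwise from the geodetic x axis) and translating by (x0, y0), would be:

```python
import math

def map_to_geodetic(xm, ym, x0, y0, theta0):
    """Rotate a map-frame point by the start-up heading theta0 and
    translate by the start-up geodetic coordinate (x0, y0)."""
    xp = x0 + xm * math.cos(theta0) - ym * math.sin(theta0)
    yp = y0 + xm * math.sin(theta0) + ym * math.cos(theta0)
    return xp, yp

# A point 10 m ahead in the map frame, with the vehicle started heading
# 90 degrees from the geodetic x axis (values are illustrative):
xp, yp = map_to_geodetic(10.0, 0.0, 100.0, 200.0, math.pi / 2)
print(round(xp, 6), round(yp, 6))
```

A point straight ahead of the vehicle in the map frame lands 10 m along the rotated direction from the start point, as expected for a rotation-plus-translation.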
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that: in the Kalman filtering model of step (4), the pose of the intelligent vehicle in the geodetic coordinate system obtained in step (1) through the SLAM algorithm is used as the predicted value, and the pose of the intelligent vehicle in the geodetic coordinate system obtained in step (2) through GPS is used as the observed value.
The state-space transition equation and observation equation are defined as follows:
X(k)=AX(k-1)+W(k),
Z(k)=HX(k)+V(k),
Where X(k) is the pose state vector at time k, A is the process matrix, H is the measurement matrix, W(k) is the process noise with covariance Q, and V(k) is the measurement noise with covariance R. The Kalman filtering process is as follows:
The prediction equation set is:
X(k|k-1)=AX(k-1|k-1),
P(k|k-1)=AP(k-1|k-1)A’+Q,
The update equation set is:
Kg(k)=P(k|k-1)H'(HP(k|k-1)H'+R)^(-1),
X(k|k)=X(k|k-1)+Kg(k)(Z(k)-HX(k|k-1)),
P(k|k)=(I-Kg(k)H)P(k|k-1),
Wherein:
X(k|k-1) is the estimated pose state vector at time k;
X(k-1|k-1) is the optimal pose state at time k-1;
Kg(k) is the Kalman gain at time k;
X(k|k) is the optimal pose state estimate at time k;
Q is the covariance of the system process noise W(k);
R is the covariance of the system measurement noise V(k).
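A scalar walk-through of the prediction and update equations above, with A = H = 1 so the SLAM-driven prediction is corrected by a GPS observation. The covariances Q and R are illustrative assumptions, and the patent's adaptive adjustment of R is omitted here.

```python
def kalman_step(x_est, p_est, z, q=0.01, r=0.25):
    # prediction: X(k|k-1) = A X(k-1|k-1),  P(k|k-1) = A P A' + Q
    x_pred = x_est
    p_pred = p_est + q
    # update: Kg = P H' (H P H' + R)^-1, then correct by the innovation Z - H X
    kg = p_pred / (p_pred + r)
    x_new = x_pred + kg * (z - x_pred)
    p_new = (1 - kg) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:     # noisy GPS readings around 1.0
    x, p = kalman_step(x, p, z)
print(round(x, 2))                        # estimate converges toward 1.0
```

After a few observations the estimate settles near the true value while the covariance P shrinks, which is the behavior the fusion in step (4) relies on.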
The intelligent vehicle positioning method based on multi-sensor information fusion is further characterized in that: the state noise W(k) and the measurement noise V(k) are both Gaussian white noise, and the measurement noise is adaptively adjusted according to the positioning quality of the GPS device.
The method uses simultaneous localization and mapping, correcting the odometry data with lidar data to obtain a locally optimized position estimate of the intelligent vehicle, then combines GPS positioning through coordinate conversion and Kalman filtering to obtain an accurate position estimate of the intelligent vehicle. Compared with GPS-odometer fusion alone, the method positions the intelligent vehicle more reliably and supports its autonomous or assisted driving.
Compared with the prior art, the invention has the following beneficial effects:
Compared with existing intelligent vehicle positioning methods, this method based on lidar, encoder, and GPS multi-information fusion adds the lidar as an extra sensor and applies an advanced simultaneous localization and mapping method together with an information fusion method, effectively improving the positioning accuracy of the intelligent vehicle and supporting its autonomous or assisted driving.
Drawings
FIG. 1 is a flow chart of an intelligent vehicle positioning method based on multi-sensor information fusion in accordance with the present invention.
Fig. 2 is a block diagram of a synchronized positioning and mapping method used in the present invention.
Fig. 3 is a diagram of the smart car sensor architecture of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, an intelligent vehicle positioning method based on multi-sensor information fusion comprises the following steps:
(1) Using data collected by the photoelectric encoder and lidar of the intelligent vehicle, obtain the position information of the intelligent vehicle in a local map constructed by simultaneous localization and mapping (SLAM);
(2) Obtain the position information of the intelligent vehicle in a geographic coordinate system through GPS positioning of the intelligent vehicle;
(3) Through coordinate conversion, unify the position information obtained in steps (1) and (2), expressed in the local map coordinate system and the geographic coordinate system respectively, into a geodetic coordinate system;
(4) Based on the coordinate conversion in step (3), fuse the position information obtained in steps (1) and (2) with a Kalman filtering technique to obtain an accurate position estimate of the intelligent vehicle.
The specific flow of the simultaneous localization and mapping method of step (1) is shown in fig. 2. As shown in fig. 3, the lidar is installed at the center of the head of the intelligent vehicle and the encoders are installed on the axles of the left and right rear wheels. The algorithm specifically comprises the following steps:
(1.1) With the center of the intelligent vehicle body at start-up as the origin Oo, the vehicle's forward direction as the Xo axis, the direction perpendicular to Xo pointing to the left of the vehicle body as the Yo axis, and the vertical direction as the Zo axis, establish the odometer coordinate system. Because only the motion of the intelligent vehicle in the horizontal plane is considered, with no pitch or roll, the Z coordinate is always 0 and only the yaw angle changes, giving the pose model (x, y, θ) of the intelligent vehicle. The odometer information of the intelligent vehicle is calculated from the photoelectric encoders of the left and right rear wheels, as follows:
(1.1.1) At time 0, the pose (x0, y0, θ0) of the intelligent vehicle in the odometer coordinate system is (0, 0, 0).
(1.1.2) Within a unit sampling time, the increment ΔS of a wheel's position in the odometer coordinate system is obtained by:
ΔS = πD·ΔQ/(N·K)
where ΔQ is the pulse increment output by the photoelectric encoder disc, D is the wheel diameter, Ts is the sampling time, N is the total number of gratings on the encoder disc, and K is the reduction ratio of the photoelectric encoder;
(1.1.3) With ω the distance between the left and right wheels, and ΔSL and ΔSR the position increments of the left and right wheels within a unit sampling time, define ΔDk=(ΔSR+ΔSL)/2 and Δθk=(ΔSR−ΔSL)/ω; then the pose change of the vehicle from time k-1, Sk-1=(xk-1, yk-1, θk-1), to time k, Sk=(xk, yk, θk), is:
xk=xk-1+ΔDk cos(θk-1+Δθk)
yk=yk-1+ΔDk sin(θk-1+Δθk)
θk=θk-1+Δθk
(1.1.4) Accumulating in this way yields the odometer information of the intelligent vehicle at time k;
(1.2) Acquire and record observation information of the intelligent vehicle's surrounding environment at time k through the vehicle-mounted lidar;
(1.3) Establish a map coordinate system sharing the same axes and origin as the odometer coordinate system, and perform mapping and localization under it with a SLAM algorithm. With the odometer information obtained in step (1.1) as the control input of the SLAM algorithm and the radar scans obtained in step (1.2) as its observation information, a particle-filter-based SLAM algorithm yields an optimized pose of the intelligent vehicle in the map coordinate system; at the same time, the map is built from the optimized pose and the obstacle information around the intelligent vehicle at time k.
In step (1.3), when the particle filter is used to optimize the pose and map of the intelligent vehicle, the pose at the current moment is computed from the pose at the previous moment. Given the pose xk-1 of the intelligent vehicle at time k-1, the odometer information uk as the control information at time k, and the observation information zk at time k, estimate the posterior probability of the current pose xk;
In the particle filter, a sequential importance sampling algorithm and a sampling importance resampling algorithm are adopted to obtain the posterior probability distribution, which is approximated by weighting particles sampled from a reference distribution.
The method specifically comprises the following steps:
(1.3.1) Initialize the system with N particles, each representing a current pose of the intelligent vehicle; sample the particles and compute their weights;
(1.3.2) Importance sampling: at time k, compute the pose distribution of the intelligent vehicle at time k from its control information uk, and sample each particle;
(1.3.3) From the observation information zk at time k and each particle's weight at the previous moment, compute the particle's importance weight at time k and normalize the weights;
(1.3.4) Resampling: duplicate particles with high importance weight, delete particles with low importance weight, and compute the posterior probability.
In step (2), the raw latitude, longitude, and altitude coordinates of the intelligent vehicle in the WGS-84 geographic coordinate system are obtained through GPS positioning of the intelligent vehicle.
The coordinate conversion and unification process in step (3) is as follows:
(3.1) Using a Gaussian projection, convert the raw latitude and longitude of the intelligent vehicle in the WGS-84 coordinate system to a plane coordinate system, namely an East-North-Up (ENU) geodetic coordinate system whose origin is the vehicle body center at the initial position of the intelligent vehicle, and take this as the main coordinate system;
(3.2) Convert the map coordinate system of the intelligent vehicle's SLAM algorithm into the geodetic coordinate system as follows: when the intelligent vehicle starts, obtain its longitude, latitude, and heading angle at the start-up moment from the GPS device, and obtain its coordinate (x0, y0) in the geodetic coordinate system at that moment by the method of step (3.1); then a point (xm, ym) in the map coordinate system has coordinates (xp, yp) in the geodetic coordinate system given by:
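The Gaussian projection of step (3.1) is not spelled out in the text. As a self-contained stand-in, a local-tangent-plane approximation converts WGS-84 latitude and longitude offsets to East/North metres, which stays close to a projected result over the short span of a local map; the mean Earth radius and the reference point below are assumptions for illustration.

```python
import math

EARTH_RADIUS = 6371000.0  # metres, mean Earth radius (approximation)

def lla_to_local_en(lat, lon, lat0, lon0):
    """East/North offsets in metres from reference (lat0, lon0),
    valid as a small-area approximation of a map projection."""
    east = math.radians(lon - lon0) * EARTH_RADIUS * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_RADIUS
    return east, north

# One arc-second of latitude is roughly 31 m at this radius:
e, n = lla_to_local_en(31.0 + 1 / 3600.0, 117.0, 31.0, 117.0)
print(round(e, 1), round(n, 1))
```

A full Gaussian (transverse Mercator) projection would account for ellipsoidal flattening and meridian convergence, which matter over long distances but not over a vehicle-scale local map.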
In the Kalman filtering model of step (4), the pose of the intelligent vehicle in the geodetic coordinate system obtained in step (1) through the SLAM algorithm is used as the predicted value, and the pose of the intelligent vehicle in the geodetic coordinate system obtained in step (2) through GPS is used as the observed value.
The state-space transition equation and observation equation are defined as follows:
X(k)=AX(k-1)+W(k),
Z(k)=HX(k)+V(k),
Where X(k) is the pose state vector at time k, A is the process matrix, H is the measurement matrix, W(k) is the process noise with covariance Q, and V(k) is the measurement noise with covariance R. The Kalman filtering process is as follows:
The prediction equation set is:
X(k|k-1)=AX(k-1|k-1),
P(k|k-1)=AP(k-1|k-1)A’+Q,
The update equation set is:
Kg(k)=P(k|k-1)H'(HP(k|k-1)H'+R)^(-1),
X(k|k)=X(k|k-1)+Kg(k)(Z(k)-HX(k|k-1)),
P(k|k)=(I-Kg(k)H)P(k|k-1),
Wherein:
X(k|k-1) is the estimated pose state vector at time k;
X(k-1|k-1) is the optimal pose state at time k-1;
Kg(k) is the Kalman gain at time k;
X(k|k) is the optimal pose state estimate at time k;
Q is the covariance of the system process noise W(k);
R is the covariance of the system measurement noise V(k).
The state noise W(k) and the measurement noise V(k) are both Gaussian white noise, and the measurement noise is adaptively adjusted according to the positioning quality of the GPS device.
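The adaptive adjustment of the measurement noise described above could, for example, map a GPS quality indicator to a covariance value. The NMEA-style fix-quality codes and the R values per fix type below are illustrative assumptions, not values from the patent.

```python
def measurement_noise(fix_quality):
    """Map a GPS fix-quality code to a measurement-noise covariance R.
    Larger R makes the Kalman filter trust GPS less. Values illustrative."""
    table = {
        4: 0.05,   # RTK fixed: centimetre-level, trust heavily
        5: 0.5,    # RTK float
        2: 2.0,    # differential GPS
        1: 25.0,   # single-point fix: metre-level drift possible
    }
    return table.get(fix_quality, 1e6)  # unknown/no fix: effectively ignore GPS

print(measurement_noise(4) < measurement_noise(1))  # better fix -> smaller R
```

Feeding the returned R into the filter's update step makes the fused estimate lean on SLAM in tunnels or urban canyons and on GPS in open sky, which is the behavior the patent describes.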

Claims (4)

1. An intelligent vehicle positioning method based on multi-sensor information fusion, characterized in that it comprises the following steps:
(1) Using data collected by the photoelectric encoder and lidar of the intelligent vehicle, obtaining the position information of the intelligent vehicle in a local map constructed by the simultaneous localization and mapping method SLAM;
The specific process of the step (1) is as follows:
(1.1) With the center of the intelligent vehicle body at start-up as the origin Oo, the vehicle's forward direction as the Xo axis, the direction perpendicular to Xo pointing to the left of the vehicle body as the Yo axis, and the vertical direction as the Zo axis, establishing the odometer coordinate system; because only the motion of the intelligent vehicle in the horizontal plane is considered, with no pitch or roll, the Z coordinate is always 0 and only the yaw angle changes, giving the pose model (x, y, θ) of the intelligent vehicle; the odometer information of the intelligent vehicle is calculated from the photoelectric encoders of the left and right rear wheels, as follows:
(1.1.1) At time 0, the pose (x0, y0, θ0) of the intelligent vehicle in the odometer coordinate system is (0, 0, 0);
(1.1.2) Within a unit sampling time, the increment ΔS of a wheel's position in the odometer coordinate system is obtained by:
ΔS = πD·ΔQ/(N·K)
where ΔQ is the pulse increment output by the photoelectric encoder disc, D is the wheel diameter, Ts is the sampling time, N is the total number of gratings on the encoder disc, and K is the reduction ratio of the photoelectric encoder;
(1.1.3) With ω the distance between the left and right wheels, and ΔSL and ΔSR the position increments of the left and right wheels within a unit sampling time, defining ΔDk=(ΔSR+ΔSL)/2 and Δθk=(ΔSR−ΔSL)/ω, the pose change of the vehicle from time k-1, Sk-1=(xk-1, yk-1, θk-1), to time k, Sk=(xk, yk, θk), is:
xk=xk-1+ΔDk cos(θk-1+Δθk),
yk=yk-1+ΔDk sin(θk-1+Δθk),
θk=θk-1+Δθk
(1.1.4) Accumulating in this way yields the odometer information of the intelligent vehicle at time k;
(1.2) Acquiring and recording observation information of the intelligent vehicle's surrounding environment at time k through the vehicle-mounted lidar;
(1.3) Establishing a local map coordinate system sharing the same axes and origin as the odometer coordinate system, and performing mapping and localization under the local map coordinate system with a SLAM algorithm; with the odometer information obtained in step (1.1) as the control input information of the SLAM algorithm and the radar scanning information obtained in step (1.2) as the observation information of the SLAM algorithm, an optimized pose of the intelligent vehicle in the map coordinate system is obtained using a particle-filter-based SLAM algorithm, and at the same time the obstacle information around the intelligent vehicle at time k is added to the local map according to the optimized pose;
When the particle filter is used for optimizing to obtain a more optimal pose and a map of the intelligent vehicle, the pose of the intelligent vehicle at the current moment is calculated according to the pose at the previous moment; knowing the pose x of the intelligent vehicle at the moment k-1k-1odometer information u, which is control information at time kkObservation information z at time kkEstimating x of the current pose of the smart carka posterior probability of (d);
in the particle filter, a sequential importance sampling algorithm and a sampling importance resampling algorithm are adopted to obtain posterior probability distribution, the posterior probability distribution is approximately obtained by weighting the sampling particles of the reference distribution,
The method specifically comprises the following steps:
(1.3.1) initializing a system, setting N particles, wherein each particle represents the current pose of the intelligent vehicle, and samplingcalculating weights
(1.3.2) sampling importance, and at the moment k, according to the control information u of the intelligent vehicle at the moment kkcalculating the position and attitude distribution of the intelligent vehicle at the moment k, and sampling each particle
(1.3.3) according to the observation information z(k) at moment k and each particle's weight at the previous moment, compute the importance weight of each particle at moment k and normalize the weights;
(1.3.4) resampling: replicate particles with high importance weight, delete particles with low importance weight, and compute the posterior probability;
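The four steps above can be sketched as a minimal bootstrap particle filter for a planar pose (x, y, heading). The motion and measurement models below are illustrative stand-ins for the odometer motion model and lidar observation model described in the claim, not the patent's actual implementation:

```python
import math
import random

def particle_filter_step(particles, weights, u, z):
    """One SIS/SIR iteration over steps (1.3.2)-(1.3.4): propagate with the
    control input, reweight by the observation likelihood, then resample."""
    n = len(particles)
    dd, dth = u                                   # odometer control: distance, heading change
    # (1.3.2) importance sampling: draw each particle from the motion model
    moved = []
    for x, y, th in particles:
        th_new = th + dth + random.gauss(0.0, 0.01)
        d = dd + random.gauss(0.0, 0.05)
        moved.append((x + d * math.cos(th_new), y + d * math.sin(th_new), th_new))
    # (1.3.3) importance weights from the observation z = (zx, zy), then normalize
    new_w = []
    for (x, y, _), w in zip(moved, weights):
        dist_sq = (x - z[0]) ** 2 + (y - z[1]) ** 2
        new_w.append(w * math.exp(-dist_sq / (2 * 0.5 ** 2)))
    total = sum(new_w) or 1.0
    new_w = [w / total for w in new_w]
    # (1.3.4) resampling: keep particles in proportion to weight, reset weights to 1/N
    resampled = random.choices(moved, weights=new_w, k=n)
    return resampled, [1.0 / n] * n

# (1.3.1) initialization: N particles near the start pose, equal weights 1/N
N = 200
particles = [(random.gauss(0, 0.1), random.gauss(0, 0.1), 0.0) for _ in range(N)]
weights = [1.0 / N] * N
particles, weights = particle_filter_step(particles, weights, u=(1.0, 0.0), z=(1.0, 0.0))
est_x = sum(p[0] for p in particles) / N          # posterior mean along x
```

After one step driven forward 1 m, the posterior mean concentrates near the observed position, as the weighted resampling favors particles consistent with the lidar-like observation.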
(2) Acquiring the position information of the intelligent vehicle in a WGS-84 coordinate system through the GPS positioning of the intelligent vehicle;
(3) Unifying, through coordinate conversion, the position information of the intelligent vehicle obtained in step (1) (in the local map coordinate system) and in step (2) (in the WGS-84 coordinate system) into a common geodetic coordinate system;
(4) Fusing the position information of the intelligent vehicle obtained in the step (1) and the position information of the intelligent vehicle obtained in the step (2) by adopting a Kalman filtering technology based on the coordinate conversion process in the step (3) to obtain accurate position estimation of the intelligent vehicle;
In the Kalman filtering model, the pose of the intelligent vehicle in the geodetic coordinate system obtained by the SLAM algorithm in step (1) is used as the predicted value, and the pose in the geodetic coordinate system obtained via GPS in step (2) is used as the observed value;
the state-space transition equation and the observation equation are defined as follows:
X(k)=AX(k-1)+W(k),
Z(k)=HX(k)+V(k),
wherein X(k) is the pose state vector at moment k, A is the state transition matrix, H is the measurement matrix, W(k) is the process noise with covariance Q, and V(k) is the measurement noise with covariance R; the process of Kalman filtering is as follows:
the prediction equation set is:
X(k|k-1)=AX(k-1|k-1),
P(k|k-1)=AP(k-1|k-1)A′+Q,
updating the equation set:
Kg(k)=P(k|k-1)H′(HP(k|k-1)H′+R)^(-1),
X(k|k)=X(k|k-1)+Kg(k)(Z(k)-HX(k|k-1)),
P(k|k)=(I-Kg(k)H)P(k|k-1),
wherein:
X(k|k-1) is the predicted pose state vector at moment k;
X(k-1|k-1) is the optimal pose state estimate at moment k-1;
P(k|k-1) and P(k-1|k-1) are the corresponding error covariance matrices;
Kg(k) is the Kalman gain at moment k;
X(k|k) is the optimal pose state estimate at moment k;
Q is the covariance of the system process noise W (k);
R is the covariance of the system measurement noise V (k).
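For a single scalar coordinate with A = H = 1 (as when the SLAM pose and the GPS pose are both expressed directly in the geodetic frame), the prediction and update equations above reduce to a few lines. The variance values in the usage example are illustrative assumptions, not values from the patent:

```python
def kalman_fuse(x_slam, p, z_gps, q, r):
    """One predict/update cycle of the equations above for a scalar state
    with A = H = 1: the SLAM pose drives the prediction, GPS is the measurement."""
    # prediction: X(k|k-1) = A X(k-1|k-1),  P(k|k-1) = A P(k-1|k-1) A' + Q
    x_pred = x_slam
    p_pred = p + q
    # update: Kg = P H' (H P H' + R)^-1
    kg = p_pred / (p_pred + r)
    x_est = x_pred + kg * (z_gps - x_pred)        # X(k|k)
    p_est = (1.0 - kg) * p_pred                   # P(k|k) = (I - Kg H) P(k|k-1)
    return x_est, p_est

# illustrative variances: SLAM trusted more (q small) than a plain GPS fix (r larger)
x, p = kalman_fuse(x_slam=10.0, p=0.04, z_gps=10.5, q=0.01, r=0.25)
# the fused estimate lies between the SLAM prediction and the GPS measurement
```

Because r exceeds the predicted variance, the gain stays small and the fused estimate remains close to the SLAM prediction while still being pulled toward the GPS measurement.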
2. The intelligent vehicle positioning method based on multi-sensor information fusion according to claim 1, characterized in that: in step (2), the original latitude, longitude and altitude coordinates of the intelligent vehicle in the WGS-84 geographic coordinate system are obtained through the GPS positioning of the intelligent vehicle.
3. The intelligent vehicle positioning method based on multi-sensor information fusion according to claim 1, characterized in that: the coordinate conversion unification process in the step (3) is as follows:
(3.1) converting the original latitude and longitude of the intelligent vehicle in the WGS-84 coordinate system into a plane coordinate system by means of the Gauss projection, then translating the result into a geodetic coordinate system whose origin is the vehicle-body center at the initial position of the intelligent vehicle, and taking this coordinate system as the main coordinate system;
(3.2) converting the map coordinate system in the SLAM algorithm of the intelligent vehicle into the geodetic coordinate system, wherein the specific conversion method is as follows: when the intelligent vehicle is started, the longitude, latitude and heading angle of the intelligent vehicle at the starting moment are obtained through the GPS device, and the coordinate (x0, y0) of the intelligent vehicle in the geodetic coordinate system at that moment is obtained by the method of step (3.1); then the coordinate (xp, yp) in the geodetic coordinate system of a point (xm, ym) in the map coordinate system is:
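The conversion formula itself appears as an image in the original claim and is not reproduced in this text. Under the usual assumption that the map frame is rotated by the start heading theta and translated by the start position (x0, y0), it would be the standard 2-D rigid-body transform:

```python
import math

def map_to_geodetic(xm, ym, x0, y0, theta):
    """Assumed form of the claim's conversion: rotate the map-frame point
    (xm, ym) by the start heading theta (radians), then translate by the
    start position (x0, y0) in the geodetic frame."""
    xp = x0 + xm * math.cos(theta) - ym * math.sin(theta)
    yp = y0 + xm * math.sin(theta) + ym * math.cos(theta)
    return xp, yp

# a point 1 m ahead in the map frame, vehicle started at (100, 200) heading along +y
xp, yp = map_to_geodetic(1.0, 0.0, 100.0, 200.0, math.pi / 2)  # xp ≈ 100.0, yp ≈ 201.0
```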
4. The intelligent vehicle positioning method based on multi-sensor information fusion according to claim 1, characterized in that: both the process noise W(k) and the measurement noise V(k) are Gaussian white noise, wherein the measurement noise covariance R is adaptively adjusted according to the positioning quality of the GPS device.
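One plausible realization of this adaptive adjustment is to scale the measurement variance R by the GPS receiver's reported fix quality and HDOP. The fix-type codes below follow the NMEA GGA convention, but the base variance values are illustrative assumptions, not values from the patent:

```python
def gps_measurement_variance(fix_quality, hdop):
    """Map GPS quality indicators to a measurement noise variance R.
    Fix-quality codes follow the NMEA GGA convention (4 = RTK fixed,
    5 = RTK float, 1 = single-point fix); the base variances are
    illustrative, and unknown fix types fall back to a large variance."""
    base = {4: 0.01, 5: 0.25, 1: 4.0}.get(fix_quality, 25.0)
    return base * max(hdop, 1.0) ** 2
```

Feeding this variance into the Kalman update makes the filter lean on the SLAM prediction whenever GPS quality degrades, which is the behavior claim 4 describes.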
CN201710131878.2A 2017-03-07 2017-03-07 Intelligent vehicle positioning method based on multi-sensor information fusion Active CN106840179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710131878.2A CN106840179B (en) 2017-03-07 2017-03-07 Intelligent vehicle positioning method based on multi-sensor information fusion

Publications (2)

Publication Number Publication Date
CN106840179A CN106840179A (en) 2017-06-13
CN106840179B true CN106840179B (en) 2019-12-10

Family

ID=59137497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710131878.2A Active CN106840179B (en) 2017-03-07 2017-03-07 Intelligent vehicle positioning method based on multi-sensor information fusion

Country Status (1)

Country Link
CN (1) CN106840179B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107478214A (en) * 2017-07-24 2017-12-15 杨华军 A kind of indoor orientation method and system based on Multi-sensor Fusion
CN107577646A (en) * 2017-08-23 2018-01-12 上海莫斐信息技术有限公司 A kind of high-precision track operation method and system
CN108168560A (en) * 2017-12-27 2018-06-15 沈阳智远弘业机器人有限公司 A kind of complex navigation control method for omnidirectional AGV
CN108332750A (en) * 2018-01-05 2018-07-27 深圳市功夫机器人有限公司 Robot localization method and terminal device
CN108398705A (en) * 2018-03-06 2018-08-14 广州小马智行科技有限公司 Ground drawing generating method, device and vehicle positioning method, device
CN110160545B (en) * 2018-03-15 2020-05-05 北京航空航天大学 Enhanced positioning system and method for laser radar and GPS
CN108759833A (en) * 2018-04-25 2018-11-06 中国科学院合肥物质科学研究院 A kind of intelligent vehicle localization method based on priori map
CN109031373B (en) * 2018-06-08 2020-12-18 北京航天光华电子技术有限公司 Navigation system and method for transformer substation inspection robot
CN109061703A (en) 2018-06-11 2018-12-21 百度在线网络技术(北京)有限公司 Method, apparatus, equipment and computer readable storage medium used for positioning
CN109059906B (en) * 2018-06-26 2020-09-29 上海西井信息科技有限公司 Vehicle positioning method and device, electronic equipment and storage medium
CN108958266A (en) * 2018-08-09 2018-12-07 北京智行者科技有限公司 A kind of map datum acquisition methods
CN109270545B (en) * 2018-10-23 2020-08-11 百度在线网络技术(北京)有限公司 Positioning true value verification method, device, equipment and storage medium
CN109671120A (en) * 2018-11-08 2019-04-23 南京华捷艾米软件科技有限公司 A kind of monocular SLAM initial method and system based on wheel type encoder
CN109733384B (en) * 2018-12-25 2021-04-30 科大讯飞股份有限公司 Parking path setting method and system
CN109859613A (en) * 2019-01-18 2019-06-07 驭势(上海)汽车科技有限公司 A kind of high-precision cartography method and mobile unit based on projected coordinate system
CN110187375A (en) * 2019-06-27 2019-08-30 武汉中海庭数据技术有限公司 A kind of method and device improving positioning accuracy based on SLAM positioning result

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101846734B (en) * 2009-03-26 2012-08-22 中国农业大学 Agricultural machinery navigation and position method and system and agricultural machinery industrial personal computer
CN101625572B (en) * 2009-08-10 2011-05-11 浙江大学 FastSLAM algorithm based on improved resampling method and particle selection
CN105973145B (en) * 2016-05-19 2021-02-05 深圳市速腾聚创科技有限公司 Mobile three-dimensional laser scanning system and mobile three-dimensional laser scanning method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant