CN114440881A - Unmanned vehicle positioning method integrating multi-source sensor information - Google Patents
- Publication number
- CN114440881A (application number CN202210109679.2A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- model
- matrix
- algorithm
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Description
Technical Field

The present invention relates to the field of autonomous driving, and in particular to an unmanned vehicle positioning method that fuses multi-source sensor information.

Background

In recent years, with the growth of China's economy, the automobile has become more than a convenient means of transportation: intelligent, connected vehicles have become a new consumer demand, and unmanned vehicles have attracted broad attention from researchers and industry thanks to their versatility and extensibility. Positioning is a foundational module of unmanned-vehicle research, and different sensing modalities call for different methods. Schemes that rely on GPS alone offer all-weather operation and wide coverage, but suffer large errors whenever the satellite signal is disturbed; schemes that rely on the IMU alone use a relative positioning technique, so their error accumulates over time without bound.
Summary of the Invention

The object of the present invention is to overcome the above defects of the prior art by providing an unmanned vehicle positioning method that fuses multi-source sensor information.

The object of the present invention is achieved through the following technical solution:

An unmanned vehicle positioning method fusing multi-source sensor information, comprising the following steps:

Step 1: Acquire the GPS position information and IMU information of the vehicle to be controlled, and take the centimeter-level positioning obtained by combining the inertial navigation system with the GPS signal as ground truth, against which the accuracy of the fused positioning is verified;

Step 2: Build a linear model from the GPS position information and the IMU information;

Step 3: Apply the multi-source fusion positioning algorithm under the linear model and obtain the fused positioning result;

Step 4: Build a nonlinear model from the GPS position information and the IMU information;

Step 5: Apply the multi-source fusion positioning algorithm under the nonlinear model and obtain the fused positioning result;

Step 6: Deepen the nonlinearity of the model further and re-model it as a bicycle model;

Step 7: Apply the multi-source fusion positioning algorithm under the bicycle model and obtain the fused positioning result.

In step 1, the GPS information is collected by a GNSS antenna mounted on the vehicle roof, and the IMU information is collected by an inertial navigation unit. The GPS information comprises the latitude and longitude of the vehicle, which can be converted into its x- and y-coordinates; the IMU information comprises the vehicle's three-axis angular velocity and acceleration.
In step 2, the linear model comprises a constant-velocity model and a constant-acceleration model. The state vector of the constant-velocity model is

    x(t) = [x, y, θ, v]^T

where x, y, θ and v are the x-coordinate, y-coordinate, heading angle and speed of the vehicle at time t, respectively.

The state equation and observation equation of the constant-velocity model are

    x(t+Δt) = [x + v·Δt·cos θ, y + v·Δt·sin θ, θ, v]^T,  z(t) = [x_gps, y_gps]^T

where Δt is the time interval, x(t+Δt) is the state vector after time Δt, z(t) is the observation obtained from GPS, and x_gps and y_gps are the x- and y-coordinates of the vehicle position measured by GPS.

The state vector of the constant-acceleration model is

    x(t) = [s, v]^T

where s and v are the displacement and speed of the vehicle at time t, respectively.

The state equation and observation equation of the constant-acceleration model are

    s(t+Δt) = s(t) + v(t)·Δt + ½·a·Δt²,  v(t+Δt) = v(t) + a·Δt,  z(t) = s_gps

where a is the acceleration of the vehicle, x(t+Δt) is the state vector after time Δt, s(t) and v(t) are the displacement and speed of the vehicle at time t, and z(t) is the observation, i.e. the displacement measured by GPS.

The constant-acceleration model takes the accelerometer reading as the system input:

    u(k) = a(k)

where u(k) is the system input and a(k) is the accelerometer reading.
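As a concrete illustration, the discrete constant-acceleration model above can be written in matrix form x(k+1) = A·x(k) + B·u(k) with u(k) = a(k). The sketch below (Python with NumPy; the numeric values of dt and a are illustrative assumptions, not taken from the patent) performs one propagation step:

```python
import numpy as np

dt = 0.1  # time interval (illustrative value)

# Constant-acceleration model: state x = [s, v]^T, input u = a
A = np.array([[1.0, dt],
              [0.0, 1.0]])          # state-transition matrix
B = np.array([[0.5 * dt ** 2],
              [dt]])                # control-input matrix
H = np.array([[1.0, 0.0]])          # GPS observes the displacement s only

x = np.array([[0.0],                # s = 0 m
              [1.0]])               # v = 1 m/s
a = 2.0                             # accelerometer reading (illustrative)

x_next = A @ x + B * a              # one step of the state equation
z_pred = H @ x_next                 # predicted GPS observation
```

The matrix form is equivalent to the two scalar update equations above, which is what lets the standard Kalman filter of step 3 be applied directly.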
In step 3, the fusion positioning method under the linear model is as follows:

Step 301: Obtain more accurate vehicle position information with the Kalman filter. Each cycle of the Kalman filter comprises five stages: state prediction, error-covariance prediction, Kalman-gain update, state update and error-covariance update:

    x_k^- = A·x_{k-1} + B·u_{k-1}
    P_k^- = A·P_{k-1}·A^T + Q
    K_k = P_k^-·H^T·(H·P_k^-·H^T + R)^{-1}
    x_k = x_k^- + K_k·(z_k − H·x_k^-)
    P_k = (I − K_k·H)·P_k^-

where x_k^- is the predicted state vector at time k, x_k is the optimal state estimate at time k, P_k^- is the predicted error covariance at time k, P_k is the optimal error-covariance estimate at time k, K_k is the Kalman gain at time k, z_k is the observation at time k, Q is the process-noise covariance matrix, R is the observation-noise covariance matrix, u_{k-1} is the system input at time k−1, A is the state-transition matrix, H is the observation matrix, I is the identity matrix, and B is the control-input matrix.

For the constant-velocity model, the observation matrix H and state-transition matrix A follow from its state and observation equations:

    A = [[1, 0, 0, Δt·cos θ], [0, 1, 0, Δt·sin θ], [0, 0, 1, 0], [0, 0, 0, 1]],  H = [[1, 0, 0, 0], [0, 1, 0, 0]]

where Δt is the time interval and θ is the heading angle of the vehicle.

The covariance matrix P is initialized to the identity matrix, and the process-noise matrix Q and observation-noise matrix R are initialized to fixed values.

For the constant-acceleration model, the state-transition matrix A, observation matrix H and control-input matrix B follow from its state equation, observation and system input:

    A = [[1, Δt], [0, 1]],  B = [½·Δt², Δt]^T,  H = [1  0]

where Δt is the time interval.

The covariance matrix P is initialized to the identity matrix; the process-noise matrix Q is initialized to a fixed value and the observation-noise matrix R is initialized to R = 1.

Step 302: Compare the optimal estimate with the ground truth, and compute the variance before and after filtering.
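The five stages of step 301 map directly onto code. Below is a minimal, generic Kalman-filter cycle (Python/NumPy); the demo wiring at the bottom uses the constant-acceleration matrices with illustrative noise and observation values, not the patent's exact initialization:

```python
import numpy as np

def kalman_step(x, P, u, z, A, B, H, Q, R):
    """One Kalman-filter cycle in the five stages named in step 301."""
    x_pred = A @ x + B @ u                       # state prediction
    P_pred = A @ P @ A.T + Q                     # error-covariance prediction
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman-gain update
    x_new = x_pred + K @ (z - H @ x_pred)        # state update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred    # error-covariance update
    return x_new, P_new

# Demo: constant-acceleration model, illustrative values
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1.0]])

x = np.array([[0.0], [0.0]])
P = np.eye(2)
u = np.array([[1.0]])        # accelerometer input a = 1 m/s^2
z = np.array([[0.005]])      # GPS displacement observation
x, P = kalman_step(x, P, u, z, A, B, H, Q, R)
```

Since the observation here happens to equal the predicted displacement, the innovation is zero and the update leaves the predicted state unchanged while still shrinking the covariance.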
In step 4, the nonlinear model is a kinematic model based on constant acceleration and constant angular velocity. Its state vector is

    x_k = [x_k, y_k, ψ_k, ω_k, v_k, a_k]^T

where x_k, y_k, ψ_k, ω_k, v_k and a_k are the x-coordinate, y-coordinate, heading angle, angular velocity, speed and acceleration of the vehicle at time k, respectively.

The state equation and observation equation of the kinematic model are

    x_k = x_{k-1} + v_{k-1}·cos(ψ_{k-1})·dt
    y_k = y_{k-1} + v_{k-1}·sin(ψ_{k-1})·dt
    ψ_k = ψ_{k-1} + ω_{k-1}·dt
    ω_k = ω_{k-1}
    v_k = v_{k-1} + a_{k-1}·dt
    a_k = a_{k-1}

    z_k = [x_{g,k}, y_{g,k}, a_{a,k}, ω_{g,k}]^T

where the left-hand quantities form the state vector of the vehicle at time k, v_{k-1} is its speed at time k−1, dt is the time interval, z_k is the observation, and x_{g,k}, y_{g,k}, a_{a,k} and ω_{g,k} are, respectively, the x- and y-coordinates observed by GPS, the acceleration measured by the accelerometer, and the angular velocity measured by the gyroscope at time k.
In step 5, obtaining the fusion positioning method under the kinematic model comprises the following steps:

Step 501: Perform fusion positioning with the extended Kalman filter, the unscented Kalman filter and the particle filter, respectively;

Step 502: Compare the three filtering algorithms and compute the RMS error so as to select the best positioning algorithm. The RMS error is computed as

    error_rms = |X − X_est|

where error_rms is the error after filtering, X is the ground truth (the true x- and y-coordinates of the vehicle), and X_est is the filtered best estimate of the vehicle's x- and y-coordinates;

Step 503: Select the filtering algorithm with the smallest RMS error, and obtain the fused positioning result under the nonlinear model with that algorithm.
In step 501, the extended Kalman filter follows the same predict/update cycle as the standard Kalman filter, with the state-transition and observation matrices replaced by the Jacobian matrices of the state equation f and observation equation h:

    F_{k-1} = ∂f/∂x evaluated at (x_{k-1}, u_k),  H_k = ∂h/∂x evaluated at the predicted state

where F_{k-1} and H_k are the Jacobian matrices, u_k is the system input at time k, and x_{k-1} is the state vector at time k−1. The entries of F_{k-1} involve the heading angle ψ_{k-1} and speed v_{k-1} of the vehicle at time k−1 and the time interval dt.
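For the constant-acceleration/constant-angular-velocity kinematic model, the Jacobian F_{k-1} can be written out explicitly. The sketch below assumes the Euler discretization of the state vector [x, y, ψ, ω, v, a] given above (it is a plausible reading, not necessarily the patent's exact matrix) and cross-checks the analytic Jacobian against a finite-difference approximation:

```python
import numpy as np

def f(s, dt):
    """State equation of the kinematic model (Euler discretization)."""
    x, y, psi, w, v, a = s
    return np.array([x + v * np.cos(psi) * dt,
                     y + v * np.sin(psi) * dt,
                     psi + w * dt,
                     w,
                     v + a * dt,
                     a])

def F_jacobian(s, dt):
    """Analytic Jacobian of f with respect to the state, evaluated at s."""
    _, _, psi, _, v, _ = s
    F = np.eye(6)
    F[0, 2] = -v * np.sin(psi) * dt
    F[0, 4] = np.cos(psi) * dt
    F[1, 2] = v * np.cos(psi) * dt
    F[1, 4] = np.sin(psi) * dt
    F[2, 3] = dt
    F[4, 5] = dt
    return F

# The observation is linear here: GPS gives (x, y), the IMU gives (a, omega),
# so H_k is a constant selection matrix.
H = np.zeros((4, 6))
H[0, 0] = H[1, 1] = 1.0   # x, y from GPS
H[2, 5] = 1.0             # a from the accelerometer
H[3, 3] = 1.0             # omega from the gyroscope

# Finite-difference check of the Jacobian at an arbitrary state
s0 = np.array([1.0, 2.0, 0.3, 0.1, 5.0, 0.5])
dt = 0.05
eps = 1e-6
F_num = np.empty((6, 6))
for j in range(6):
    e = np.zeros(6)
    e[j] = eps
    F_num[:, j] = (f(s0 + e, dt) - f(s0 - e, dt)) / (2 * eps)
```

Verifying analytic Jacobians numerically like this is cheap insurance: a sign error in F_{k-1} is a classic EKF bug.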
The unscented Kalman filter proceeds as follows:

Step 1a: Initialize the state vector x_0, the state-estimation error covariance P_0, the process-noise matrix Q and the observation-noise matrix R. For the kinematic model, the state vector is initialized to the ground truth, and Q, R and P_0 are initialized to fixed values;

Step 2a: From the state vector x_k and error covariance matrix P_k of the previous step, select the sampling points for the current step and assign their weights;

Step 3a: With the 2n+1 selected sampling points, compute the mean and covariance of all sampling points propagated through the state equation of the system, completing the time update;

Step 4a: Perform the measurement update of the sampling points through the observation equation of the system;

Step 5a: Compute the Kalman gain from the prediction and measurement updates, completing the filter update.
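Step 2a (choosing sampling points and weights) is the heart of the UKF. A minimal sigma-point generator in the common Van der Merwe scaled form is sketched below; the scaling parameters alpha, beta, kappa are conventional defaults, not values from the patent (the demo uses alpha = 1.0 so the arithmetic is exact). The check recovers the original mean and covariance from the points, which is exactly the property step 3a relies on:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 scaled sigma points and their mean/covariance weights."""
    n = len(x)
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)       # matrix square root
    pts = np.vstack([x] +
                    [x + L[:, i] for i in range(n)] +
                    [x - L[:, i] for i in range(n)])
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    return pts, Wm, Wc

x = np.array([1.0, -2.0])
P = np.array([[0.5, 0.1],
              [0.1, 0.3]])
pts, Wm, Wc = sigma_points(x, P, alpha=1.0)

# The weighted points reproduce the original statistics exactly
mean = Wm @ pts
cov = sum(w * np.outer(p - mean, p - mean) for w, p in zip(Wc, pts))
```

Because the sigma points carry the distribution through the nonlinear state equation without any Jacobian, the UKF avoids the linearization error that limits the EKF when the nonlinearity grows.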
The particle filter applies the Monte Carlo method to the nonlinear filtering problem. Its filtering process is:

Step 1b: Initialize N particles and set all particle weights to 1/N;

Step 2b: Propagate the particles, compute the likelihood of each particle, update each particle's weight, and normalize. The weight update is

    w_k^i ∝ w_{k-1}^i · p(z_k | x_k^i)

where w_k^i is the weight of the i-th particle, p(z_k | x_k^i) is its likelihood, z_k is the system observation at time k, the likelihood is evaluated from the output of the observation equation for the i-th particle at time k, and R_k is the observation-noise covariance matrix at time k;

Step 3b: Resample the particles and compute the effective particle number N_eff, obtaining a new particle set in which every particle has weight 1/N;

Step 4b: Compute the filtered state estimate and variance estimate at the current time from the weights and states of the resampled particles.
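Steps 1b-4b can be sketched compactly for a scalar state with observation model h(x) = x (an illustrative toy model; the prior, observation and noise values below are assumptions, not the patent's data):

```python
import numpy as np

rng = np.random.default_rng(42)

def update_weights(w, z_pred, z, R):
    """Step 2b: reweight by the Gaussian likelihood of the observation z
    given each particle's predicted observation z_pred, then normalize."""
    lik = np.exp(-0.5 * (z_pred - z) ** 2 / R)
    w = w * lik
    return w / w.sum()

def n_effective(w):
    """Effective particle number used to trigger resampling (step 3b)."""
    return 1.0 / np.sum(w ** 2)

def resample(particles, w):
    """Step 3b: draw N particles with probability proportional to their
    weights and reset all weights to 1/N."""
    N = len(particles)
    idx = rng.choice(N, size=N, p=w)
    return particles[idx], np.full(N, 1.0 / N)

# Demo: prior particles, one observation, one full update cycle
N = 1000
particles = rng.normal(0.0, 2.0, size=N)   # step 1b: prior samples
w = np.full(N, 1.0 / N)
z, R = 1.5, 0.25                           # observation and its noise variance
w = update_weights(w, particles, z, R)
neff = n_effective(w)
particles, w = resample(particles, w)
est = np.sum(w * particles)                # step 4b: weighted state estimate
```

After the update the estimate moves from the prior mean (0) toward the observation (1.5), weighted by the two variances, which is the expected Bayesian behavior.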
In step 6, the speed and angular velocity in the bicycle model satisfy

    dx/dt = v·cos(ψ + β)
    dy/dt = v·sin(ψ + β)
    dψ/dt = (v·cos β / (l_f + l_r)) · (tan δ_f − tan δ_r)

where β is the slip angle, dx/dt is the horizontal velocity, dy/dt is the vertical velocity, dψ/dt is the angular velocity of the heading angle, v is the speed of the center of mass, ψ is the heading angle, l_r and l_f are the rear and front overhang lengths, and δ_f and δ_r are the front- and rear-wheel steering angles.

The slip angle β is computed as

    β = arctan((l_f·tan δ_r + l_r·tan δ_f) / (l_f + l_r))

where l_r and l_f are the rear and front overhang lengths, and δ_f and δ_r are the front- and rear-wheel steering angles.

The state vector of the bicycle model comprises the x-coordinate x_k, y-coordinate y_k and heading angle ψ_k of the vehicle at time k; the wheel radius r, wheelbase b and gear ratio N remain constant, the front-wheel steering angle δ_k and wheel rotation rate ω_k are taken as the system inputs, and dt is the time interval.

The state equation of the bicycle model advances the position and heading by one interval dt according to the kinematic relations above, with the speed derived from the wheel rotation rate ω_k through the wheel radius r and gear ratio N.

The observation of the bicycle model is

    z_k = [x_{g,k}, y_{g,k}]^T

where x_{g,k} and y_{g,k} are the x- and y-coordinates of the vehicle observed by GPS at time k.

The observation equation of the bicycle model selects the position components of the state:

    h(x_k) = [x_k, y_k]^T
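One plausible discretization of the bicycle model above is sketched below. The mapping v = r·ω/N (rotation rate measured upstream of an N:1 reduction driving wheels of radius r) and the heading rate v·tan δ / b are assumptions chosen to be consistent with the parameters r, b and N named in the text, not the patent's exact equations:

```python
import numpy as np

def bicycle_step(x, y, psi, delta, omega, r=0.425, b=0.8, N=5.0, dt=0.05):
    """One Euler step of a kinematic bicycle model.

    Assumed relations: body speed v = r * omega / N, heading rate
    v * tan(delta) / b for wheelbase b (rear-wheel steering neglected)."""
    v = r * omega / N
    x_new = x + v * np.cos(psi) * dt
    y_new = y + v * np.sin(psi) * dt
    psi_new = psi + v * np.tan(delta) / b * dt
    return x_new, y_new, psi_new

def observe(x, y, psi):
    """Observation equation: GPS sees the position only."""
    return np.array([x, y])

# Driving straight (delta = 0): the heading must not change
x, y, psi = bicycle_step(0.0, 0.0, 0.0, delta=0.0, omega=10.0)
```

The default parameter values match those given for step 701 (r = 0.425, b = 0.8, N = 5), so the same function can serve as the propagation step inside any of the three filters.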
In step 7, obtaining the fusion positioning algorithm under the bicycle model comprises the following steps:

Step 701: On a public dataset, obtain the best estimate of the vehicle state under the bicycle model with the extended Kalman filter, the unscented Kalman filter and the particle filter, respectively;

Step 702: Perform error analysis on the vehicle coordinates and heading angle, computing the RMS error of the x-coordinate, y-coordinate and heading angle as

    error_rms = |X − X_est|

where error_rms is the error after filtering, X is the ground truth, and X_est is the filtered best estimate of the x-coordinate, y-coordinate or heading angle;

Step 703: Normalize each RMS error by its mean;

Step 704: Sum the three normalized RMS errors of each filtering algorithm into a score; compare the scores to evaluate the performance and effect of the three filtering algorithms, and select the algorithm with the smallest score as the best positioning algorithm;

Step 705: Obtain the fused positioning result under the bicycle model with the best positioning algorithm.

In step 701, when filtering the bicycle model, the initial state vector is set to the ground truth, the wheel radius r to 0.425, the wheelbase b to 0.8 and the gear ratio N to 5, and the process-noise matrix Q, the observation-noise matrix R and the initial error covariance matrix P_0 are initialized to fixed values.
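Steps 702-704 reduce three per-quantity errors to a single score per algorithm. One plausible reading of the normalization (divide each error column by its mean across the three algorithms, then sum per algorithm) is sketched below; the error table is illustrative, and computing a conventional RMS over the trajectory is one natural interpretation of the document's error_rms notation:

```python
import numpy as np

def rms_error(truth, est):
    """Per-column RMS error of an estimated trajectory against ground truth
    (columns = x, y, heading)."""
    return np.sqrt(np.mean((np.asarray(truth) - np.asarray(est)) ** 2, axis=0))

def rank_algorithms(errors):
    """errors: rows = algorithms (EKF, UKF, PF), cols = (x, y, psi) RMS errors.
    Normalize each column by its mean across algorithms, sum per row,
    and return the index of the best (lowest-score) algorithm."""
    E = np.asarray(errors, dtype=float)
    scores = (E / E.mean(axis=0)).sum(axis=1)
    return int(np.argmin(scores)), scores

# Illustrative error table (not measured results)
errors = [[0.30, 0.28, 0.020],   # EKF
          [0.22, 0.21, 0.015],   # UKF
          [0.25, 0.24, 0.016]]   # PF
best, scores = rank_algorithms(errors)
```

The per-column normalization keeps the heading-angle error (naturally much smaller in magnitude than the position errors) from being drowned out when the three errors are summed.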
Compared with the prior art, the present invention has the following advantages:

1. By fusing GPS information with IMU information and exploiting the complementarity of the two sensors, the invention obtains a more accurate positioning result for the unmanned vehicle, improving the positioning accuracy of autonomous driving in simple environments and thereby enabling safe control of the vehicle;

2. By selecting the best algorithm for the degree of nonlinearity of the system, the invention obtains the best positioning result and enhances the applicability of the unmanned vehicle positioning method.
Brief Description of the Drawings

Fig. 1 shows the linear model used by the proposed unmanned vehicle positioning method.

Fig. 2 is an enlarged view of the fusion result of the proposed method for the linear model.

Fig. 3 shows the kinematic model used by the proposed method.

Fig. 4 is an enlarged view of the fusion result of the proposed method for the kinematic model.

Fig. 5 compares the filtering effect of the three filtering algorithms for the kinematic model.

Fig. 6 shows the bicycle model used by the proposed method.

Fig. 7 shows the fusion result of the proposed method for the bicycle model.

Fig. 8 is an enlarged comparison of the filtering effect of the three filtering algorithms for the bicycle model.

Fig. 9 is a flowchart of the proposed method.
具体实施方式Detailed ways
下面结合附图和具体实施例对本发明进行详细说明。The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
实施例Example
一种融合多源传感器信息的无人车定位方法,该方法包括以下步骤:An unmanned vehicle positioning method integrating multi-source sensor information, the method includes the following steps:
步骤1:获取待控制车辆的GPS位置信息和IMU信息,并将待控制车辆结合惯性导航系统与GPS信号获得的厘米级精度定位作为真值,以验证融合后的定位精度;Step 1: Obtain the GPS position information and IMU information of the vehicle to be controlled, and use the centimeter-level precision positioning obtained by the vehicle to be controlled in combination with the inertial navigation system and the GPS signal as the true value to verify the fusion positioning accuracy;
步骤2:根据GPS位置信息和IMU信息建立线性模型;Step 2: Establish a linear model according to GPS location information and IMU information;
步骤3:根据线性模型获得融合多源传感器信息的无人车定位方法,并得到线性模型下的融合定位结果;Step 3: Obtain the unmanned vehicle positioning method fused with multi-source sensor information according to the linear model, and obtain the fusion positioning result under the linear model;
步骤4:根据GPS位置信息和IMU信息建立非线性模型;Step 4: Build a nonlinear model according to GPS location information and IMU information;
步骤5:根据非线性模型获取运动学模型下的融合多源传感器信息的无人车定位方法,并得到非线性模型下的融合定位结果;Step 5: Obtain the positioning method of the unmanned vehicle fused with multi-source sensor information under the kinematic model according to the nonlinear model, and obtain the fusion positioning result under the nonlinear model;
步骤6:根据非线性模型对非线性模型进一步加深非线性并重新建模,以完成单车模型的建立;Step 6: According to the nonlinear model, the nonlinear model is further deepened and remodeled to complete the establishment of the bicycle model;
步骤7:根据单车模型获取单车模型下的融合多源传感器信息的无人车定位方法,并得到单车模型下的融合定位结果。Step 7: Obtain the unmanned vehicle positioning method fused with multi-source sensor information under the single-vehicle model according to the single-vehicle model, and obtain the fusion positioning result under the single-vehicle model.
在步骤1中,GPS信息通过安装在车顶的GNSS天线进行收集,角速度与加速度信息通过惯性导航单元进行收集。In step 1, GPS information is collected through a GNSS antenna installed on the roof, and angular velocity and acceleration information is collected through an inertial navigation unit.
在步骤2中,线性模型包括恒定速度模型与恒定加速度模型。In
在步骤3中,线性模型下的融合多源传感器信息的无人车定位方法具体为:In
将线性模型、IMU信息和GPS信息作为输入,根据卡尔曼滤波算法将GPS位置信息与IMU信息进行融合并得到更准确的无人车位置信息,与真值进行对比;Taking the linear model, IMU information and GPS information as input, the GPS location information and IMU information are fused according to the Kalman filter algorithm to obtain more accurate location information of the unmanned vehicle, which is compared with the true value;
在步骤4中,非线性模型为恒定加速度与角速度的运动学模型。In step 4, the nonlinear model is a kinematic model of constant acceleration and angular velocity.
在步骤5中,运动学模型下的融合多源传感器信息的无人车定位方法具体为:In
分别采用拓展卡尔曼滤波算法、无迹卡尔曼滤波算法和粒子滤波算法完成融合定位,并进行综合比较,选取最优的定位算法。The extended Kalman filter algorithm, the unscented Kalman filter algorithm and the particle filter algorithm are respectively used to complete the fusion positioning, and comprehensive comparison is made to select the optimal positioning algorithm.
在步骤6中,单车模型比运动学模型具有更强的非线性,且与实际生活中车辆模型更相似。In step 6, the bicycle model has stronger nonlinearity than the kinematic model and is more similar to the real-life vehicle model.
在步骤7中,单车模型下的融合多源传感器信息的无人车定位方法具体为:In step 7, the method for locating the unmanned vehicle by fusing multi-source sensor information under the bicycle model is as follows:
针对公开数据集采用拓展卡尔曼滤波算法、无迹卡尔曼滤波算法和粒子滤波算法分析单车模型的非线性程度对无人车融合定位算法结果精确度的影响,并获取最优的定位算法。According to the public data set, the extended Kalman filter algorithm, the unscented Kalman filter algorithm and the particle filter algorithm are used to analyze the influence of the nonlinear degree of the bicycle model on the accuracy of the fusion positioning algorithm of the unmanned vehicle, and obtain the optimal positioning algorithm.
本发明中,通过GPS与IMU对待控制车辆进行数据采集,针对系统进行线性系统建模,采用卡尔曼滤波算法融合得到更为精确的定位结果,针对系统进行非线性建模并进行实车测试,在非线性系统下采用拓展卡尔曼滤波、无迹卡尔曼滤波以及粒子滤波将无人车定位精度明显提高,进一步扩大模型的非线性,采用单车模型进一步进行实验仿真并挑选出最优的滤波算法,本发明将GPS信息与IMU信息进行融合,解决了在简单环境下如何在自动驾驶中提高定位精度以控制车辆行驶的技术问题,且能够在非线性程度不同的系统中挑选最优的解决方式,适用性强。In the present invention, data collection is performed on the vehicle to be controlled by GPS and IMU, linear system modeling is performed for the system, a Kalman filter algorithm is used for fusion to obtain a more accurate positioning result, nonlinear modeling is performed on the system and a real vehicle test is performed, Under the nonlinear system, the extended Kalman filter, unscented Kalman filter and particle filter are used to significantly improve the positioning accuracy of the unmanned vehicle, and the nonlinearity of the model is further expanded. The single vehicle model is used for further experimental simulation and the optimal filtering algorithm is selected. , the present invention fuses GPS information and IMU information, solves the technical problem of how to improve the positioning accuracy in automatic driving in a simple environment to control the driving of the vehicle, and can select the optimal solution in systems with different degrees of nonlinearity , the applicability is strong.
如图1~图8所示,一种融合多源传感器信息的无人车定位方法,无人车定位方法包括以下步骤:获取待控制车辆的GPS信息与IMU信息,并获取待控制车辆结合惯性导航系统与GPS信号所获得的厘米级精度定位作为真值,用以与融合定位结果进行对比,建立线性的恒定加速度模型,并采用卡尔曼滤波算法实现GPS位置信息与IMU位置信息融合,与真值进行对比并且计算滤波前后方差,建立恒定加速度与角加速度的运动学模型,采用拓展卡尔曼滤波算法、无迹卡尔曼滤波算法、粒子滤波算法实现GPS信息与IMU信息的融合,计算RMS误差,并判断三种算法效果的优劣,进一步加深非线性并建立单车模型,通过公开数据集完成GPS信息与IMU信息融合,并针对坐标值和航向角进行误差分析。As shown in Figures 1 to 8, an unmanned vehicle positioning method integrating multi-source sensor information, the unmanned vehicle positioning method includes the following steps: obtaining GPS information and IMU information of the vehicle to be controlled, and obtaining the combined inertia of the vehicle to be controlled. The centimeter-level precision positioning obtained by the navigation system and the GPS signal is used as the true value to compare with the fusion positioning results, establish a linear constant acceleration model, and use the Kalman filter algorithm to realize the fusion of GPS position information and IMU position information, which is consistent with the true value. The kinematic model of constant acceleration and angular acceleration is established, and the extended Kalman filter algorithm, the unscented Kalman filter algorithm, and the particle filter algorithm are used to realize the fusion of GPS information and IMU information, and the RMS error is calculated. And judge the pros and cons of the three algorithms, further deepen the nonlinearity and build a bicycle model, complete the fusion of GPS information and IMU information through public data sets, and conduct error analysis for coordinate values and heading angles.
The invention lets GPS information and IMU information compensate for each other's weaknesses, achieving more accurate unmanned vehicle positioning in a simple environment. Comparing the effects of the three filtering algorithms under nonlinear systems shows how to obtain the most accurate positioning: the extended Kalman filter suffices in weakly nonlinear environments, while the unscented Kalman filter and the particle filter give better results as the nonlinearity of the environment increases. The invention therefore achieves a good fusion positioning effect in environments with different degrees of nonlinearity and has strong applicability.
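As an illustration of the particle filter option discussed above, the following is a minimal bootstrap particle filter for 2-D position fixes. The random-walk motion model stands in for the IMU-driven prediction, and every parameter and name is an assumed placeholder rather than the patent's implementation.

```python
import numpy as np

def particle_filter(gps_xy, n=500, motion_std=0.5, gps_std=2.0, seed=1):
    """Bootstrap particle filter estimating 2-D position from GPS fixes.
    gps_xy: array of shape (T, 2) of noisy position measurements."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(gps_xy[0], gps_std, (n, 2))  # init around first fix
    est = []
    for z in gps_xy:
        # predict: diffuse particles with motion noise (IMU stand-in)
        particles += rng.normal(0.0, motion_std, particles.shape)
        # weight by Gaussian likelihood of the GPS fix
        d2 = np.sum((particles - z) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / gps_std**2)
        w /= w.sum()                                    # normalization
        est.append(w @ particles)                       # weighted-mean estimate
        # systematic resampling to avoid weight degeneracy
        pos = (rng.random() + np.arange(n)) / n
        idx = np.minimum(np.searchsorted(np.cumsum(w), pos), n - 1)
        particles = particles[idx]
    return np.array(est)
```

Unlike the EKF, this sampling-based estimator makes no linearization of the motion or measurement model, which is consistent with the description's observation that it copes better as the nonlinearity of the environment grows, at a higher computational cost.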
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed herein, and all such modifications or replacements shall be covered by the protection scope of the present invention. Accordingly, the protection scope of the present invention shall be determined by the protection scope of the claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210109679.2A CN114440881B (en) | 2022-01-29 | 2022-01-29 | A positioning method for unmanned vehicles based on fusion of multi-source sensor information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114440881A true CN114440881A (en) | 2022-05-06 |
CN114440881B CN114440881B (en) | 2023-05-30 |
Family
ID=81372107
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210109679.2A Active CN114440881B (en) | 2022-01-29 | 2022-01-29 | A positioning method for unmanned vehicles based on fusion of multi-source sensor information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114440881B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2716597A1 (en) * | 2010-10-08 | 2012-04-08 | Canadian Space Agency | Apparatus and method for driftless attitude determination and reliable localization of vehicles |
JP2012173190A (en) * | 2011-02-23 | 2012-09-10 | Seiko Epson Corp | Positioning system and positioning method |
CN103439731A (en) * | 2013-08-29 | 2013-12-11 | 北京空间飞行器总体设计部 | GPS/INS integrated navigation method based on unscented Kalman filtering |
CN108931799A (en) * | 2018-07-18 | 2018-12-04 | 兰州交通大学 | Train combined positioning method and system based on the search of recurrence fast orthogonal |
CN110243358A (en) * | 2019-04-29 | 2019-09-17 | 武汉理工大学 | Indoor and outdoor positioning method and system for unmanned vehicles based on multi-source fusion |
CN110602647A (en) * | 2019-09-11 | 2019-12-20 | 江南大学 | Indoor fusion positioning method based on extended Kalman filtering and particle filtering |
CN110956665A (en) * | 2019-12-18 | 2020-04-03 | 中国科学院自动化研究所 | Vehicle turning track bidirectional calculation method, system and device |
CN113063429A (en) * | 2021-03-18 | 2021-07-02 | 苏州华米导航科技有限公司 | Self-adaptive vehicle-mounted combined navigation positioning method |
CN113252033A (en) * | 2021-06-29 | 2021-08-13 | 长沙海格北斗信息技术有限公司 | Positioning method, positioning system and robot based on multi-sensor fusion |
Non-Patent Citations (3)
Title |
---|
LI XU等: "Multi-sensor fusion methodology for enhanced land vehicle positioning", 《INFORMATION FUSION》 * |
VICENT GIRBÉS-JUAN等: "Asynchronous Sensor Fusion of GPS, IMU and CAN-Based Odometry for Heavy-Duty Vehicles", 《IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY 》 * |
ZHOU QUN等: "基于融合算法的GPS/UWB/MARG协同定位系统研究" [Research on a GPS/UWB/MARG cooperative positioning system based on a fusion algorithm], 《现代电子技术》 (Modern Electronics Technique) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116222544A (en) * | 2023-05-09 | 2023-06-06 | 浙江大学湖州研究院 | Automatic navigation and positioning method and device for feeding vehicle facing to feeding farm |
CN116222544B (en) * | 2023-05-09 | 2023-08-04 | 浙江大学湖州研究院 | Automatic navigation and positioning method and device for feeding vehicle facing to feeding farm |
CN117804452A (en) * | 2023-12-07 | 2024-04-02 | 盐城中科高通量计算研究院有限公司 | Monte Carlo algorithm-based charging platform vehicle positioning method |
CN118583153A (en) * | 2024-05-08 | 2024-09-03 | 江苏理工学院 | An improved method for simultaneous localization and map creation for unmanned vehicles |
CN119338912A (en) * | 2024-12-18 | 2025-01-21 | 无锡学院 | Crane trolley positioning verification system and method based on visual detection |
Also Published As
Publication number | Publication date |
---|---|
CN114440881B (en) | 2023-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109991636B (en) | Map construction method and system based on GPS, IMU and binocular vision | |
CN114440881A (en) | Unmanned vehicle positioning method integrating multi-source sensor information | |
CN111307162B (en) | Multi-sensor fusion positioning method for automatic driving scene | |
CN110501024B (en) | Measurement error compensation method for vehicle-mounted INS/laser radar integrated navigation system | |
CN112505737B (en) | GNSS/INS integrated navigation method | |
CN101173858B (en) | A three-dimensional attitude determination and local positioning method for a lunar patrol probe | |
CN109000642A (en) | A kind of improved strong tracking volume Kalman filtering Combinated navigation method | |
CN108362288B (en) | Polarized light SLAM method based on unscented Kalman filtering | |
CN107728182B (en) | Flexible multi-baseline measurement method and device based on camera assistance | |
CN111595592A (en) | Performance evaluation method of adaptive cruise control system | |
CN111006675B (en) | Self-calibration method of vehicle-mounted laser inertial navigation system based on high-precision gravity model | |
CN110057356B (en) | Method and device for locating vehicle in tunnel | |
CN107600073B (en) | A system and method for vehicle centroid sideslip angle estimation based on multi-source information fusion | |
CN108627152B (en) | Navigation method of micro unmanned aerial vehicle based on multi-sensor data fusion | |
CN113063429A (en) | Self-adaptive vehicle-mounted combined navigation positioning method | |
CN112346104A (en) | A UAV Information Fusion Positioning Method | |
CN113008229B (en) | Distributed autonomous integrated navigation method based on low-cost vehicle-mounted sensor | |
CN101900573A (en) | A Method for Realizing Motion Alignment of Land-Used Inertial Navigation System | |
CN113503872B (en) | Low-speed unmanned aerial vehicle positioning method based on fusion of camera and consumption-level IMU | |
CN115855048A (en) | Multi-sensor fusion pose estimation method | |
CN103123487B (en) | A kind of spacecraft attitude determination method | |
Park | Optimal vehicle position estimation using adaptive unscented Kalman filter based on sensor fusion | |
CN111912427B (en) | Method and system for aligning motion base of strapdown inertial navigation assisted by Doppler radar | |
CN114932909B (en) | A Slope Estimation Method Based on Complementary Filtering to Implement Acceleration Correction | |
CN113029173A (en) | Vehicle navigation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |