CN114440881A - Unmanned vehicle positioning method integrating multi-source sensor information - Google Patents

Unmanned vehicle positioning method integrating multi-source sensor information

Info

Publication number
CN114440881A
CN114440881A
Authority
CN
China
Prior art keywords
vehicle
model
matrix
algorithm
time
Prior art date
Legal status
Granted
Application number
CN202210109679.2A
Other languages
Chinese (zh)
Other versions
CN114440881B
Inventor
张卫东
刘馨泽
王咸鹏
张永辉
黄梦醒
沈重
Current Assignee
Hainan University
Original Assignee
Hainan University
Priority date
Filing date
Publication date
Application filed by Hainan University
Priority to CN202210109679.2A
Publication of CN114440881A
Application granted
Publication of CN114440881B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20: Instruments for performing navigational calculations
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science
  • Radar, Positioning & Navigation
  • Remote Sensing
  • Automation & Control Theory
  • Physics & Mathematics
  • General Physics & Mathematics
  • Navigation
  • Position Fixing By Use Of Radio Waves

Abstract

The invention relates to an unmanned vehicle positioning method that fuses multi-source sensor information, comprising the following steps. Step 1: acquire the GPS position information and IMU information of the vehicle to be controlled, and take the centimeter-level positioning obtained by combining the inertial navigation system with the GPS signal as ground truth. Step 2: establish a linear model. Step 3: obtain the unmanned vehicle positioning algorithm fusing multi-source sensor information under the linear model, and obtain the fused positioning result under the linear model. Step 4: establish a nonlinear model. Step 5: obtain the fusion positioning algorithm under the kinematic model, and obtain the fused positioning result under the nonlinear model. Step 6: establish a bicycle model. Step 7: obtain the fusion positioning algorithm under the bicycle model, and obtain the fused positioning result under the bicycle model. Compared with the prior art, the method has the advantages of improving unmanned vehicle positioning accuracy and of high applicability.

Description

Unmanned vehicle positioning method fusing multi-source sensor information

Technical Field

The present invention relates to the technical field of autonomous driving, and in particular to an unmanned vehicle positioning method that fuses multi-source sensor information.

Background Art

In recent years, with the growth of China's economy, the automobile has become more than a convenient means of transportation: intelligent, networked vehicles have become a further demand, and unmanned vehicles have therefore drawn wide attention from researchers and industry for their portability and extensibility. Positioning is a foundational module of unmanned vehicle research, and different sensing modalities call for different methods. Current solutions that rely on GPS alone offer all-weather operation and wide coverage, but produce large errors when the satellite signal is disturbed; solutions that rely on the IMU alone are relative positioning techniques, so their positioning error accumulates over time.

Summary of the Invention

The purpose of the present invention is to overcome the above-mentioned defects of the prior art by providing an unmanned vehicle positioning method that fuses multi-source sensor information.

The object of the present invention can be achieved through the following technical solution:

An unmanned vehicle positioning method fusing multi-source sensor information, comprising the following steps:

Step 1: obtain the GPS position information and IMU information of the vehicle to be controlled, and take the centimeter-level positioning obtained by combining the inertial navigation system with the GPS signal as ground truth, in order to verify the accuracy of the fused positioning;

Step 2: establish a linear model from the GPS position information and the IMU information;

Step 3: obtain the unmanned vehicle positioning algorithm fusing multi-source sensor information under the linear model, and obtain the fused positioning result under the linear model;

Step 4: establish a nonlinear model from the GPS position information and the IMU information;

Step 5: obtain the unmanned vehicle positioning algorithm fusing multi-source sensor information under the nonlinear model, and obtain the fused positioning result under the nonlinear model;

Step 6: deepen the nonlinearity of the nonlinear model and re-model it to obtain a bicycle model;

Step 7: obtain the unmanned vehicle positioning algorithm fusing multi-source sensor information under the bicycle model, and obtain the fused positioning result under the bicycle model.

In step 1, GPS information is collected through a GNSS antenna mounted on the vehicle roof, and IMU information is collected through an inertial navigation unit. The GPS information includes the latitude and longitude of the vehicle, which can be converted into the vehicle's abscissa and ordinate; the IMU information includes the vehicle's three-axis angular velocity and acceleration.
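The latitude/longitude to abscissa/ordinate conversion mentioned above is not spelled out in the text. A minimal sketch, assuming a local equirectangular projection around a reference point (the function name and constant are ours, not the patent's):

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def latlon_to_xy(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Project (lat, lon) into local metres relative to a reference point
    using an equirectangular approximation, adequate over short distances."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    ref_lat, ref_lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    x = EARTH_RADIUS_M * (lon - ref_lon) * math.cos(ref_lat)  # east (abscissa)
    y = EARTH_RADIUS_M * (lat - ref_lat)                      # north (ordinate)
    return x, y
```

A production implementation would more likely use a UTM or ENU conversion, but over the distances a vehicle travels between GPS fixes the difference is negligible.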

In step 2, the linear model comprises a constant velocity model and a constant acceleration model. The state vector of the constant velocity model is:

$$ \mathbf{x}(t) = [x, y, \theta, v]^T $$

where x, y, θ, and v are respectively the abscissa, ordinate, heading angle, and speed of the vehicle at time t;

The state equation and observation equation of the constant velocity model are:

$$ \mathbf{x}(t+\Delta t) = \begin{bmatrix} x + v\cos\theta\,\Delta t \\ y + v\sin\theta\,\Delta t \\ \theta \\ v \end{bmatrix} $$

$$ \mathbf{z}(t) = \begin{bmatrix} x_{gps} \\ y_{gps} \end{bmatrix} $$

where Δt is the time interval, $\mathbf{x}(t+\Delta t)$ is the state vector after time Δt has elapsed, $\mathbf{z}(t)$ is the observation obtained from GPS, $x_{gps}$ is the abscissa of the vehicle position obtained from GPS, and $y_{gps}$ is the ordinate of the vehicle position obtained from GPS;

The state vector of the constant acceleration model is:

$$ \mathbf{x}(t) = [s, v]^T $$

where s and v are respectively the displacement and speed of the vehicle at time t;

The state equation and observation equation of the constant acceleration model are:

$$ \mathbf{x}(t+\Delta t) = \begin{bmatrix} s(t) + v(t)\,\Delta t + \tfrac{1}{2}a\,\Delta t^2 \\ v(t) + a\,\Delta t \end{bmatrix} $$

$$ z(t) = s_{gps}(t) $$

where a is the acceleration of the vehicle, $\mathbf{x}(t+\Delta t)$ is the state vector after time Δt has elapsed, s(t) is the displacement of the vehicle at time t, v(t) is the speed of the vehicle at time t, and z(t) is the observation, i.e. the displacement measured by GPS;

The constant acceleration model takes the value measured by the accelerometer as the system input:

$$ u(k) = a(k) $$

where u(k) is the system input and a(k) is the value measured by the accelerometer.
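The two linear transition models just described can be sketched as follows (a minimal illustration; the array layout and function names are ours, not the patent's):

```python
import numpy as np

def cv_predict(state, dt):
    """Constant velocity model: state = [x, y, theta, v]."""
    x, y, theta, v = state
    return np.array([x + v * np.cos(theta) * dt,  # abscissa advances along heading
                     y + v * np.sin(theta) * dt,  # ordinate advances along heading
                     theta,                       # heading held constant
                     v])                          # speed held constant

def ca_predict(state, a, dt):
    """Constant acceleration model: state = [s, v], input a from the accelerometer."""
    s, v = state
    return np.array([s + v * dt + 0.5 * a * dt**2,
                     v + a * dt])
```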

In step 3, the unmanned vehicle positioning method fusing multi-source sensor information under the linear model is as follows:

Step 301: obtain more accurate unmanned vehicle position information through the Kalman filter algorithm, whose process comprises five stages: state prediction, error covariance prediction, Kalman gain update, state update, and error covariance update:

$$ \hat{\mathbf{x}}_k^- = A\hat{\mathbf{x}}_{k-1} + Bu_{k-1} $$

$$ P_k^- = AP_{k-1}A^T + Q $$

$$ K_k = P_k^- H^T (HP_k^- H^T + R)^{-1} $$

$$ \hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^- + K_k(z_k - H\hat{\mathbf{x}}_k^-) $$

$$ P_k = (I - K_k H)P_k^- $$

where $\hat{\mathbf{x}}_k^-$ denotes the predicted state vector at time k, $\hat{\mathbf{x}}_k$ the optimal estimate of the state vector at time k, $P_k^-$ the predicted error covariance at time k, $P_k$ the optimal estimate of the error covariance at time k, $K_k$ the Kalman gain at time k, Q the process noise covariance matrix, R the observation noise covariance matrix, $u_{k-1}$ the system input at time k-1, A the state transition matrix of the system, H the observation matrix of the system, I the identity matrix, and B the control input matrix;

For the constant velocity model, the observation matrix H and state transition matrix A obtained from its state equation and observation equation are:

$$ H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} $$

$$ A = \begin{bmatrix} 1 & 0 & 0 & \Delta t\cos\theta \\ 0 & 1 & 0 & \Delta t\sin\theta \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

where Δt is the time interval and θ is the heading angle of the vehicle;

The covariance matrix P is initialized as the identity matrix, and the process noise matrix Q and observation noise matrix R are initialized with the numeric values given in the figures of the original document (not reproduced here);

For the constant acceleration model, the state transition matrix A, observation matrix H, and control input matrix B obtained from its state equation, observation, and system input are:

$$ A = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} $$

$$ H = \begin{bmatrix} 1 & 0 \end{bmatrix} $$

$$ B = \begin{bmatrix} \tfrac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix} $$

where Δt is the time interval;

The covariance matrix P is initialized as the identity matrix, the process noise matrix Q is initialized with the values given in the figure of the original document, and the observation noise matrix R is initialized as:

R = 1;

Step 302: compare the obtained optimal estimates with the ground truth, and compute the variance before and after filtering.
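The five-stage cycle of step 301 can be sketched generically. The snippet below is a minimal illustration (not the patent's code), instantiated with the constant acceleration matrices A, H, and B derived above:

```python
import numpy as np

def kalman_step(x, P, u, z, A, B, H, Q, R):
    """One cycle of the linear Kalman filter: state prediction, error
    covariance prediction, Kalman gain update, state update, and
    error covariance update."""
    x_pred = A @ x + B @ u                                   # state prediction
    P_pred = A @ P @ A.T + Q                                 # error covariance prediction
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain update
    x_new = x_pred + K @ (z - H @ x_pred)                    # state update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred                # error covariance update
    return x_new, P_new

# Constant acceleration model matrices for a time interval dt (dt value is ours)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])
Q = np.eye(2) * 1e-4
R = np.array([[1.0]])
```

Each GPS displacement measurement then enters as z and each accelerometer reading as u.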

In step 4, the nonlinear model is a kinematic model based on constant acceleration and angular velocity; its state vector is:

$$ \mathbf{x}_k = [x_k, y_k, \psi_k, \omega_k, v_k, a_k]^T $$

where $x_k$, $y_k$, $\psi_k$, $\omega_k$, $v_k$, and $a_k$ are respectively the abscissa, ordinate, heading angle, angular velocity, speed, and acceleration of the vehicle at time k;

The state equation and observation equation of the kinematic model are:

$$ \mathbf{x}_k = \begin{bmatrix} x_{k-1} + v_{k-1}\cos(\psi_{k-1})\,dt \\ y_{k-1} + v_{k-1}\sin(\psi_{k-1})\,dt \\ \psi_{k-1} + \omega_{k-1}\,dt \\ \omega_{k-1} \\ v_{k-1} + a_{k-1}\,dt \\ a_{k-1} \end{bmatrix} $$

$$ \mathbf{z}_k = \begin{bmatrix} x_{g,k} \\ y_{g,k} \\ a_{a,k} \\ \omega_{g,k} \end{bmatrix} $$

where $\mathbf{x}_{k-1}$ is the state vector of the vehicle at time k-1, $v_{k-1}$ is the speed of the vehicle at time k-1, dt is the time interval, $\mathbf{z}_k$ is the observation, and $x_{g,k}$, $y_{g,k}$, $a_{a,k}$, and $\omega_{g,k}$ are respectively the vehicle abscissa and ordinate observed by GPS, the acceleration obtained from the accelerometer, and the angular velocity measured by the gyroscope at time k.

In step 5, the process of obtaining the unmanned vehicle positioning method fusing multi-source sensor information under the kinematic model comprises the following steps:

Step 501: perform fused positioning with the extended Kalman filter, the unscented Kalman filter, and the particle filter, respectively;

Step 502: compare the three filtering algorithms comprehensively and compute the RMS error in order to select the optimal positioning algorithm; the RMS error is computed as:

$$ error_{rms} = |X - X_{est}| $$

where $error_{rms}$ is the error after filtering, X is the ground truth, comprising the true values of the vehicle abscissa and ordinate, and $X_{est}$ is the corresponding best estimate of the vehicle abscissa and ordinate after filtering;

Step 503: select the filtering algorithm with the smallest RMS error, and obtain the fused positioning result under the nonlinear model with that algorithm.

In step 501, the main formulas of the extended Kalman filter are:

$$ \hat{\mathbf{x}}_k^- = f(\hat{\mathbf{x}}_{k-1}, u_k), \qquad F_{k-1} = \left.\frac{\partial f}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_{k-1},\,u_k}, \qquad H_k = \left.\frac{\partial h}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_k^-} $$

where $F_{k-1}$ and $H_k$ are the Jacobian matrices, $u_k$ is the system input at time k, and $\mathbf{x}_{k-1}$ is the state vector at time k-1;

For the kinematic model above, the Jacobian matrices $F_{k-1}$ and $H_k$ are:

$$ F_{k-1} = \begin{bmatrix} 1 & 0 & -v_{k-1}\sin(\psi_{k-1})\,dt & 0 & \cos(\psi_{k-1})\,dt & 0 \\ 0 & 1 & v_{k-1}\cos(\psi_{k-1})\,dt & 0 & \sin(\psi_{k-1})\,dt & 0 \\ 0 & 0 & 1 & dt & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & dt \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} $$

$$ H_k = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix} $$

where $\psi_{k-1}$ is the heading angle of the vehicle at time k-1, $v_{k-1}$ is the speed of the vehicle at time k-1, and dt is the time interval;
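An extended Kalman filter cycle built on such Jacobians can be sketched generically (a minimal illustration under our naming, not the patent's code; f, h and the Jacobian callables are supplied by the model):

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One extended Kalman filter cycle: nonlinear prediction through f and h,
    with covariances propagated through the Jacobians F and H."""
    x_pred = f(x)                                            # nonlinear state prediction
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q                                 # linearized covariance prediction
    H = H_jac(x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))                     # state update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred                # covariance update
    return x_new, P_new
```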

The specific filtering process of the unscented Kalman filter is:

Step 1a: initialize the state vector $\mathbf{x}_0$, the state estimation error covariance $P_0$, the process noise matrix Q, and the observation noise matrix R; for the kinematic model, the initial state vector $\mathbf{x}_0$ is set to the ground truth, and Q, R, and $P_0$ are initialized with the values given in the figures of the original document;

Step 2a: select the sampling points for the current time from the state vector $\mathbf{x}_k$ and error covariance matrix $P_k$ of the previous time, and assign their weights;

Step 3a: select 2n+1 sampling points, compute the mean and covariance of all sampling points at the current time through the state equation of the system, and complete the time update;

Step 4a: perform the measurement update on the sampling points through the observation equation of the system;

Step 5a: compute the Kalman gain from the prediction and measurement updates, completing the filter update;
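Steps 2a and 3a hinge on the choice of the 2n+1 sigma points and their weights. The patent does not fix a scheme, so the sketch below uses Julier's classic form; the function name and the kappa heuristic are our assumptions:

```python
import numpy as np

def sigma_points(x, P, kappa=None):
    """Generate the 2n+1 sigma points and weights of Julier's unscented
    transform: the mean itself plus symmetric points along the columns of
    a scaled Cholesky factor of P."""
    n = len(x)
    if kappa is None:
        kappa = 3.0 - n                       # common heuristic so n + kappa = 3
    S = np.linalg.cholesky((n + kappa) * P)   # scaled matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])    # shape (2n + 1, n)
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w
```

The weighted mean and covariance of these points reproduce x and P exactly, which is what step 3a relies on when pushing them through the state equation.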

The particle filter solves the nonlinear filtering problem with the Monte Carlo method; its filtering process is:

Step 1b: initialize N particles $x_0^i$ (i = 1, ..., N) and set every particle weight $w_0^i$ uniformly to 1/N;

Step 2b: propagate the particles to obtain $x_k^i$, compute the likelihood $p(z_k \mid x_k^i)$ of each particle, update each particle weight from $w_{k-1}^i$ to $w_k^i$, and normalize; the particle weight update formula is:

$$ w_k^i = w_{k-1}^i \exp\left(-\tfrac{1}{2}\,(z_k - \hat{z}_k^i)^T R_k^{-1} (z_k - \hat{z}_k^i)\right) $$

where $w_k^i$ is the particle weight, $p(z_k \mid x_k^i)$ is the likelihood of the particle, $z_k$ is the observation of the system at time k, $\hat{z}_k^i$ is the output of the observation equation for the i-th particle at time k, and $R_k$ is the observation noise covariance matrix at time k;

Step 3b: resample the particles $x_k^i$, computing the effective particle number $N_{eff}$, to obtain a new particle set $\tilde{x}_k^i$ whose weights $\tilde{w}_k^i$ are all 1/N;

Step 4b: compute the filter state estimate and variance estimate at the current time from the weights and states of the resampled particles.
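One filtering cycle of steps 2b-4b can be sketched for a one-dimensional state as follows (the names, the Gaussian likelihood form, and multinomial resampling are our choices, not the patent's):

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, z, f, h, proc_std, R):
    """One particle filter cycle: propagate, weight by a Gaussian likelihood,
    normalize, resample to uniform weights, and estimate (1-D sketch)."""
    # Step 2b: propagate particles through the state equation plus process noise
    particles = f(particles) + rng.normal(0.0, proc_std, size=particles.shape)
    # Gaussian likelihood p(z_k | x_k^i) with observation noise variance R
    innov = z - h(particles)
    weights = weights * np.exp(-0.5 * innov**2 / R)
    weights /= weights.sum()
    # Step 3b: multinomial resampling back to uniform weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    # Step 4b: state estimate as the (now uniform) weighted mean
    return particles, weights, particles.mean()
```

A full implementation would resample only when the effective particle number $N_{eff}$ drops below a threshold rather than every step.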

In step 6, the speed and angular velocity in the bicycle model satisfy the following relationships:

$$ \dot{x} = v\cos(\psi + \beta) $$

$$ \dot{y} = v\sin(\psi + \beta) $$

$$ \dot{\psi} = \frac{v\cos\beta}{l_f + l_r}\left(\tan\delta_f - \tan\delta_r\right) $$

where β is the slip angle, $\dot{x}$ is the horizontal velocity, $\dot{y}$ is the vertical velocity, $\dot{\psi}$ is the angular velocity of the heading angle, v is the speed of the center of mass, ψ is the heading angle, $l_r$ and $l_f$ are respectively the rear overhang length and the front overhang length, and $\delta_f$ and $\delta_r$ are respectively the front wheel steering angle and the rear wheel steering angle;

The slip angle β is computed as:

$$ \beta = \tan^{-1}\left(\frac{l_f\tan\delta_r + l_r\tan\delta_f}{l_f + l_r}\right) $$

where $l_r$ and $l_f$ are respectively the rear overhang length and the front overhang length, and $\delta_f$ and $\delta_r$ are respectively the front wheel steering angle and the rear wheel steering angle;

The state vector of the bicycle model is:

$$ \mathbf{x}_k = [x_k, y_k, \psi_k]^T $$

where $x_k$, $y_k$, $\psi_k$, r, b, N, $\delta_{f,k}$, and $\omega_k$ are respectively the abscissa, ordinate, heading angle, wheel radius, wheelbase, gear ratio, front wheel steering angle, and wheel rotation angular velocity of the vehicle at time k; the wheel radius, wheelbase, and gear ratio remain constant, the front wheel steering angle and the wheel rotation angular velocity serve as the system inputs, and dt is the time interval;

The state equation of the bicycle model is:

$$ \mathbf{x}_k = \begin{bmatrix} x_{k-1} + \dfrac{r\,\omega_k}{N}\cos(\psi_{k-1})\,dt \\ y_{k-1} + \dfrac{r\,\omega_k}{N}\sin(\psi_{k-1})\,dt \\ \psi_{k-1} + \dfrac{r\,\omega_k}{N\,b}\tan(\delta_{f,k})\,dt \end{bmatrix} $$

The observation of the bicycle model is:

$$ \mathbf{z}_k = \begin{bmatrix} x_{g,k} \\ y_{g,k} \end{bmatrix} $$

where $x_{g,k}$ and $y_{g,k}$ are respectively the abscissa and ordinate of the vehicle observed by GPS at time k;

The observation equation of the bicycle model is:

$$ \mathbf{z}_k = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\mathbf{x}_k $$
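A propagation sketch of this single-track model, under our hypothetical reading that the vehicle speed comes from the wheel encoder as v = r·ω/N (the parameter values echo step 701; all names are ours):

```python
import numpy as np

# Parameter values stated in step 701 of the text
R_WHEEL, WHEELBASE, GEAR_RATIO = 0.425, 0.8, 5.0

def bicycle_predict(state, delta_f, omega_wheel, dt):
    """Propagate a kinematic bicycle; state = [x, y, psi], with inputs
    front wheel steering angle delta_f and wheel angular velocity omega_wheel."""
    x, y, psi = state
    v = R_WHEEL * omega_wheel / GEAR_RATIO      # speed recovered from the encoder
    return np.array([x + v * np.cos(psi) * dt,
                     y + v * np.sin(psi) * dt,
                     psi + v * np.tan(delta_f) / WHEELBASE * dt])
```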

In step 7, the process of obtaining the unmanned vehicle positioning algorithm fusing multi-source sensor information under the bicycle model comprises the following steps:

Step 701: on a public dataset, obtain the best estimates of the vehicle under the bicycle model with the extended Kalman filter, the unscented Kalman filter, and the particle filter, respectively;

Step 702: perform an error analysis of the vehicle coordinates and heading angle, computing the RMS errors of the vehicle abscissa, ordinate, and heading angle; the RMS error is computed as:

$$ error_{rms} = |X - X_{est}| $$

where $error_{rms}$ is the error after filtering, X is the ground truth, comprising the true values of the vehicle abscissa, ordinate, and heading angle, and $X_{est}$ is the corresponding best estimate of the vehicle abscissa, ordinate, and heading angle after filtering;

Step 703: apply mean processing to each RMS error to normalize it;

Step 704: sum the three normalized RMS errors of each filtering algorithm into a score, compare the scores to evaluate the performance and effect of the three filtering algorithms, and select the algorithm with the smallest score as the optimal positioning algorithm;

Step 705: obtain the fused positioning result under the bicycle model with the optimal positioning algorithm.
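Steps 702-704 can be sketched as follows; this is our reading of "mean processing" as dividing each error channel by its mean across algorithms, and the dict layout is an assumption:

```python
import numpy as np

def rank_filters(rms_errors):
    """Normalize each error channel by its mean across algorithms, sum the
    three channels into one score per algorithm, and pick the smallest.

    rms_errors: {algorithm_name: (rms_x, rms_y, rms_heading)}."""
    names = list(rms_errors)
    table = np.array([rms_errors[n] for n in names])   # shape (n_algorithms, 3)
    normalized = table / table.mean(axis=0)            # per-channel mean processing
    scores = normalized.sum(axis=1)                    # one score per algorithm
    return names[int(np.argmin(scores))], dict(zip(names, scores))
```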

In step 701, when filtering the bicycle model, the initial state vector $\mathbf{x}_0$ is set to the ground truth, the wheel radius r is 0.425, the wheelbase b is 0.8, and the gear ratio N is 5; the process noise matrix Q, observation noise matrix R, and initial error covariance matrix $P_0$ are initialized with the values given in the figures of the original document.

Compared with the prior art, the present invention has the following advantages:

1. By fusing GPS information with IMU information and exploiting the complementarity of the two kinds of sensor information, the invention obtains a more accurate unmanned vehicle positioning result, achieving the technical effect of improving positioning accuracy for autonomous driving in simple environments and thereby controlling the vehicle to drive safely;

2. By selecting the optimal algorithm for the degree of nonlinearity of the system in order to obtain the optimal positioning result, the invention achieves the technical effect of enhancing the applicability of the unmanned vehicle positioning method.

Brief Description of the Drawings

Fig. 1 is a diagram of the linear model used by the proposed unmanned vehicle positioning method fusing multi-source sensor information.

Fig. 2 is an enlarged view of the fusion result obtained by the proposed method for the linear model.

Fig. 3 is a diagram of the kinematic model used by the proposed method.

Fig. 4 is an enlarged view of the fusion result obtained by the proposed method for the kinematic model.

Fig. 5 is a comparison of the filtering effects of the three different filtering algorithms on the kinematic model.

Fig. 6 is a diagram of the bicycle model used by the proposed method.

Fig. 7 shows the fusion result obtained by the proposed method for the bicycle model.

Fig. 8 is an enlarged comparison of the filtering effects of the three different filtering algorithms on the bicycle model.

Fig. 9 is a schematic flowchart of the method of the present invention.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.

Embodiment

An unmanned vehicle positioning method fusing multi-source sensor information, comprising the following steps:

Step 1: obtain the GPS position information and IMU information of the vehicle to be controlled, and take the centimeter-level positioning obtained by combining the inertial navigation system with the GPS signal as ground truth, in order to verify the accuracy of the fused positioning;

Step 2: establish a linear model from the GPS position information and the IMU information;

Step 3: obtain the unmanned vehicle positioning method fusing multi-source sensor information from the linear model, and obtain the fused positioning result under the linear model;

Step 4: establish a nonlinear model from the GPS position information and the IMU information;

Step 5: obtain the unmanned vehicle positioning method fusing multi-source sensor information under the kinematic model from the nonlinear model, and obtain the fused positioning result under the nonlinear model;

Step 6: deepen the nonlinearity of the nonlinear model and re-model it to complete the establishment of the bicycle model;

Step 7: obtain the unmanned vehicle positioning method fusing multi-source sensor information under the bicycle model, and obtain the fused positioning result under the bicycle model.

In step 1, the GPS information is collected through a GNSS antenna installed on the roof of the vehicle, and the angular velocity and acceleration information is collected through the inertial navigation unit.

In step 2, the linear model includes a constant velocity model and a constant acceleration model.

In step 3, the unmanned vehicle positioning method fusing multi-source sensor information under the linear model is specifically as follows:

Taking the linear model, the IMU information and the GPS information as inputs, the GPS position information and the IMU information are fused with the Kalman filter algorithm to obtain more accurate unmanned vehicle position information, which is then compared with the ground truth.

In step 4, the nonlinear model is a kinematic model with constant acceleration and angular velocity.

In step 5, the unmanned vehicle positioning method fusing multi-source sensor information under the kinematic model is specifically as follows:

The extended Kalman filter algorithm, the unscented Kalman filter algorithm and the particle filter algorithm are each used to complete the fused positioning; the results are compared comprehensively and the optimal positioning algorithm is selected.

In step 6, the bicycle model is more strongly nonlinear than the kinematic model and is closer to a real-world vehicle model.

In step 7, the unmanned vehicle positioning method fusing multi-source sensor information under the bicycle model is specifically as follows:

On a public data set, the extended Kalman filter algorithm, the unscented Kalman filter algorithm and the particle filter algorithm are used to analyze how the degree of nonlinearity of the bicycle model affects the accuracy of the fused positioning results, and the optimal positioning algorithm is obtained.

In the present invention, data of the vehicle to be controlled are collected by GPS and IMU. The system is first modeled as a linear system, and the Kalman filter algorithm is used to fuse the measurements into a more accurate positioning result. The system is then modeled as a nonlinear system and tested on a real vehicle; under the nonlinear system, the extended Kalman filter, the unscented Kalman filter and the particle filter markedly improve the positioning accuracy of the unmanned vehicle. The nonlinearity of the model is further increased, and the bicycle model is used for further experimental simulation and for selecting the optimal filtering algorithm. By fusing GPS information with IMU information, the present invention solves the technical problem of improving positioning accuracy for automatic driving in a simple environment so as to control the vehicle, and it can select the optimal solution for systems with different degrees of nonlinearity, giving it strong applicability.

As shown in FIG. 1 to FIG. 8, an unmanned vehicle positioning method fusing multi-source sensor information includes the following steps: acquire the GPS information and IMU information of the vehicle to be controlled, and take the centimeter-level positioning obtained by combining the inertial navigation system with the GPS signal as the ground truth for comparison with the fused positioning results; establish a linear constant acceleration model and use the Kalman filter algorithm to fuse the GPS position information with the IMU position information, compare the result with the ground truth and compute the variance before and after filtering; establish a kinematic model with constant acceleration and angular velocity, use the extended Kalman filter algorithm, the unscented Kalman filter algorithm and the particle filter algorithm to fuse the GPS information with the IMU information, compute the RMS errors and judge the relative merits of the three algorithms; further deepen the nonlinearity and establish the bicycle model, complete the fusion of GPS information and IMU information on a public data set, and perform error analysis on the coordinate values and the heading angle.

The invention lets the GPS information and the IMU information compensate for each other's weaknesses, achieving more accurate unmanned vehicle positioning in a simple environment. Comparing the effects of the three filtering algorithms on nonlinear systems shows how to achieve the most accurate positioning: the extended Kalman filter algorithm is suitable when the nonlinearity is weak, while the unscented Kalman filter algorithm and the particle filter algorithm give better results as the nonlinearity of the environment increases. The invention therefore achieves good fused positioning in environments with different degrees of nonlinearity and has strong applicability.

The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed herein, and such modifications or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An unmanned vehicle positioning method fusing multi-source sensor information, characterized by comprising the following steps:
Step 1: acquiring GPS position information and IMU information of a vehicle to be controlled, and taking the centimeter-level precision positioning obtained by combining the inertial navigation system of the vehicle with the GPS signal as a true value to verify the fused positioning precision;
Step 2: establishing a linear model according to the GPS position information and the IMU information;
Step 3: acquiring an unmanned vehicle positioning algorithm fusing multi-source sensor information under the linear model, and obtaining a fused positioning result under the linear model;
Step 4: establishing a nonlinear model according to the GPS position information and the IMU information;
Step 5: acquiring an unmanned vehicle positioning algorithm fusing multi-source sensor information under the nonlinear model, and obtaining a fused positioning result under the nonlinear model;
Step 6: further deepening the nonlinearity of the nonlinear model and modeling again to obtain a bicycle model;
Step 7: acquiring an unmanned vehicle positioning algorithm fusing multi-source sensor information under the bicycle model, and obtaining a fused positioning result under the bicycle model.
2. The method as claimed in claim 1, wherein in step 1, GPS information is collected through a GNSS antenna installed on a roof of a vehicle, and IMU information is collected through an inertial navigation unit, wherein the GPS information includes longitude and latitude information of the vehicle, and the longitude and latitude information of the vehicle is converted into an abscissa and an ordinate of the vehicle, and the IMU information includes three-axis angular velocity and acceleration of the vehicle.
3. The method according to claim 1, wherein in step 2, the linear model comprises a constant velocity model and a constant acceleration model, and the expression of the state vector of the constant velocity model is as follows:
x(t) = [x  y  θ  v]^T
wherein x, y, θ and v are respectively the abscissa, the ordinate, the heading angle and the speed of the vehicle at time t;
the state equation and the observation equation of the constant velocity model are respectively:
x(t+Δt) = [x + v·cos(θ)·Δt,  y + v·sin(θ)·Δt,  θ,  v]^T
z(t) = [x_gps  y_gps]^T
wherein Δt is the time interval, x(t+Δt) is the state vector after the time Δt has elapsed, z(t) is the observation obtained by GPS, and x_gps and y_gps are respectively the abscissa and the ordinate of the vehicle position obtained by GPS;
the state vector of the constant acceleration model is:
x(t) = [s  v]^T
wherein s and v are respectively the displacement and the speed of the vehicle at time t;
the state equation and the observation equation of the constant acceleration model are respectively:
x(t+Δt) = [s(t) + v(t)·Δt + a·Δt²/2,  v(t) + a·Δt]^T
z(t) = s_gps
wherein a is the acceleration of the vehicle, x(t+Δt) is the state vector after the time Δt, s(t) and v(t) are respectively the displacement and the speed of the vehicle at time t, and the observation z(t) is the displacement s_gps measured by GPS;
the constant acceleration model takes the value measured by the accelerometer as the system input:
u(k) = a(k)
wherein u(k) is the system input and a(k) is the value measured by the accelerometer.
4. The unmanned vehicle positioning method fusing multi-source sensor information according to claim 3, wherein in step 3, the unmanned vehicle positioning method fusing multi-source sensor information under the linear model specifically comprises:
Step 301: obtaining more accurate unmanned vehicle position information according to the Kalman filter algorithm, wherein the Kalman filter algorithm comprises five steps: state prediction, error covariance prediction, Kalman gain update, state update and error covariance update:
x̂_k⁻ = A·x̂_{k-1} + B·u_{k-1}
P_k⁻ = A·P_{k-1}·Aᵀ + Q
K_k = P_k⁻·Hᵀ·(H·P_k⁻·Hᵀ + R)⁻¹
x̂_k = x̂_k⁻ + K_k·(z_k − H·x̂_k⁻)
P_k = (I − K_k·H)·P_k⁻
wherein x̂_k⁻ denotes the predicted value of the state vector at time k, x̂_k denotes the optimal estimate of the state vector at time k, P_k⁻ denotes the predicted value of the error covariance at time k, P_k denotes the optimal estimate of the error covariance at time k, K_k denotes the Kalman gain at time k, z_k denotes the observation at time k, Q is the process noise covariance matrix, R is the observation noise covariance matrix, u_{k-1} denotes the system input at time k-1, A denotes the state transition matrix of the system, H denotes the observation matrix of the system, I denotes the identity matrix, and B is the control input matrix;
for the constant velocity model, the observation matrix H and the state transition matrix A obtained from its state equation and observation equation are respectively:
H = [1 0 0 0; 0 1 0 0]
A = [1 0 0 cos(θ)·Δt; 0 1 0 sin(θ)·Δt; 0 0 1 0; 0 0 0 1]
wherein Δt is the time interval and θ is the heading angle of the vehicle;
the covariance matrix P is initialized to the identity matrix, and the process noise matrix Q and the observation noise matrix R are initialized as diagonal covariance matrices (numerical values not recoverable from this extraction);
for the constant acceleration model, the state transition matrix A, the observation matrix H and the control input matrix B obtained from its state equation, observation and system input are respectively:
A = [1 Δt; 0 1]
H = [1 0]
B = [Δt²/2; Δt]
wherein Δt is the time interval;
the covariance matrix P is initialized to the identity matrix, the process noise matrix Q is initialized to a diagonal matrix (numerical values not recoverable from this extraction), and the observation noise matrix R is initialized as:
R = 1;
Step 302: comparing the obtained optimal estimate with the true value, and calculating the variance before and after filtering.
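The five-step update of step 301 can be sketched in Python as follows (illustrative only, not part of the claims; the constant-acceleration matrices follow claim 4, while the process-noise values and the simulated trajectory are assumptions):

```python
import numpy as np

def kalman_step(x, P, u, z, A, B, H, Q, R):
    """One cycle of the five steps: state prediction, error covariance
    prediction, Kalman gain update, state update, error covariance update."""
    x_pred = A @ x + B @ u                      # state prediction
    P_pred = A @ P @ A.T + Q                    # error covariance prediction
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)       # state update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # error covariance update
    return x_new, P_new

# Constant-acceleration model: state [s, v], accelerometer input, GPS displacement
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[dt**2 / 2], [dt]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-4])   # assumed process noise values
R = np.array([[1.0]])       # R = 1 as in claim 4
x, P = np.zeros((2, 1)), np.eye(2)
for k in range(50):         # simulate a = 2 m/s^2 with noiseless GPS: s = t^2
    t = (k + 1) * dt
    x, P = kalman_step(x, P, np.array([[2.0]]), np.array([[t**2]]), A, B, H, Q, R)
# x now estimates the displacement and speed at t = 5 s
```

With real data, z would carry GPS noise and the gain K would trade it off against the IMU-driven prediction.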
5. The method according to claim 1, wherein in step 4, the nonlinear model is a kinematic model based on constant acceleration and angular velocity, and the expression of the state vector of the kinematic model is as follows:
x_k = [x_k  y_k  ψ_k  ω_k  v_k  a_k]^T
wherein x_k, y_k, ψ_k, ω_k, v_k and a_k are respectively the abscissa, the ordinate, the heading angle, the angular velocity, the speed and the acceleration of the vehicle at time k;
the expressions of the state equation and the observation equation of the kinematic model are respectively as follows:
x_k = [x_{k-1} + v_{k-1}·cos(ψ_{k-1})·dt,  y_{k-1} + v_{k-1}·sin(ψ_{k-1})·dt,  ψ_{k-1} + ω_{k-1}·dt,  ω_{k-1},  v_{k-1} + a_{k-1}·dt,  a_{k-1}]^T
z_k = [x_{g,k}  y_{g,k}  a_{a,k}  ω_{g,k}]^T
wherein x_{k-1} is the state vector of the vehicle at time k-1, v_{k-1} is the speed of the vehicle at time k-1, dt is the time interval, z_k is the observation, x_{g,k} and y_{g,k} are respectively the abscissa and the ordinate of the vehicle observed by GPS at time k, a_{a,k} is the acceleration obtained by the accelerometer, and ω_{g,k} is the angular velocity measured by the gyroscope.
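The state propagation of the kinematic model in claim 5 can be sketched as follows (illustrative Python; the discrete-time relations are reconstructed from the claim's definitions and are not verbatim from the filing):

```python
import math

def propagate(state, dt):
    """One Euler step of the constant-acceleration, constant-angular-velocity
    kinematic model with state [x, y, psi, omega, v, a]."""
    x, y, psi, omega, v, a = state
    return [
        x + v * math.cos(psi) * dt,  # position advances along the heading
        y + v * math.sin(psi) * dt,
        psi + omega * dt,            # heading integrates angular velocity
        omega,                       # angular velocity held constant
        v + a * dt,                  # speed integrates acceleration
        a,                           # acceleration held constant
    ]

state = [0.0, 0.0, 0.0, 0.0, 1.0, 0.5]  # straight-line motion, a = 0.5 m/s^2
for _ in range(10):
    state = propagate(state, 0.1)
```

In the filters of claim 6, this propagation serves as the nonlinear state-transition function f that the EKF linearizes and the UKF and particle filter sample.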
6. The unmanned vehicle positioning method fusing multi-source sensor information according to claim 1, wherein in step 5, the process of obtaining the unmanned vehicle positioning method fusing multi-source sensor information under the kinematic model specifically comprises the following steps:
Step 501: performing fused positioning with the extended Kalman filter algorithm, the unscented Kalman filter algorithm and the particle filter algorithm respectively;
Step 502: comprehensively comparing the three filtering algorithms by calculating the RMS error, whose formula is:
error_rms = |X − X_est|
wherein error_rms is the filtered error, X is the true value, including the true values of the abscissa and the ordinate of the vehicle, and X_est is the corresponding optimal estimate of the abscissa and the ordinate of the filtered vehicle;
Step 503: selecting the filtering algorithm with the minimum RMS error, and obtaining the fused positioning result under the nonlinear model based on that filtering algorithm.
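Steps 502 and 503 amount to averaging the per-sample error |X − X_est| for each algorithm and keeping the smallest; a minimal sketch (the trajectories and numbers are hypothetical):

```python
import numpy as np

def rms_error(truth, estimate):
    """Mean of the per-sample filtered error |X - X_est| over a trajectory."""
    return float(np.mean(np.abs(np.asarray(truth) - np.asarray(estimate))))

truth = [0.0, 1.0, 2.0, 3.0]                    # true x-coordinates
errors = {
    "EKF": rms_error(truth, [0.1, 1.1, 2.0, 3.1]),
    "UKF": rms_error(truth, [0.0, 1.0, 2.1, 3.0]),
    "PF":  rms_error(truth, [0.2, 0.9, 2.2, 3.2]),
}
best = min(errors, key=errors.get)              # step 503: minimum-error algorithm
```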
7. The unmanned vehicle positioning method fusing multi-source sensor information according to claim 6, wherein in step 501, the specific formulas of the extended Kalman filter algorithm comprise:
x̂_k⁻ = f(x̂_{k-1}, u_k)
P_k⁻ = F_{k-1}·P_{k-1}·F_{k-1}ᵀ + Q
wherein F_{k-1} and H_k are respectively the Jacobian matrices of the state equation and the observation equation, u_k is the system input at time k, and x_{k-1} is the state vector at time k-1;
the Jacobian matrices F_{k-1} and H_k are respectively:
F_{k-1} = [1 0 −v_{k-1}·sin(ψ_{k-1})·dt 0 cos(ψ_{k-1})·dt 0; 0 1 v_{k-1}·cos(ψ_{k-1})·dt 0 sin(ψ_{k-1})·dt 0; 0 0 1 dt 0 0; 0 0 0 1 0 0; 0 0 0 0 1 dt; 0 0 0 0 0 1]
H_k = [1 0 0 0 0 0; 0 1 0 0 0 0; 0 0 0 0 0 1; 0 0 0 1 0 0]
wherein ψ_{k-1} is the heading angle of the vehicle at time k-1, v_{k-1} is the speed of the vehicle at time k-1, and dt is the time interval;
the specific filtering process of the unscented Kalman filter algorithm is:
Step 1a: initializing the state vector x_0, the state estimation error covariance P_0, the process noise matrix Q and the observation noise matrix R; for the kinematic model, the initial value x_0 of the state vector is the true value, and Q, R and P_0 are initialized as diagonal matrices (numerical values not recoverable from this extraction);
Step 2a: selecting the sampling points at the current time and assigning their weights according to the state vector x_{k-1} and the error covariance matrix P_{k-1} at the previous time;
Step 3a: selecting 2n+1 sampling points, calculating the mean and covariance of all the sampling points at the current time through the state equation of the system, and completing the time update;
Step 4a: performing the measurement update on the sampling points through the observation equation of the system;
Step 5a: calculating the Kalman gain from the prediction and the measurement update, and completing the filter update;
the particle filter algorithm solves the nonlinear filtering problem by the Monte Carlo method, and its filtering process is:
Step 1b: initializing N particles {x_0^i, i = 1, …, N} and uniformly setting the weight w_0^i of each particle to 1/N;
Step 2b: updating the particles x_k^i, calculating the likelihood probability p(z_k | x_k^i) of each particle, updating the weight of each particle from w_{k-1}^i to w_k^i, and normalizing, wherein the weight update formula of the particles is:
w_k^i = w_{k-1}^i · p(z_k | x_k^i)
wherein w_k^i is the weight of the particle, p(z_k | x_k^i) is the likelihood probability of the particle, z_k is the observation of the system at time k, ẑ_k^i is the output of the observation equation of the i-th particle at time k, and R_k is the observation noise covariance matrix at time k, the likelihood being evaluated from the innovation z_k − ẑ_k^i under R_k;
Step 3b: resampling {x_k^i, w_k^i} and calculating the effective particle number N_eff, thereby obtaining a new particle set whose particle weights are all 1/N;
Step 4b: calculating the filter state estimate and the variance estimate at the current time from the weights and states of the resampled particles.
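Steps 1b–4b can be sketched on a one-dimensional toy system as follows (illustrative Python; the scalar random-walk model, the noise levels and the Gaussian likelihood are assumptions, not the filing's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000                                        # step 1b: N particles
particles = rng.normal(0.0, 1.0, N)             # initial particle cloud
weights = np.full(N, 1.0 / N)                   # uniform weights 1/N

def pf_step(particles, weights, z, q=0.1, r=0.5):
    # step 2b: propagate particles, weight by Gaussian likelihood p(z_k | x_k^i)
    particles = particles + rng.normal(0.0, q, particles.size)
    innov = z - particles                       # innovation z_k - z_hat_k^i
    weights = weights * np.exp(-0.5 * innov**2 / r**2)
    weights /= weights.sum()                    # normalization
    # step 3b: resample when the effective particle number N_eff drops
    n_eff = 1.0 / np.sum(weights**2)
    if n_eff < particles.size / 2:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

for z in [0.2, 0.4, 0.6, 0.8, 1.0]:             # observations of a drifting state
    particles, weights = pf_step(particles, weights, z)
estimate = float(np.sum(weights * particles))   # step 4b: weighted state estimate
```

The estimate lags the latest observation because the measurement noise r outweighs the assumed process noise q, which is exactly the Bayesian trade-off the particle weights encode.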
8. The unmanned vehicle positioning method fusing multi-source sensor information according to claim 1, wherein in step 6, the speed and the angular velocity in the bicycle model satisfy the relations:
ẋ = v·cos(ψ + β)
ẏ = v·sin(ψ + β)
ψ̇ = v·cos(β)·(tan(δ_f) − tan(δ_r)) / (l_f + l_r)
wherein β is the slip angle, ẋ is the speed in the horizontal direction, ẏ is the speed in the vertical direction, ψ̇ is the angular velocity of the heading angle, v is the speed of the center of mass, ψ is the heading angle, l_r and l_f are respectively the distances from the center of mass to the rear axle and to the front axle, and δ_f and δ_r are respectively the front wheel deflection angle and the rear wheel deflection angle;
the slip angle β is calculated as:
β = arctan((l_f·tan(δ_r) + l_r·tan(δ_f)) / (l_f + l_r))
the state vector of the bicycle model is:
x_k = [x_k  y_k  ψ_k]^T
wherein x_k, y_k and ψ_k are respectively the abscissa, the ordinate and the heading angle of the vehicle at time k; the wheel radius r, the vehicle wheel base b and the gear ratio N remain unchanged, the front wheel deflection angle δ_{r,k} and the wheel rotation angular speed ω_k serve as the system inputs, and dt is the time interval;
the state equation of the bicycle model propagates the state with the relations above, the vehicle speed v_k being derived from the wheel radius, the wheel rotation angular speed and the gear ratio:
x_k = [x_{k-1} + v_k·cos(ψ_{k-1})·dt,  y_{k-1} + v_k·sin(ψ_{k-1})·dt,  ψ_{k-1} + (v_k/b)·tan(δ_{r,k})·dt]^T,  v_k = r·ω_k/N
the observation of the bicycle model is:
z_k = [x_{g,k}  y_{g,k}]^T
wherein x_{g,k} and y_{g,k} are respectively the abscissa and the ordinate of the vehicle observed by GPS at time k;
the observation equation of the bicycle model is:
z_k = [x_k  y_k]^T
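One propagation step of a rear-axle kinematic bicycle of this form can be sketched as follows (illustrative Python; the speed relation v = r·ω/N, the Euler discretization and the value of dt are assumptions reconstructed around the claim, and the filing's exact state equation may differ):

```python
import math

def bicycle_step(x, y, psi, delta_f, omega_wheel, r=0.425, b=0.8, N=5.0, dt=0.1):
    """One Euler step: the speed comes from wheel radius r, wheel angular
    speed and gear ratio N (assumed relation); heading rate from steering."""
    v = r * omega_wheel / N
    return (
        x + v * math.cos(psi) * dt,
        y + v * math.sin(psi) * dt,
        psi + (v / b) * math.tan(delta_f) * dt,
    )

# zero steering: the vehicle should travel straight along its heading
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = bicycle_step(*state, delta_f=0.0, omega_wheel=10.0)
```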
9. The method according to claim 8, wherein the process of obtaining the unmanned vehicle positioning algorithm fusing multi-source sensor information under the bicycle model in step 7 specifically comprises the following steps:
Step 701: obtaining the optimal estimates of the vehicle under the bicycle model with the extended Kalman filter algorithm, the unscented Kalman filter algorithm and the particle filter algorithm respectively, based on a public data set;
Step 702: performing error analysis on the coordinates and the heading angle of the vehicle, and calculating the RMS errors corresponding to the abscissa, the ordinate and the heading angle of the vehicle, the RMS error formula being:
error_rms = |X − X_est|
wherein error_rms is the filtered error, X is the true value, including the true values of the abscissa, the ordinate and the heading angle of the vehicle, and X_est is the corresponding optimal estimate of the abscissa, the ordinate and the heading angle of the filtered vehicle;
Step 703: averaging each RMS error to realize normalization;
Step 704: adding the three normalized RMS errors of each filtering algorithm into a score, evaluating the performance of the three filtering algorithms by comparing their scores, and selecting the algorithm with the minimum score as the optimal positioning algorithm;
Step 705: obtaining the fused positioning result under the bicycle model based on the optimal positioning algorithm.
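Steps 703 and 704 can be sketched as follows (illustrative Python; normalizing each error series by its mean across algorithms is one plausible reading of the mean-value normalization, and the per-algorithm RMS values are hypothetical):

```python
import numpy as np

def score(rms_x, rms_y, rms_psi):
    """Normalize each RMS-error series by its mean across the three
    algorithms, then sum the normalized errors into one score each."""
    norm = lambda e: np.asarray(e, dtype=float) / np.mean(e)
    return norm(rms_x) + norm(rms_y) + norm(rms_psi)

algorithms = ["EKF", "UKF", "PF"]
scores = score(rms_x=[0.30, 0.20, 0.25],
               rms_y=[0.28, 0.22, 0.24],
               rms_psi=[0.050, 0.030, 0.045])
best = algorithms[int(np.argmin(scores))]   # step 704: minimum score wins
```

Normalizing before summing keeps the small heading-angle errors (radians) from being swamped by the larger coordinate errors (meters).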
10. The method according to claim 9, wherein in step 701, when the bicycle model is filtered by each filtering algorithm, the initial state vector x_0 takes the true value, the wheel radius r is taken as 0.425, the vehicle wheel base b is taken as 0.8, the gear ratio N is taken as 5, and the process noise matrix Q, the observation noise matrix R and the initial error covariance matrix P_0 are initialized as diagonal matrices (numerical values not recoverable from this extraction).