CN115577320A - Multi-sensor asynchronous data fusion method based on data interpolation - Google Patents

Multi-sensor asynchronous data fusion method based on data interpolation

Info

Publication number
CN115577320A
CN115577320A (application CN202211263372.4A)
Authority
CN
China
Prior art keywords
data
sensor
time
interpolation
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211263372.4A
Other languages
Chinese (zh)
Inventor
张利国
姚贤胜
辛乐
邓恒
李小龙
魏金碧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202211263372.4A
Publication of CN115577320A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/11: Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Operations Research (AREA)
  • Computing Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a multi-sensor asynchronous data fusion processing method based on data interpolation and designs a positioning framework comprising a dead reckoning module, a data buffering module, a time index module, and a data interpolation module. The dead reckoning module estimates the pose according to the vehicle's motion model. After each sensor acquires measurement information, the data buffering module caches it. The time index module achieves accurate matching of the timestamps of the multi-sensor measurements, and the data interpolation module together with the state estimator yields vehicle position information that is as accurate as possible. The effectiveness of the method is verified on a purpose-built unmanned-driving simulation platform; experimental results show that the positioning algorithm implemented with the proposed framework achieves high accuracy.

Description

A Multi-Sensor Asynchronous Data Fusion Method Based on Data Interpolation

Technical Field

The invention relates to a multi-sensor asynchronous data fusion processing method based on data interpolation, and belongs to the field of multi-sensor fusion.

Background

Driven by the needs of social development and the advance of modern science and technology, autonomous driving for intelligent vehicles has received extensive attention from both academia and industry, and multi-sensor information fusion systems for complex application settings are continually being researched and deployed. In intelligent transportation scenarios, the core enabling technology for the autonomous operation of intelligent vehicles is autonomous positioning. The information provided by a single sensor can no longer meet the required positioning accuracy, so multi-sensor data fusion positioning has become one of the important research directions in the intelligent-vehicle field: a multi-sensor information fusion system incorporating radar, an inertial measurement unit (IMU), and other sensors must be used to obtain diverse observations and, through fusion processing, provide the vehicle position and thereby accomplish the vehicle positioning task.

In the actual operation of a multi-sensor information fusion system, however, both the system itself and the sensors' working environment cause the observed data to be asynchronous in time; among the various causes, some are deterministic and some random, some arise in the measurement process and some in data transmission. If multi-sensor data are fused without time registration, large fusion errors result, degrading the overall performance of the multi-sensor system.

To remedy these shortcomings, time registration of sampled data has gradually become a research hotspot. Within time-registration research, the mismatch of sampled data caused by differing sampling periods has been studied extensively, and several effective methods have emerged, chiefly interpolation-extrapolation, the least-squares virtual-fusion method, interpolation, curve fitting, and serial merging. Interpolation-extrapolation and the least-squares virtual-fusion method are the most common in practice, but both assume uniform linear target motion over the processing interval, so they suit targets whose speed is constant or varies slowly.

Therefore, the present invention proposes a multi-sensor asynchronous data fusion processing method based on data interpolation, which combines time synchronization with data fusion and constitutes an improved positioning algorithm. Compared with traditional filtering algorithms, the key difference of the proposed fusion method is that it precisely matches multi-sensor data to the same timestamp, rather than adopting the conventional nearest-timestamp strategy used in ROS. Its advantage is that the timestamps of multi-sensor data are matched accurately, improving the overall fusion effect and hence the accuracy of autonomous vehicle positioning.

Summary of the Invention

The present invention proposes a multi-sensor asynchronous data fusion processing method based on data interpolation. With sensors sampling at asynchronous instants, time indexing and data interpolation are applied to the timestamps of the raw on-board sensor observations to obtain, as precisely as possible, the other sensors' data matched to the same timestamp. The invention improves the accuracy of autonomous vehicle positioning by improving the existing multi-sensor data fusion algorithm.

The invention mainly comprises a dead reckoning module, a time index module, a data interpolation module, and a data fusion module. We assume the noise of each observation is mutually independent; at each sampling instant, the data interpolation module can access the data in the dead reckoning module and the data buffer module to interpolate the sampled data. The specific implementation steps are as follows.

Step 1. Perform dead reckoning for the prior state estimate

Assume the smart vehicle operates in an ideal, horizontal two-dimensional environment and that the system state vector is the vehicle pose. The pose at time k is $x_k = (X_k, Y_k, \theta_k)^T$, where $X_k, Y_k$ denote the position of the vehicle's geometric center and $\theta_k$ its attitude in the navigation coordinate frame. The pose transformation matrix from the vehicle (body) frame to the navigation frame is:

$$R(\theta_k) = \begin{bmatrix} \cos\theta_k & -\sin\theta_k & 0 \\ \sin\theta_k & \cos\theta_k & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

To estimate the mobile robot's pose step by step, the EKF uses the wheel odometry data for the state prediction and the attitude data provided by the IMU for the measurement update.

The wheel odometry provides the control input $u_{k+1} = (V_{Odo,X}, V_{Odo,Y}, \omega_{Odo})^T$ for the prediction step, and the vehicle motion model (i.e., the pose update equation) is:

$$f(x_k, u_{k+1}) = \begin{bmatrix} X_k + \Delta t\,(V_{Odo,X}\cos\theta_k - V_{Odo,Y}\sin\theta_k) \\ Y_k + \Delta t\,(V_{Odo,X}\sin\theta_k + V_{Odo,Y}\cos\theta_k) \\ \theta_k + \Delta t\,\omega_{Odo} \end{bmatrix}$$

The pose estimate at time k+1 predicted by the motion model is:

$$\hat{x}^-_{k+1} = f(\hat{x}_k,\, u_{k+1})$$

where $\hat{x}^-_{k+1}$ denotes the pose estimate at time k+1 predicted by the motion model; the symbol ^ marks an estimated value and the superscript $-$ marks a predicted value.

The covariance matrix of the prior estimate of the predicted state vector is:

$$P^-_{k+1} = F_x P_k F_x^T + F_w Q_{k+1} F_w^T$$

The Jacobian matrices of the motion model and of the motion noise are, respectively:

$$F_x = \frac{\partial f}{\partial x} = \begin{bmatrix} 1 & 0 & -\Delta t\,(V_{Odo,X}\sin\theta_k + V_{Odo,Y}\cos\theta_k) \\ 0 & 1 & \Delta t\,(V_{Odo,X}\cos\theta_k - V_{Odo,Y}\sin\theta_k) \\ 0 & 0 & 1 \end{bmatrix}$$

$$F_w = \frac{\partial f}{\partial w} = \Delta t \begin{bmatrix} \cos\theta_k & -\sin\theta_k & 0 \\ \sin\theta_k & \cos\theta_k & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

(with the motion noise taken to enter in the vehicle frame, like the control input)

For wheel-odometry measurements, typical motion-noise values are $w_x = w_y = 0.1\ \mathrm{m/m}$ and $w_\theta = 1°/\mathrm{m}$, so

$$Q_{k+1} = \operatorname{diag}(w_x^2,\; w_y^2,\; w_\theta^2)$$
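For illustration, the prediction step above can be sketched in a few lines of NumPy. This is a minimal sketch, not the invention's implementation: the function name and the assumption that the motion noise enters in the vehicle frame, like the control input, are illustrative.

```python
import numpy as np

def ekf_predict(x, P, u, Q, dt):
    """EKF prediction step for the planar dead-reckoning model.

    x: pose estimate (X, Y, theta); P: its 3x3 covariance.
    u: odometry control (Vx, Vy, omega) in the vehicle frame.
    Q: 3x3 motion-noise covariance, e.g. diag(wx**2, wy**2, wtheta**2).
    """
    X, Y, th = x
    vx, vy, w = u
    c, s = np.cos(th), np.sin(th)
    # Motion model: rotate body-frame velocities into the navigation frame.
    x_pred = np.array([X + dt * (vx * c - vy * s),
                       Y + dt * (vx * s + vy * c),
                       th + dt * w])
    # Jacobian of the motion model with respect to the state.
    Fx = np.array([[1.0, 0.0, -dt * (vx * s + vy * c)],
                   [0.0, 1.0,  dt * (vx * c - vy * s)],
                   [0.0, 0.0,  1.0]])
    # Jacobian with respect to the motion noise (assumed to enter like the control).
    Fw = dt * np.array([[c, -s, 0.0],
                        [s,  c, 0.0],
                        [0.0, 0.0, 1.0]])
    P_pred = Fx @ P @ Fx.T + Fw @ Q @ Fw.T
    return x_pred, P_pred
```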

Step 2. Perform time indexing based on data timestamps

In a multi-sensor system, sensor data from the different sensor subsystems are time-aligned against a common reference time base. The reference time value can be communicated to a sensor subsystem so that it stamps its sensor data against that reference; the data from a given sensor subsystem can then be time-calibrated by applying a calibration strategy to them.

The data buffer module stores each sensor's data independently, in chronological order, in its own container; each container thus forms a timeline. First, a core sensor is selected (here, the odometer), and its data timestamps serve as the reference time. Then a time-index operation is performed on the other sensors' data: the odometer sampling instant is located on each other sensor's timeline, and the frames immediately before and after it are locked so that data interpolation can proceed.
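As a minimal sketch, such containers can be modeled as per-sensor deques; the class name SensorBuffer and the capacity are illustrative, not part of the invention.

```python
from collections import deque

class SensorBuffer:
    """Per-sensor chronological buffer; one instance per sensor timeline."""
    def __init__(self, maxlen=1000):
        self.data = deque(maxlen=maxlen)  # (timestamp, measurement) pairs

    def push(self, stamp, measurement):
        # Measurements arrive in time order, so appending keeps the buffer sorted.
        self.data.append((stamp, measurement))

buffers = {"odom": SensorBuffer(), "imu": SensorBuffer()}
```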

When traversing a non-core sensor's data container, the ideal case is that the timestamp of the first datum in the container is earlier than the reference time while that of the second is later. Some abnormal cases must also be handled when time-indexing non-core sensor data: if the non-core sensor drops frames or its timestamps are anomalous, the error can grow and even break the program. To guarantee that the indexed data are correct, the following constraints are imposed:

Condition 1: if the timestamp of the first datum in the container is later than the reference time, interpolating here could not obtain a datum preceding the reference time, i.e., there is nothing to interpolate from; the traversal should be exited and the core sensor's next data timestamp selected as the reference time.

Condition 2: given that the timestamp of the first datum in the container is earlier than the reference time, the timestamp of the second datum must be later than the reference time; otherwise the first datum is meaningless, the traversal should continue, and the first datum should be deleted.

Condition 3: if conditions 1 and 2 hold but the difference between the first datum's timestamp and the reference time exceeds the set threshold, a data frame was probably lost at that moment; the traversal should be exited and the core sensor's next data timestamp selected as the reference time.

Condition 4: if conditions 1, 2, and 3 hold but the difference between the second datum's timestamp and the reference time exceeds the set threshold, a data frame was probably lost at that moment; the traversal should be exited and the core sensor's next data timestamp selected as the reference time.

If the traversal satisfies all four constraints simultaneously, the time index is considered complete, and the position should be locked for data interpolation, as in the sketch below.
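A minimal sketch of this traversal under the four conditions, reusing the SensorBuffer above; max_gap stands for the dropped-frame threshold, and the names are illustrative:

```python
def time_index(buffer, t_ref, max_gap):
    """Walk a non-core sensor buffer and lock the frame pair bracketing t_ref.

    Returns the (earlier, later) frames on success, or None when a condition
    fails and the core sensor's next timestamp should be tried instead.
    """
    data = buffer.data
    while len(data) >= 2:
        t0, m0 = data[0]
        t1, m1 = data[1]
        if t0 > t_ref:              # condition 1: nothing before the reference time
            return None
        if t1 <= t_ref:             # condition 2: first frame is obsolete, drop it
            data.popleft()
            continue
        if t_ref - t0 > max_gap:    # condition 3: frame probably dropped before t_ref
            return None
        if t1 - t_ref > max_gap:    # condition 4: frame probably dropped after t_ref
            return None
        return (t0, m0), (t1, m1)   # all four conditions met: lock this position
    return None
```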

Step 3. Perform data interpolation based on adjacent frame data

Once the time index is complete and the position locked, the data interpolation module applies a quaternion attitude interpolation based on spherical vectors, using the IMU frames immediately before and after the locked position to interpolate the attitude smoothly.

The transitional rotation from the initial attitude to the final attitude is a quaternion $q_m$. It can be represented by two rotation vectors $\vec{r}_s, \vec{r}_e$ on the unit sphere, both perpendicular to the transition rotation axis $\vec{n}$ and separated by the rotation angle θ. As one vector rotates onto the other it sweeps an arc on the unit sphere, and the two vectors together with this arc represent the transitional rotation $q_m$. Distributing points uniformly along the arc yields a series of uniformly spaced transitional rotations $q_{m,i}$ which, superimposed on the initial attitude, give a series of uniform interpolated attitudes $q_i$.

The specific steps of the quaternion attitude interpolation are given below.

Step 1. Obtain the rotation-axis vector and rotation angle of the transitional rotation from the given initial and final attitudes; the relevant equation is $q_m = q_s^{-1} \cdot q_e$.

Step 2. Using the parametric equation of a spatial arc, take a series of uniform points on the rotation arc; the spatial arc on the unit sphere can be expressed as:

$$\vec{r}(t) = \frac{\sin\big((1-t)\theta\big)}{\sin\theta}\,\vec{r}_s + \frac{\sin(t\theta)}{\sin\theta}\,\vec{r}_e, \qquad t \in [0,\,1]$$

where $\vec{n}$ is the rotation-axis vector of the transitional rotation, θ is the rotation angle, and $\vec{n}$ is also the normal vector of the plane containing the arc; once the initial and final attitudes are given, these values are fixed. t is the spatial-arc parameter: sampling t uniformly yields a series of uniformly spaced points on the arc.

Step 3. Starting from the arc's initial vector $\vec{r}_s$, select points uniformly from the start of the arc to its end to obtain a series of vectors $\vec{r}_i$, and homogenize the transitional rotation quaternion $q_m$, i.e.:

$$q_{m,i} = \Big(\cos\frac{t_i\theta}{2},\; \vec{n}\,\sin\frac{t_i\theta}{2}\Big)$$

and convert these quaternions, expressed with the rotation axis orthogonal to the vectors $\vec{r}_i$, from that orthogonal format into the ordinary, non-orthogonal quaternion format.

Step 4. Obtain the final series of interpolated quaternion attitudes, namely:

$$q_i = q_s \cdot q_{m,i}$$
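For illustration, a compact NumPy sketch of this interpolation is given below. It is mathematically equivalent to spherical linear interpolation between $q_s$ and $q_e$; the function names are chosen for the sketch, not taken from the source. With t = (t_ref - t0) / (t1 - t0), where t0 and t1 are the timestamps of the locked IMU frames, it returns the attitude at the reference time.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def interpolate_attitude(q_s, q_e, t):
    """Attitude between unit quaternions q_s and q_e at arc parameter t in [0, 1]."""
    q_s, q_e = np.asarray(q_s, float), np.asarray(q_e, float)
    conj = np.array([q_s[0], -q_s[1], -q_s[2], -q_s[3]])  # unit quaternion: conj = inverse
    q_m = quat_mul(conj, q_e)                    # transitional rotation q_m = q_s^-1 * q_e
    half = np.arccos(np.clip(q_m[0], -1.0, 1.0)) # half of the transition angle, theta/2
    axis = q_m[1:]
    n = np.linalg.norm(axis)
    if n < 1e-12:                                # negligible rotation: keep the start attitude
        return q_s
    axis = axis / n
    # Scale the transition angle by t, then compose with the initial attitude.
    q_mt = np.concatenate(([np.cos(t * half)], np.sin(t * half) * axis))
    return quat_mul(q_s, q_mt)
```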

Step 4. Build an extended Kalman filter for data fusion

In the intelligent connected-vehicle setting, every smart vehicle is equipped with an inertial measurement unit (IMU), and the multi-sensor module supplies the raw state observations of the on-board sensors as the observation space. With the IMU attitude measurement $\theta_{k+1,\mathrm{IMU}}$ at time k+1, the observation model is:

$$Z_{k+1} = \theta_{k+1} + v_{k+1} \qquad (11)$$

At each instant, the inertial measurement unit and the odometry both return redundant observations of the same parameter (the attitude angle); the Kalman gain weighs the two observations according to their uncertainties. The Kalman gain at time k+1 is:

$$K_{k+1} = P^-_{k+1} H_{k+1}^T \big(H_{k+1} P^-_{k+1} H_{k+1}^T + R_{k+1}\big)^{-1} \qquad (12)$$

where $H_{k+1}$, the Jacobian of the observation model, is an identity-like matrix (the attitude is observed directly), and $R_{k+1}$ is the covariance matrix of the observation noise, whose magnitude is normally given by the manufacturer's accuracy specification or estimated from experimental statistics; here it is first assumed that

$$R_{k+1} = \sigma_\theta \qquad (13)$$

where $\sigma_\theta$ denotes the variance of the IMU's attitude observation noise.

So:

$$K_{k+1} = P^-_{k+1} H_{k+1}^T \big(H_{k+1} P^-_{k+1} H_{k+1}^T + \sigma_\theta\big)^{-1} \qquad (14)$$

The posterior estimate of the state variable is:

$$\hat{x}_{k+1} = \hat{x}^-_{k+1} + K_{k+1}\big(Z_{k+1} - H_{k+1}\hat{x}^-_{k+1}\big) \qquad (15)$$

The covariance matrix of the posterior state estimate is:

$$P_{k+1} = (I - K_{k+1} H_{k+1})\, P^-_{k+1} \qquad (16)$$
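A minimal sketch of this measurement update under the stated assumptions: the attitude is observed directly, so the observation Jacobian reduces to H = [0 0 1], and the names are illustrative. The attitude innovation is wrapped into (-π, π] to avoid the ±180° discontinuity.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z_theta, sigma_theta):
    """EKF update with the interpolated IMU attitude observation.

    x_pred, P_pred: prior state estimate and covariance from the prediction step.
    z_theta: interpolated attitude observation; sigma_theta: its noise variance.
    """
    H = np.array([[0.0, 0.0, 1.0]])                # only the yaw component is observed
    S = H @ P_pred @ H.T + sigma_theta             # innovation covariance (1x1)
    K = P_pred @ H.T / S                           # Kalman gain (3x1)
    # Wrap the attitude innovation into (-pi, pi].
    innov = np.arctan2(np.sin(z_theta - x_pred[2]), np.cos(z_theta - x_pred[2]))
    x_post = x_pred + (K * innov).ravel()
    P_post = (np.eye(3) - K @ H) @ P_pred
    return x_post, P_post
```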

Brief Description of the Drawings

Fig. 1 is a flowchart of the multi-sensor asynchronous data fusion processing method based on data interpolation.

Fig. 2 shows the kinematic model of the TurtleBot3 mobile robot.

Fig. 3 shows the structure of the unmanned-driving simulation platform.

Fig. 4 shows the trajectory of the fusion algorithm without a time synchronization strategy.

Fig. 5 shows the trajectories of the fusion algorithms under different time synchronization strategies.

Fig. 6 compares the yaw-angle variation of the control groups.

Detailed Description

The multi-sensor asynchronous data fusion processing method based on data interpolation of the present invention is described in further detail below with reference to the figures.

The algorithmic model of the present invention rests on the following assumptions:

Assumption 1: all controlled vehicles are smart vehicles with perception, computation, control, and communication capabilities;

Assumption 2: the observations of the vehicle sensors are mutually independent;

Assumption 3: the noise of each observation follows a Gaussian distribution.

In this embodiment a mobile robot simulates vehicle driving behavior indoors, relying only on on-board sensors for the vehicle pose information. Experimental control groups are set up and the associated computations are programmed in Matlab, in order to verify the performance of the mobile-robot positioning system.

Step 1. Construction of the experimental platform

To simulate normal driving in the intelligent connected-vehicle setting, this example builds an unmanned-driving simulation platform around the TurtleBot3. TurtleBot3, the third generation of the TurtleBot series, is a small, programmable, cost-effective ROS-based mobile robot driven by Dynamixel smart actuators; its kinematic model is shown in Fig. 2. The robot is equipped with an inertial measurement unit (IMU) and wheel odometry (Odom), and the built-in system packages provide the vehicle's velocity, angular velocity, acceleration, and other state information in real time. The IMU runs at 100 Hz and the odometry at 10 Hz. Following the vehicle's differential-drive kinematics, this example chooses the following secant model as the mobile robot's kinematic equation:

$$\begin{aligned} x_{k+1} &= x_k + v_k\,\Delta t\,\cos\!\Big(\theta_k + \tfrac{1}{2}\omega_k \Delta t\Big) \\ y_{k+1} &= y_k + v_k\,\Delta t\,\sin\!\Big(\theta_k + \tfrac{1}{2}\omega_k \Delta t\Big) \\ \theta_{k+1} &= \theta_k + \omega_k \Delta t \end{aligned}$$

where k is the current time index, x and y are the current vehicle coordinates, θ is the current heading angle, $v_k$ and $\omega_k$ are the linear and angular velocities, and Δt is the interval between adjacent instants.
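A one-step sketch of this secant model (a minimal illustration; the function name and the explicit v, ω arguments are choices made for the sketch):

```python
import numpy as np

def secant_step(x, y, theta, v, omega, dt):
    """Advance the pose one step using the midpoint-heading (secant) model."""
    theta_mid = theta + 0.5 * omega * dt     # heading at the middle of the interval
    x_next = x + v * dt * np.cos(theta_mid)
    y_next = y + v * dt * np.sin(theta_mid)
    theta_next = theta + omega * dt
    return x_next, y_next, theta_next
```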

Step 2. Vehicle state estimation

The unmanned-driving platform is built on ROS; by writing the corresponding packages it performs real-time monitoring of the system environment and control of the driving behavior. The platform structure is shown in Fig. 3.

As stated above, this example uses the vehicle position and attitude as the state quantities, so the estimated vector is $X = [x\ y\ \theta]^T$.

To obtain the noise characteristics of each sensor, before the experiment began the vehicle was left stationary on the field and sampled at equal intervals over a long period; computing statistics over these data yields the state-quantity results shown in Table 1.

Table 1. Noise parameters

[Table 1 is rendered as an image in the source; its numeric values are not reproduced here.]

When the odometer is used to obtain velocity while the vehicle is stationary, it exhibits a fixed noise of about 5×10⁻⁴ m/s; driving the vehicle at a constant speed of 0.1 m/s, the standard deviation of the velocity observation error is measured to be about 3×10⁻³ m/s. When estimating the vehicle position, these deviations are fed into the state estimator as the noise standard deviations.
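Such noise statistics can be reproduced from a stationary log with a short computation (a sketch under the assumption of equally spaced samples; the function name is illustrative):

```python
import numpy as np

def noise_std(samples):
    """Sample standard deviation of one logged state quantity.

    samples: equally spaced readings recorded while the vehicle is at rest
    (or moving at constant speed); the result is used as the noise standard
    deviation fed to the state estimator (cf. Table 1).
    """
    return float(np.std(np.asarray(samples, dtype=float), ddof=1))
```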

Step 3. Positioning accuracy test

To verify the positioning accuracy of the fusion positioning algorithm based on wheel odometry and the IMU, eight sampled target points were published in the mobile robot's ROS system: a(2.000, 0.000), b(2.000, 1.000), c(0.000, 1.000), d(0.000, 2.000), e(2.000, 2.000), f(2.000, 3.000), g(1.000, 3.000), h(1.000, 1.000). The smart vehicle starts from the origin of the coordinate frame and passes through these eight target points in turn; the trajectory between target points is planned autonomously by the navigation algorithm, and the positioning accuracy at the target points is used to estimate the positioning accuracy of the motion. The trajectories of the positioning methods are shown in Figs. 4 and 5. In Fig. 4, control group ① is the trajectory of the fusion positioning algorithm without a time synchronization strategy, the odom trajectory is that of the wheel odometry, and the black trajectory is the desired trajectory. In Fig. 5, control group ② is the trajectory of the fusion positioning algorithm using the nearest-timestamp synchronization strategy, and control group ③ is the trajectory of the fusion positioning algorithm using the data-interpolation synchronization strategy. The specific coordinates of each point on the different trajectories are listed in Table 2, and the positioning accuracy of the three time synchronization strategies, computed as standard deviations, is listed in Table 3.

Table 2. Coordinate values of each point on the different trajectories

[Table 2 is rendered as an image in the source; its numeric values are not reproduced here.]

Table 3. Accuracy of the sampled coordinate points under the various time synchronization strategies

[Table 3 is rendered as an image in the source; its numeric values are not reproduced here.]

Step 4. Yaw angle change test

To verify the effectiveness of the present invention for attitude estimation, the experimental data of the three control groups from Step 3 were recorded and the yaw-angle variation curves plotted; the results are shown in Fig. 6, and the yaw-angle values of each control group are listed in Table 3. In the initial stage of the run, i.e., while heading toward point 1, there is no change in attitude angle, so the positioning results of the three control groups differ little. As the vehicle subsequently turns, the attitude angle changes and control group ① accumulates error continuously: at point 3 its yaw-angle error already exceeds 10°, and as the vehicle continues to drive, the yaw angle of control group ① gradually loses meaning. The yaw-angle values of control groups ② and ③ both track the actual variation fairly accurately (the instantaneous jumps in the figure are caused by small changes across the −180°/180° boundary), which demonstrates the effectiveness of the spherical-vector-based quaternion attitude interpolation for attitude fusion.

Table 3. Yaw-angle values of each point on the different trajectories

[The yaw-angle table is rendered as an image in the source; its numeric values are not reproduced here.]

Step 5. Experimental results

The present invention proposes a multi-sensor data fusion method based on data interpolation to solve the problem of fusing asynchronous vehicle sensor data. Data interpolation is combined with pose estimation so that the data required by the multi-sensor fusion algorithm and the data in the buffers correspond to matching instants. The effectiveness of the proposed data interpolation method was verified on the experimental platform, and the results show that the positioning algorithm implemented with the proposed positioning framework achieves high accuracy.

Claims (6)

1. A multi-sensor asynchronous data fusion processing method based on data interpolation, characterized in that: the system comprises a dead reckoning module, a data buffering module, a time index module, and a data interpolation module; the dead reckoning module estimates the pose according to the vehicle's motion model and inputs it to the state estimator as the prior estimate; the data buffer module receives the measurement information of each vehicle sensor at the current moment and stores it in the corresponding data buffer container in chronological order; the time index module traverses the sensor data in the data buffer container according to the reference time and locks the position; the data interpolation module uses the frames immediately before and after the locked position to interpolate the sensor data, and the interpolated data are input to the state estimator as observation information, realizing the posterior estimation of the vehicle position.
2. A multi-sensor asynchronous data fusion processing method based on data interpolation using the system of claim 1, characterized in that: IMU sensor data are stored with timestamps in a data buffer container; the odometer is selected as the core sensor, its data timestamps are taken as the reference time, and the observation data in the data buffer container, namely the IMU information, are traversed to match the sensor data accurately in time; the data types of the state estimated by the dead reckoning module comprise position coordinates and yaw angle; the data types of the dead reckoning module comprise vehicle running data and vehicle state information measured by the on-board sensors, and the observation model of the vehicle is obtained as:
$Z_{k+1} = \theta_{k+1} + v_{k+1}$  (1)
where $Z_{k+1}$ denotes the system observation at time k+1, $\theta_{k+1}$ denotes the attitude information of the IMU, and $v_{k+1}$ denotes the observation noise at time k+1.
3. The multi-sensor asynchronous data fusion processing method based on data interpolation according to claim 2, characterized in that: the dead reckoning module is a hybrid extended-Kalman-filter structure, and the dead reckoning process is realized as follows:
S1, establishing the corresponding state-space equation and observation equation according to the vehicle's dynamic model and computing the prior predicted value;
S2, performing time indexing according to the data timestamps of the measurement information;
S3, performing data interpolation on adjacent frame data to produce the system observation $Z_{k+1}$;
S4, building an extended Kalman filter from the prior predicted value obtained in S1 and the system observation obtained in S3 to estimate the position and attitude of the vehicle.
4. The multi-sensor asynchronous data fusion processing method based on data interpolation according to claim 3, characterized in that: according to the dynamic model of each vehicle, the vehicle state quantities are estimated with an extended Kalman filter, whose equations are:
$\hat{x}^-_{k+1} = f(\hat{x}_k, u_{k+1})$  (2)
where $x_k = [X_k, Y_k, \theta_k]^T$ denotes the pose of the vehicle at time k, $X_k, Y_k$ denote the position of the vehicle, and $\theta_k$ denotes its attitude in the navigation coordinate frame; $\hat{x}^-_{k+1}$ denotes the pose estimate at time k+1 predicted by the motion model; $u_{k+1} = [V_{Odo,X}, V_{Odo,Y}, \omega_{Odo}]^T$ denotes the control input provided by the wheel odometry for the prediction step; the symbol "^" denotes an estimated value and the symbol "−" a predicted value;
$P^-_{k+1} = F_x P_k F_x^T + F_w Q_{k+1} F_w^T$  (3)
where $P_k$ and $Q_{k+1}$ are the covariance matrices of the pose $x_k$ and the process noise $w_{k+1}$, respectively, and $F_x$ and $F_w$ are the Jacobian matrices of the motion model and of the process noise;
$K_{k+1} = P^-_{k+1} H_{k+1}^T (H_{k+1} P^-_{k+1} H_{k+1}^T + R_{k+1})^{-1}$  (4)
where $H_{k+1}$ denotes the Jacobian matrix of the observation model and $R_{k+1}$ the covariance matrix of the observation noise;
$R_{k+1} = \sigma_\theta$  (5)
where $\sigma_\theta$ denotes the variance of the IMU's attitude observation noise;
$\hat{x}_{k+1} = \hat{x}^-_{k+1} + K_{k+1}(Z_{k+1} - H_{k+1}\hat{x}^-_{k+1})$  (6)
$P_{k+1} = (I - K_{k+1} H_{k+1}) P^-_{k+1}$  (7)
At time k, equation (2) performs state estimation through the vehicle's kinematic model and pose transformation matrix to obtain the prior predicted value $\hat{x}^-_{k+1}$ at time k+1; equation (3) computes the covariance matrix of the prior estimate of the state estimator at time k+1; equation (4) obtains the Kalman gain $K_{k+1}$ at time k+1 from the prior error covariance matrix $P^-_{k+1}$ and the observation noise covariance matrix $R_{k+1}$; equation (5) is the assumed observation noise variance $R_{k+1}$ of the sensor; equation (6) obtains the vehicle state estimate $\hat{x}_{k+1}$ at time k+1 from the observation $Z_{k+1}$ and the prior predicted value $\hat{x}^-_{k+1}$; equation (7) obtains the estimation error covariance matrix $P_{k+1}$ at time k+1 from the Kalman gain $K_{k+1}$ and the prior error covariance matrix $P^-_{k+1}$.
5. The multi-sensor asynchronous data fusion processing method based on data interpolation according to claim 1, characterized in that: the data buffer module stores each sensor's data independently in its own container in chronological order, and the time index module takes the timestamp of the core sensor's data as the reference time and traverses the containers holding the other sensors' data until the corresponding position is found; the traversal rules are as follows:
Condition 1: if the timestamp of the first datum in the container is later than the reference time, data interpolation here could not obtain a datum preceding the reference time, i.e., there is nothing to interpolate from; the traversal exits and the core sensor's next data timestamp is selected as the reference time;
Condition 2: if the constraint that the timestamp of the first datum in the container is earlier than the reference time is met, the timestamp of the second datum must be later than the reference time; otherwise the first datum is meaningless, the traversal continues, and the first datum is deleted;
Condition 3: if conditions 1 and 2 are met but the difference between the first datum's timestamp and the reference time exceeds the set threshold, a data frame was likely lost at that moment; the traversal exits and the core sensor's next data timestamp is selected as the reference time;
Condition 4: if conditions 1, 2, and 3 are met but the difference between the second datum's timestamp and the reference time exceeds the set threshold, a data frame was likely lost at that moment; the traversal exits and the core sensor's next data timestamp is selected as the reference time;
if the traversal satisfies all four constraints simultaneously, the time index is considered complete and the position is locked for data interpolation.
6. The multi-sensor asynchronous data fusion processing method based on data interpolation according to claim 1, characterized in that: the data interpolation module selects a suitable interpolation method according to the frames before and after the position locked by the time index module, so as to match the multi-sensor data accurately in time; for the inertial measurement unit's sensor data, a quaternion attitude interpolation based on spherical vectors is selected, with interpolation equations:
$q_m = q_s^{-1} \cdot q_e$  (8)
$\vec{r}(t) = \dfrac{\sin((1-t)\theta)}{\sin\theta}\,\vec{r}_s + \dfrac{\sin(t\theta)}{\sin\theta}\,\vec{r}_e$  (9)
$q_{m,i} = \big(\cos\tfrac{t_i\theta}{2},\ \vec{n}\sin\tfrac{t_i\theta}{2}\big)$  (10)
$q_i = q_s \cdot q_{m,i}$  (11)
where equation (8) obtains the transitional rotation $q_m$ from the initial attitude $q_s$ and the final attitude $q_e$; equation (9) gives the spatial arc on the unit sphere, on which a series of points is taken by uniformly selecting the parameter t; equation (10) homogenizes the transitional rotation quaternion $q_m$ through the uniformly selected points $\vec{r}_i$; and equation (11) obtains the series of interpolated quaternion attitudes from the initial attitude $q_s$ and the transitional rotations $q_{m,i}$.
CN202211263372.4A 2022-10-15 2022-10-15 Multi-sensor asynchronous data fusion method based on data interpolation Pending CN115577320A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211263372.4A CN115577320A (en) 2022-10-15 2022-10-15 Multi-sensor asynchronous data fusion method based on data interpolation

Publications (1)

Publication Number Publication Date
CN115577320A 2023-01-06

Family

ID=84584184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211263372.4A Pending CN115577320A (en) 2022-10-15 2022-10-15 Multi-sensor asynchronous data fusion method based on data interpolation

Country Status (1)

Country Link
CN (1) CN115577320A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117034201A (en) * 2023-10-08 2023-11-10 东营航空产业技术研究院 Multi-source real-time data fusion method
CN117490705A (en) * 2023-12-27 2024-02-02 合众新能源汽车股份有限公司 Vehicle navigation positioning method, system, device and computer readable medium
CN117490705B (en) * 2023-12-27 2024-03-22 合众新能源汽车股份有限公司 Vehicle navigation positioning method, system, device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination