CN115577320A - Multi-sensor asynchronous data fusion method based on data interpolation - Google Patents
- Publication number
- CN115577320A CN115577320A CN202211263372.4A CN202211263372A CN115577320A CN 115577320 A CN115577320 A CN 115577320A CN 202211263372 A CN202211263372 A CN 202211263372A CN 115577320 A CN115577320 A CN 115577320A
- Authority
- CN
- China
- Prior art keywords
- data
- sensor
- time
- interpolation
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/11—Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
Abstract
Description
Technical Field
The invention relates to a multi-sensor asynchronous data fusion processing method based on data interpolation, and belongs to the field of multi-sensor fusion.
Background Art
Driven by the needs of social development and by modern scientific and technological progress, autonomous driving technology for intelligent vehicles has received broad attention from both academia and industry, and multi-sensor information fusion systems for complex applications are continuously being researched and deployed. In intelligent transportation scenarios, autonomous positioning is the core enabling technology for the autonomous operation of intelligent vehicles. The information provided by a single sensor can no longer satisfy the required positioning accuracy, so multi-sensor data fusion positioning has become one of the important research directions in the intelligent-vehicle field: a multi-sensor information fusion system combining sensors such as radar and an inertial measurement unit (IMU) is used to obtain a variety of observations, which are fused to provide the vehicle position and thereby accomplish vehicle localization.
In the actual operation of a multi-sensor information fusion system, however, both the system itself and the sensors' working environment cause the observed data to be asynchronous in time. Among the various causes, some are deterministic and some are random; some arise in the measurement process and some in data transmission. If multi-sensor data are fused without time registration, large fusion errors are produced, which degrades the overall performance of the multi-sensor system.
To remedy these shortcomings, time registration of sampled data has gradually become a research hotspot. Within time-registration research, the mismatch of sampled data caused by different sampling periods has been studied extensively, and several effective methods have been obtained, chiefly the interpolation-extrapolation method, the least-squares virtual fusion method, interpolation, curve fitting, and serial merging. In practice the interpolation-extrapolation method and the least-squares virtual fusion method are the most common, but both assume a uniform straight-line target motion model within the processing interval, so they suit mainly targets whose speed is constant or changes slowly.
The present invention therefore proposes a multi-sensor asynchronous data fusion processing method based on data interpolation, an improved positioning algorithm that combines time synchronization with data fusion. Compared with traditional filtering algorithms, its key difference is that it precisely matches multi-sensor data carrying the same timestamp, instead of relying on the conventional nearest-timestamp strategy used in ROS. The advantage of this fusion processing method is that it accurately aligns the timestamps of multi-sensor data and improves the overall fusion result, thereby improving the accuracy of autonomous vehicle positioning.
Summary of the Invention
The invention proposes a multi-sensor asynchronous data fusion processing method based on data interpolation. When the sensors sample asynchronously, the timestamps of the raw on-board sensor observations are time-indexed and the data are interpolated so that sensor data matching the same timestamp are obtained as accurately as possible. By improving the existing multi-sensor data fusion algorithm, the invention improves the accuracy of autonomous vehicle positioning.
The invention mainly comprises a dead-reckoning module, a time-index module, a data-interpolation module, and a data-fusion module. We assume that the noise of each observation is mutually independent. At each sampling instant, the data-interpolation module can access the data in the dead-reckoning module and the data-buffer module to interpolate the sampled data. The concrete implementation steps are as follows.
Step 1: Dead reckoning for the prior state estimate

Assume the working area of the intelligent vehicle is an ideal horizontal two-dimensional environment and that the system state vector is the vehicle pose. The pose at time k is x_k = (X_k, Y_k, θ_k)^T, where X_k, Y_k are the position of the vehicle's geometric center and θ_k is its attitude in the navigation frame. The pose transformation matrix from the body frame to the navigation frame is the planar rotation

T_k = [[cos θ_k, -sin θ_k, 0], [sin θ_k, cos θ_k, 0], [0, 0, 1]].
To estimate the pose of the mobile robot step by step, the EKF uses the wheel-odometry data for the state prediction and the attitude data provided by the IMU for the measurement update.

The wheel odometry supplies the control input u_{k+1} = (V_{Odo,X}, V_{Odo,Y}, ω_{Odo}) for the prediction step. The motion model of the vehicle (i.e., the pose update equation) rotates the body-frame velocities into the navigation frame:

X_{k+1} = X_k + (V_{Odo,X} cos θ_k - V_{Odo,Y} sin θ_k) Δt
Y_{k+1} = Y_k + (V_{Odo,X} sin θ_k + V_{Odo,Y} cos θ_k) Δt
θ_{k+1} = θ_k + ω_{Odo} Δt

The pose estimate at time k+1 predicted by the motion model is x^-_{k+1} = f(x^_k, u_{k+1}), where the caret ^ denotes an estimated value and the superscript - denotes a predicted (prior) value.
The covariance matrix of the prior estimate of the predicted state vector is

P^-_{k+1} = A_{k+1} P_k A_{k+1}^T + W_{k+1} Q_{k+1} W_{k+1}^T,

where A_{k+1} is the Jacobian of the motion model with respect to the state and W_{k+1} is its Jacobian with respect to the motion noise. For wheel-odometry measurements, typical motion-noise values are w_x = w_y = 0.1 m/m and w_θ = 1°/m, so the process-noise covariance may be taken as Q = diag(w_x², w_y², w_θ²).
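As an illustration, the prediction step can be sketched in Python as follows. This is a minimal sketch assuming the standard planar motion model; the function name, array layout, and noise parametrization are ours, not taken from the patent.

```python
import numpy as np

def ekf_predict(x, P, u, dt, Q):
    """EKF prediction step for the pose x = (X, Y, theta) driven by the
    wheel-odometry control u = (Vx, Vy, omega)."""
    X, Y, th = x
    Vx, Vy, om = u
    c, s = np.cos(th), np.sin(th)
    # Motion model: body-frame velocities rotated into the navigation frame.
    x_pred = np.array([
        X + (Vx * c - Vy * s) * dt,
        Y + (Vx * s + Vy * c) * dt,
        th + om * dt,
    ])
    # Jacobian of the motion model with respect to the state.
    A = np.array([
        [1.0, 0.0, (-Vx * s - Vy * c) * dt],
        [0.0, 1.0, ( Vx * c - Vy * s) * dt],
        [0.0, 0.0, 1.0],
    ])
    # Jacobian with respect to the motion noise (noise enters via the control).
    W = np.array([
        [c * dt, -s * dt, 0.0],
        [s * dt,  c * dt, 0.0],
        [0.0,    0.0,     dt],
    ])
    P_pred = A @ P @ A.T + W @ Q @ W.T
    return x_pred, P_pred
```

Driving straight along the x axis for one second (u = (1, 0, 0), theta = 0) moves the predicted state to (1, 0, 0) while the covariance grows by the propagated process noise.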
Step 2: Time indexing based on data timestamps
In a multi-sensor system, sensor data from different sensor subsystems are time-aligned with respect to a common reference time base. A reference time value can be transmitted to each sensor subsystem so that it stamps its sensor data against that reference; the data from a given sensor can then be time-calibrated by applying a calibration strategy to them.

The data-buffer module stores the data of each sensor independently, in chronological order, in its own container; each container thus amounts to a timeline. First, a core sensor (the odometry sensor) is selected, and its data timestamps serve as the reference time. Then a time-index operation is performed on the data of the other sensors: the odometry acquisition instant is located on each other sensor's timeline, and the frames immediately before and after it are locked so that data interpolation can proceed.
When traversing the data in a non-core sensor's container, the ideal situation is that the timestamp of the first datum in the container is earlier than the reference time while the timestamp of the second datum is later. Some abnormal situations must, however, be considered: if the non-core sensor drops frames or its timestamps are abnormal, the error may grow and even break the program's function. To guarantee that the indexed data are correct, the following constraints are imposed.

Condition 1: If the timestamp of the first datum in the container is later than the reference time, then interpolating here would require data from before the reference time that do not exist, i.e., there is nothing to interpolate from. The traversal should be abandoned and the core sensor's next data timestamp chosen as the reference time.

Condition 2: Given that the first datum's timestamp is earlier than the reference time, the second datum's timestamp must be later than the reference time; otherwise the first datum is useless, the traversal should continue, and the first datum should be deleted.

Condition 3: If Conditions 1 and 2 are satisfied but the difference between the first datum's timestamp and the reference time exceeds the set threshold, frames were probably dropped at that instant. The traversal should be abandoned and the core sensor's next data timestamp chosen as the reference time.

Condition 4: If Conditions 1, 2, and 3 are satisfied but the difference between the second datum's timestamp and the reference time exceeds the set threshold, frames were likewise probably dropped. The traversal should be abandoned and the core sensor's next data timestamp chosen as the reference time.

If the traversal satisfies all four constraints simultaneously, the time index is considered complete, and the bracketing position should be locked for data interpolation.
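The four conditions amount to a small search over each non-core sensor's buffer. The sketch below illustrates one way to implement it, assuming each buffered frame is a (timestamp, data) tuple; the function name and the threshold parameter are illustrative, not from the patent.

```python
from collections import deque

def time_index(buffer, t_ref, max_gap):
    """Search a chronologically ordered sensor buffer for the pair of frames
    bracketing the reference time t_ref, enforcing the four conditions.
    Returns the (prev, next) frames, or None if indexing must be abandoned
    (the caller then moves on to the core sensor's next timestamp)."""
    while True:
        if not buffer:
            return None
        # Condition 1: the first frame must be no later than t_ref.
        if buffer[0][0] > t_ref:
            return None
        if len(buffer) < 2:
            return None
        # Condition 2: the second frame must be later than t_ref; otherwise
        # the first frame is useless, so drop it and keep scanning.
        if buffer[1][0] <= t_ref:
            buffer.popleft()
            continue
        # Conditions 3 and 4: neither bracketing frame may be farther from
        # t_ref than the dropped-frame threshold.
        if t_ref - buffer[0][0] > max_gap or buffer[1][0] - t_ref > max_gap:
            return None
        return buffer[0], buffer[1]
```

For a buffer [(0.00, a), (0.05, b), (0.15, c)] and reference time 0.12 with a 0.1 s threshold, the first frame is discarded under Condition 2 and the pair (0.05, b), (0.15, c) is locked.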
Step 3: Data interpolation based on adjacent frames

Once the time index of Step 2 is complete and the bracketing position is locked, the data-interpolation module applies a spherical-vector quaternion attitude interpolation, using the two IMU frames immediately before and after that position to interpolate the attitude smoothly.

The quaternion q_m of the transitional rotation between the initial attitude and the final attitude can be represented by two rotation vectors on the unit sphere perpendicular to the transition's rotation axis; the angle between these two vectors is the rotation angle θ. As one vector rotates into the other it sweeps an arc on the unit sphere, and the two vectors together with the arc represent q_m. Distributing points uniformly along the arc yields a uniform series of transitional rotation quaternions which, composed with the initial attitude, give a uniform series of interpolated attitudes.

The concrete steps of the quaternion attitude interpolation are as follows.
Step 3.1: From the given initial attitude q_s and final attitude q_e, obtain the rotation-axis vector and rotation angle of the transitional rotation from q_m = q_s^{-1} · q_e.

Step 3.2: Using the parametric equation of a spatial circular arc, take a uniform series of points on the rotation arc. A great-circle arc on the unit sphere between the two rotation vectors v_s and v_e can be written in the standard form

v(t) = (sin((1 - t)θ) v_s + sin(tθ) v_e) / sin θ,  0 ≤ t ≤ 1,

where θ is the rotation angle; the rotation axis and the normal vector of the plane containing the arc are fixed once the initial and final attitudes are given. t is the arc parameter, and choosing t uniformly yields a uniform series of points on the arc.

Step 3.3: Selecting points uniformly from the start of the arc to its end yields a series of vectors v_t, which uniformize the transitional rotation quaternion q_m; the quaternions whose rotation axes are orthogonal to these vectors are then converted from that format into the ordinary, non-orthogonal quaternion format.

Step 3.4: This yields the final series of interpolated quaternion attitudes.
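The arc construction above is equivalent to the closed-form spherical linear interpolation (SLERP) between the two bracketing attitudes. A minimal sketch, with quaternions stored as (w, x, y, z) arrays; the names and conventions are ours, not from the patent.

```python
import numpy as np

def slerp(qs, qe, t):
    """Spherical linear interpolation between unit quaternions qs and qe,
    t in [0, 1]; t = 0 returns qs, t = 1 returns qe."""
    qs = qs / np.linalg.norm(qs)
    qe = qe / np.linalg.norm(qe)
    dot = np.dot(qs, qe)
    if dot < 0.0:          # take the shorter great-circle arc
        qe, dot = -qe, -dot
    if dot > 0.9995:       # nearly parallel: fall back to normalized lerp
        q = qs + t * (qe - qs)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)  # angle between the two attitudes on the sphere
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * qs + (np.sin(t * theta) / s) * qe
```

Interpolating halfway between the identity and a 90° rotation about z yields a 45° rotation about z, as expected; in the method, t is the normalized offset of the reference time between the two bracketing IMU timestamps.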
Step 4: Building an extended Kalman filter for data fusion

In the intelligent vehicle network, every vehicle is equipped with an inertial measurement unit (IMU), and a multi-sensor module acquires the raw state observations of the on-board sensors as the state observation space. With the IMU attitude measurement θ_{k+1,IMU} at time k+1, the observation model is:
Z_{k+1} = θ_{k+1} + v_{k+1}   (11)
At each instant, the IMU and the odometry both return observations of the same parameter (the attitude angle). For this situation, the Kalman gain is used to weigh the uncertainty of the two sensors' observations. The Kalman gain at time k+1 is

K_{k+1} = P^-_{k+1} H^T (H P^-_{k+1} H^T + R_{k+1})^{-1},

where H is the observation matrix, I below denotes an identity matrix, and R_{k+1} is the covariance matrix of the observation noise; the sensor accuracy is generally given by the manufacturer or evaluated by experimental statistics. Here it is first assumed that

R_{k+1} = σ_θ   (13)

where σ_θ denotes the variance of the attitude observation noise output by the IMU.
The posterior estimate of the state variable is therefore

x^_{k+1} = x^-_{k+1} + K_{k+1} (Z_{k+1} - H x^-_{k+1}),

and the covariance matrix of the posterior state estimate is

P_{k+1} = (I - K_{k+1} H) P^-_{k+1}.
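The measurement update can be sketched as follows, assuming the state is (X, Y, θ) and the IMU yaw angle is observed directly, so H simply picks the attitude component; the angle innovation is wrapped into (-π, π] to avoid the jump at ±180°. All names are illustrative, not from the patent.

```python
import numpy as np

def ekf_update_yaw(x_pred, P_pred, z_theta, sigma_theta):
    """EKF measurement update fusing one IMU yaw observation z_theta
    (noise variance sigma_theta) into the prior (x_pred, P_pred)."""
    H = np.array([[0.0, 0.0, 1.0]])    # observation matrix: z = theta + v
    R = np.array([[sigma_theta]])      # observation noise covariance
    S = H @ P_pred @ H.T + R           # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    innov = np.array([z_theta]) - H @ x_pred   # innovation
    # Wrap the angle innovation into (-pi, pi].
    innov = (innov + np.pi) % (2 * np.pi) - np.pi
    x_post = x_pred + K @ innov                # posterior state estimate
    P_post = (np.eye(3) - K @ H) @ P_pred      # posterior covariance
    return x_post, P_post
```

With a prior covariance of the identity and sigma_theta = 1, the gain on the yaw component is 0.5, so a measured yaw of 0.1 rad against a predicted 0 rad yields a posterior yaw of 0.05 rad.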
Brief Description of the Drawings
Fig. 1 is a flowchart of the multi-sensor asynchronous data fusion processing method based on data interpolation.
Fig. 2 is the kinematic model of the TurtleBot3 mobile robot.
Fig. 3 is a structural diagram of the unmanned-driving simulation platform.
Fig. 4 shows the trajectory of the fusion algorithm without a time-synchronization strategy.
Fig. 5 shows the trajectories of the fusion algorithm under different time-synchronization strategies.
Fig. 6 compares the yaw-angle changes of the control groups.
Detailed Description
The multi-sensor asynchronous data fusion processing method based on data interpolation of the present invention is described in further detail below with reference to the figures.

The algorithmic model of the invention rests on the following assumptions:

Assumption 1: All controlled vehicles are intelligent vehicles with perception, computation, control, and communication capabilities.
Assumption 2: The observations of the vehicle sensors are mutually independent.
Assumption 3: The noise of each observation follows a Gaussian distribution.

In this embodiment a mobile robot simulates the driving behavior of a vehicle indoors, relying solely on on-board sensors to obtain the vehicle pose. Experimental control groups are set up and the associated computations are programmed in Matlab, in order to verify the performance of the mobile-robot positioning system.
Step 1: Building the experimental platform

To simulate normal vehicle driving in an intelligent vehicle network, this embodiment builds an unmanned-driving simulation platform around a TurtleBot3. The TurtleBot3, third generation of the TurtleBot series, is a small, programmable, cost-effective ROS-based mobile robot driven by Dynamixel smart actuators; its kinematic model is shown in Fig. 2. The robot is equipped with an inertial measurement unit (IMU) and wheel odometry (Odom), and the built-in system packages provide state information such as velocity, angular velocity, and acceleration in real time. The IMU runs at 100 Hz and the odometry at 10 Hz. Based on the differential-drive kinematics of the vehicle, this embodiment adopts the following secant (midpoint-heading) model as the kinematic equation of the mobile robot:

x_{k+1} = x_k + v_k Δt cos(θ_k + ω_k Δt / 2)
y_{k+1} = y_k + v_k Δt sin(θ_k + ω_k Δt / 2)
θ_{k+1} = θ_k + ω_k Δt

where k is the current timestamp, x, y are the current vehicle coordinates, θ is the current heading angle, v_k and ω_k are the linear and angular velocity, and Δt is the interval between adjacent instants.
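Because the secant-model equation itself is reproduced only as an image in the source, the sketch below implements the standard midpoint-heading (secant) discretization that the description names; the function name and argument order are illustrative.

```python
import numpy as np

def secant_step(x, y, theta, v, omega, dt):
    """One step of the secant (midpoint-heading) kinematic model: the
    displacement uses the heading at the midpoint of the interval rather
    than the heading at time k, which better approximates the true arc."""
    mid = theta + 0.5 * omega * dt
    return (x + v * dt * np.cos(mid),
            y + v * dt * np.sin(mid),
            theta + omega * dt)
```

Driving straight (omega = 0) reduces to the usual Euler step, while during a turn the midpoint heading keeps the integrated position on the chord of the arc instead of on its tangent.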
Step 2: Vehicle state estimation

The unmanned-driving platform relies on ROS; by writing the corresponding packages, real-time monitoring of the system environment and control of the driving behavior are accomplished. The platform structure is shown in Fig. 3.

As stated above, this embodiment takes the vehicle position and attitude as the state quantities, so the estimated vector is X = [x y θ]^T.

To obtain the noise characteristics of each sensor, the vehicle was left stationary on the test field before the experiment and sampled at equal intervals over a long period; the data were then analyzed to obtain the vehicle state statistics shown in Table 1.
Table 1. Noise parameters
When the odometry provides velocity information with the vehicle stationary, it exhibits a fixed noise of about 5×10⁻⁴ m/s; with the vehicle moving at a constant speed of 0.1 m/s, the standard deviation of the velocity observation error is measured to be about 3×10⁻³ m/s. When estimating the vehicle position, these deviations are fed into the state estimator as the noise standard deviations.
Step 3: Positioning accuracy test

To verify the positioning accuracy of the fusion algorithm based on wheel odometry and the IMU, eight sampling target points are published in the mobile robot's ROS system: a(2.000, 0.000), b(2.000, 1.000), c(0.000, 1.000), d(0.000, 2.000), e(2.000, 2.000), f(2.000, 3.000), g(1.000, 3.000), h(1.000, 1.000). The vehicle starts from the origin of the coordinate system and passes through these eight target points in turn; the trajectory between target points is planned autonomously by the navigation algorithm, and the positioning accuracy at the target points is used to estimate that of the whole run. The trajectories of the positioning methods are shown in Figs. 4 and 5. In Fig. 4, control group ① is the trajectory of the fusion algorithm without a time-synchronization strategy, the odom trajectory is that of the wheel odometry, and the black trajectory is the desired one. In Fig. 5, control group ② is the trajectory of the fusion algorithm with the nearest-timestamp synchronization strategy, and control group ③ is that with the data-interpolation synchronization strategy. The coordinates of each point on the different trajectories are listed in Table 2; the positioning accuracy of the three synchronization strategies, computed as standard deviations, is listed in Table 3.
Table 2. Coordinate values of each point on the different trajectories
Table 3. Accuracy of the sampled coordinate points under the various time-synchronization strategies
Step 4: Yaw-angle change test

To verify the effectiveness of the invention for attitude estimation, the experimental data of the three control groups of Step 3 are recorded and the yaw-angle change curves plotted; the result is shown in Fig. 6, and the yaw-angle values of each control group are listed in Table 3. In the initial stage of the run, heading toward point 1, there is no change in attitude angle, so the positioning results of the three control groups differ little. As the vehicle then turns and its attitude angle changes, control group ① accumulates error continuously: at point 3 its yaw-angle error already exceeds 10°, and as the run continues its yaw angle gradually becomes meaningless. The yaw-angle values of control groups ② and ③, by contrast, track the actual change fairly accurately (the instantaneous jumps in the figure are caused by small changes across the -180°/180° boundary). This demonstrates the effectiveness of the spherical-vector quaternion attitude interpolation for attitude fusion.
Table 3. Yaw-angle values of each point on the different trajectories
Step 5: Experimental results

The invention proposes a multi-sensor data fusion method based on data interpolation to solve the problem of fusing asynchronous vehicle sensor data. Data interpolation is combined with pose estimation so that the data required by the multi-sensor fusion algorithm and the data in the buffer belong to matching instants. The effectiveness of the proposed data-interpolation method is verified on the experimental platform, and the results show that the positioning algorithm implemented within the proposed positioning framework achieves high accuracy.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211263372.4A CN115577320A (en) | 2022-10-15 | 2022-10-15 | Multi-sensor asynchronous data fusion method based on data interpolation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115577320A true CN115577320A (en) | 2023-01-06 |
Family
ID=84584184
Cited By (3)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117034201A (en) * | 2023-10-08 | 2023-11-10 | 东营航空产业技术研究院 | Multi-source real-time data fusion method |
CN117490705A (en) * | 2023-12-27 | 2024-02-02 | 合众新能源汽车股份有限公司 | Vehicle navigation positioning method, system, device and computer readable medium |
CN117490705B (en) * | 2023-12-27 | 2024-03-22 | 合众新能源汽车股份有限公司 | Vehicle navigation positioning method, system, device and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||