WO2024001629A1 - Multi-sensor fusion method and system for intelligent driving vehicles - Google Patents

Multi-sensor fusion method and system for intelligent driving vehicles

Info

Publication number
WO2024001629A1
WO2024001629A1 · PCT/CN2023/096478 · CN2023096478W
Authority
WO
WIPO (PCT)
Prior art keywords
target
time
detected object
information
module
Prior art date
Application number
PCT/CN2023/096478
Other languages
English (en)
French (fr)
Inventor
石钧仁
高俊
朴昌浩
许林
何维晟
苗建国
李珂欣
苏永康
Original Assignee
重庆邮电大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 重庆邮电大学
Publication of WO2024001629A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S 13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/865 Combination of radar systems with lidar systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • the invention belongs to the technical field of intelligent driving vehicles, and specifically relates to a multi-sensor fusion method and system for intelligent driving vehicles.
  • the present invention proposes a multi-sensor fusion method and system for intelligent driving vehicles.
  • the method includes:
  • S1: Build an extended target tracker based on the GM-PHD algorithm and the rectangular target model of the detected object; use the extended target tracker to process the 2-D detection information of the millimeter-wave radar to obtain the millimeter-wave radar track information of the detected object;
  • S2: Construct a bounding box detector and a JPDA tracker configured with IMM-UKF; use the bounding box detector and the JPDA tracker configured with IMM-UKF to process the 3-D detection information of the lidar to obtain the lidar track information of the detected object;
  • S3: Use time-space conversion to process the millimeter-wave radar track information and the lidar track information to obtain the central fusion node; use the IMF algorithm to process the central fusion node to obtain global track information; track the detected object based on the global track information.
  • the process of building the extended target tracker includes:
  • the rectangular extended target state of the detected object is obtained from its rectangular target model; the GM-PHD algorithm is then used to calculate the multi-target predicted PHD at time k and the multi-target posterior PHD at time k, obtaining the extended target tracker.
  • the rectangular extended target state is expressed as ξ = (γ, x, X), where ξ represents the state vector of the extended target of the detected object, γ represents the measurement rate state of the extended target, x represents the motion state of the extended target, and X represents the extension state of the extended target.
  • the process of constructing the bounding box detector includes: using a RANSAC plane fitting algorithm to preprocess the lidar data to obtain the target point cloud; using the Euclidean algorithm to cluster the target point cloud; and constructing the state vector of the bounding box detector from the clustered target point cloud, thereby obtaining the bounding box detector.
  • the state vector of the bounding box detector is expressed as x′ = [x, y, v, θ, ω, z, ż, L, W, H]^T, where x′ represents the state vector, x the abscissa of the detected target, y its ordinate, v its speed, θ its direction angle, ω its angular velocity, z its vertical coordinate, ż its vertical speed, L its length, W its width, and H its height.
  • the process of building a JPDA tracker configured with IMM-UKF includes:
  • the JPDA tracker configured with IMM-UKF consists of an input interaction module, a UKF filter module, a probability update module, a JPDA data association module and an output fusion module;
  • the input interaction module calculates the second state estimate and the second covariance matrix based on the first state estimate and the first covariance matrix of the UKF filter in the UKF filter module at time k and outputs them;
  • the UKF filter in the UKF filter module outputs the third state estimate and the third covariance matrix at time k+1 based on the output of the input interaction module and the effective observation vector at time k;
  • the probability update module calculates the conditional probability of the motion model at time k+1 based on the residual information of the UKF filter module;
  • the JPDA data association module calculates the second measurement information of the target under the motion model at time k+1 based on the third state estimate, the third covariance matrix, and the first measurement information of the target under the motion model;
  • the output fusion module calculates the fused state estimate and covariance matrix based on the conditional probability of the motion model at time k+1, the second measurement information, the third state estimate, and the third covariance matrix.
  • the formulas with which the IMF algorithm processes the central fusion node include:
  • update the covariance: P(k|k)⁻¹ = P(k|k−1)⁻¹ + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹ − P_i(k|k−1)⁻¹]
  • update the state estimate: P(k|k)⁻¹x̂(k|k) = P(k|k−1)⁻¹x̂(k|k−1) + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹x̂_i(k|k) − P_i(k|k−1)⁻¹x̂_i(k|k−1)]
  • where P(k|k) represents the global covariance of the sensors from time 0 to k, P(k|k−1) the global covariance from time 0 to k−1, P_i(k|k) the local covariance of the i-th sensor from time 0 to k, P_i(k|k−1) the local covariance of the i-th sensor from time 0 to k−1, N_k the number of sensors, and x̂(k|k), x̂(k|k−1), x̂_i(k|k) and x̂_i(k|k−1) the corresponding global and local state estimates.
  • a multi-sensor fusion system for intelligent driving vehicles, used to perform the multi-sensor fusion method for intelligent driving vehicles, including: an extended target tracking module, a bounding box detector module, a point target tracking module and a track fusion module;
  • the extended target tracking module is used to process the 2-D detection information of the millimeter-wave radar according to the GM-PHD algorithm and the rectangular target model of the detected object, obtaining the millimeter-wave radar track information of the detected object;
  • the bounding box detector module is used to process the 3D detection information of the lidar based on the RANSAC plane fitting algorithm and the Euclidean algorithm to obtain the 3D information of the detected object;
  • the point target tracking module is used to process the 3D information of the detected object using a JPDA tracker equipped with IMM-UKF to obtain the lidar track information of the detected object;
  • the track fusion module is used to fuse the millimeter wave radar track information of the detected object and the lidar track information of the detected object to obtain global track information.
  • the beneficial effects of the present invention are: for the 2-D detection information of the millimeter-wave radar, the invention uses the GM-PHD algorithm to construct an extended target tracker to track the trajectory of the detected object; for the 3-D point cloud information of the lidar, the lidar data are preprocessed with the RANSAC algorithm to remove redundant point clouds, a bounding box detector is built with the Euclidean distance clustering algorithm, and a JPDA tracker configured with IMM-UKF is further built to track the trajectory of the detected object, which solves the combinatorial explosion problem introduced by data association methods; the state space for track fusion is built on the state space of the lidar, and the conversion from the millimeter-wave radar state space to the track state space is completed;
  • the tracks of the lidar and the millimeter-wave radar are fused into a global track, which solves the timing problem caused by out-of-order local track information of different sensors; compared with the prior art, the invention improves overall perception accuracy, reduces bandwidth requirements and computational cost, and when one sensor errs or fails the other sensor can compensate, so it is highly practical.
  • Figure 1 is a flow chart of the multi-sensor fusion method for intelligent driving vehicles in the present invention
  • Figure 2 is a schematic diagram of the rectangular target model in the present invention.
  • Figure 3 is a schematic diagram of the framework of the JPDA tracker configured with IMM-UKF in the present invention
  • Figure 4 is a schematic diagram of the track fusion update process in the present invention.
  • the present invention proposes a multi-sensor fusion method for intelligent driving vehicles, as shown in Figure 1.
  • the method includes the following:
  • S1: Construct an extended target tracker based on the Gaussian mixture probability hypothesis density (GM-PHD) algorithm and the rectangular target model of the detected object; use the extended target tracker to process the 2-D detection information of the millimeter-wave radar to obtain the millimeter-wave radar track information of the detected object;
  • GM-PHD Gaussian mixture probability hypothesis density
  • Millimeter-wave radar can easily detect a single target as multiple targets. To solve this problem, the present invention builds an extended target tracker based on the GM-PHD algorithm and the rectangular target model of the detected object to track the detected object, i.e., an intelligent driving vehicle. The process of building the extended target tracker includes:
  • the rectangular target model of the detected object is obtained, as shown in Figure 2, and the rectangular extended target state is derived from it, expressed as ξ = (γ, x, X);
  • γ represents the measurement rate state of the extended target of the detected object, a scalar that obeys a gamma distribution;
  • x represents the motion state of the extended target of the detected object, modeled as x = [x, y, v, θ, ω, L, W]^T, where [x, y]^T represents the two-dimensional coordinate position of the detected target, v, θ and ω respectively represent its speed, direction angle and angular velocity, and L and W respectively represent its length and width;
  • X represents the extension state of the extended target; when the extended-target measurements are scattered disorderly over a region of space, the extension state can usually be approximated by a rectangle.
  • the GM-PHD filter is constructed from the rectangular extended target state of the detected object; the GM-PHD filter approximates the multi-target probability hypothesis density by weighted Gaussian components. Suppose the multi-target posterior PHD at time k−1 can be expressed in Gaussian mixture form as:
  • D_{k−1}(x) = Σ_{i=1}^{J_{k−1}} w_{k−1}^(i) N(x; m_{k−1}^(i), P_{k−1}^(i))
  • where J_{k−1} is the number of Gaussian components at time k−1, w_{k−1}^(i) is the weight of the i-th Gaussian component, and N(·; m, P) represents the probability density function of a Gaussian distribution with mean m and covariance P.
  • The multi-target predicted PHD at time k and the multi-target posterior PHD at time k can also be expressed in Gaussian mixture form:
  • D_{k|k−1}(x) = D_{S,k|k−1}(x) + γ_k(x) = Σ_{i=1}^{J_{k|k−1}} w_{k|k−1}^(i) N(x; m_{k|k−1}^(i), P_{k|k−1}^(i))
  • D_{k|k}(x) = (1 − p_{D,k}) D_{k|k−1}(x) + Σ_{z_k∈Z_k} Σ_{j=1}^{J_{k|k−1}} w_k^(j)(z_k) N(x; m_{k|k}^(j)(z_k), P_{k|k}^(j)(z_k))
  • where x represents the motion state of the rectangular extended target of the detected object; D_{k|k−1}(x) is the multi-target predicted PHD at time k; D_{S,k|k−1}(x) is the PHD of the surviving Gaussian components at time k; γ_k(x) is the PHD of targets newborn at time k, i.e., of the new observation points obtained by the sensor; J_{k|k−1} is the predicted number of Gaussian components at time k; p_{D,k} is the target detection probability: when a signal is indeed present at the radar input, noise may produce either of two decisions, "signal present" or "no signal", and p_{D,k} is the probability of making the correct "signal present" decision; w_k^(j) is the weight of the j-th updated component; z_k is the target measurement at time k, i.e., the physical values such as target coordinates obtained by the radar; and N(x; m_{k|k}^(j), P_{k|k}^(j)) is the Gaussian component with mean m_{k|k}^(j) and covariance P_{k|k}^(j).
  • The construction of the GM-PHD filter is completed by predicting D_{k|k−1}(x) and updating D_{k|k}(x).
  • the extended target tracker is used to process the 2-dimensional detection information of millimeter wave radar, and the millimeter wave radar track information of the detected object can be obtained.
  • JPDA Joint Probabilistic Data Association
  • S2: Construct a bounding box detector and a JPDA tracker configured with IMM-UKF; use the bounding box detector and the JPDA tracker configured with IMM-UKF to process the 3-D detection information of the lidar to obtain the lidar track information of the detected object.
  • Lidar obtains far more measurements per target than millimeter-wave radar, it also returns ground measurements, and its output data volume is not of the same order of magnitude as that of millimeter-wave radar. If the extended target tracker were applied directly, the output signal would lag severely because of excessive computational complexity.
  • the present invention builds a bounding box detector to process the 3D detection information of lidar.
  • the process of building a bounding box detector includes:
  • a plane fitting algorithm based on random sample consensus is used to preprocess the lidar data to remove redundant point cloud information such as the ground, thereby obtaining the target point cloud and reducing computational overhead. Specifically: first, randomly pick three points in the initial point cloud and compute the sampling plane they form; second, compute the distance from every point to that sampling plane, set a distance threshold, and divide the points into inliers (normal data) and outliers (abnormal data); then count the inliers and outliers, repeat the above process up to the maximum number of iterations, and select the plane with the most inliers; finally, refit the plane from all inliers of that best plane to obtain the final fitted plane equation, and remove redundant point clouds such as ground points according to it, obtaining the target point cloud.
  • RANSAC Random Sample Consensus
  • after the redundant point clouds are removed, the point clouds belonging to the main targets appear suspended and separated in space;
  • a cuboid bounding box detector is used for the detected objects;
  • the Euclidean algorithm is used to cluster the target point cloud in this state, and the state vector of the bounding box detector is obtained from the clustering results.
  • the state vector of the bounding box detector is expressed as:
  • x′ = [x, y, v, θ, ω, z, ż, L, W, H]^T
  • where x′ represents the state vector; compared with the rectangular-model state vector of the millimeter-wave radar, the cuboid bounding box state vector has three more variables, z, ż and H, where z represents the vertical coordinate of the detected target, ż its vertical speed, and H its height.
  • a joint probabilistic data association (JPDA) tracker configured with an interacting multiple model unscented Kalman filter (IMM-UKF), i.e., a point target tracker, is constructed; the process includes:
  • the JPDA tracker configured with IMM-UKF is composed of an input interaction module, a UKF filter module, a probability update module, a JPDA data association module and an output fusion module, and is used to track the trajectories of objects detected by the lidar;
  • the UKF filters obtain the first state estimates and first covariance matrices at time k from the state vector x′ of the bounding box detector; the input interaction module then computes the mixed second state estimates and second covariance matrices for each motion model j, where a motion model is a velocity model built from the motion state of the target and j = 1, 2, …, r, with r the number of UKF filters;
  • the UKF filters in the UKF filter module output the third state estimates and the third covariance matrices at time k+1 based on the output of the input interaction module and the effective observation vector Z_k at time k;
  • the probability update module calculates the conditional probability of motion model j at time k+1 based on the residual information of the UKF filters;
  • the JPDA data association module calculates the second measurement information of the target under motion model j at time k+1 based on the first measurement information of the target under the motion model and the association probabilities (the third state estimate and third covariance matrix); the first measurement information is measurement information such as the target vehicle speed;
  • the output fusion module calculates the fused state estimate and covariance matrix from the conditional probability of motion model j at time k+1, the second measurement information, the third state estimate and the third covariance matrix, so that the outputs of the UKF filters with small estimation error dominate the fused tracking output, yielding the fused state estimate and covariance matrix P(k+1).
  • a 3-D local tracker suitable for lidar can be obtained, that is, a JPDA tracker equipped with IMM-UKF.
  • the bounding box detector and the JPDA tracker equipped with IMM-UKF are used to process the 3D detection information of the lidar, and the lidar track information of the detected object can be obtained.
  • This method can reduce computational overhead without increasing hardware costs, thereby effectively improving the output lag of lidar track information.
  • S3: Use time-space conversion to process the millimeter-wave radar track information and the lidar track information to obtain the central fusion node; use the IMF algorithm to process the central fusion node to obtain global track information; track the detected object based on the global track information.
  • the fusion center node uses the Information Matrix Fusion (IMF) algorithm to update the 2-D local track and 3-D local track.
  • IMF Information Matrix Fusion
  • the track update process is shown in Figure 4.
  • at time k−2, the global track information contains only the 2-D local track information from time 0 to k_i−1;
  • at time k−1, the global track information additionally fuses the 3-D local track information from time 0 to k_j, so that it contains both the 2-D and the 3-D local track information from time 0 to k_j;
  • at time k, if the global track information were directly fused with the 2-D local track information at time k_i, information would be fused repeatedly, because besides the genuinely needed 2-D local track information from k_i−1 to k_i, the fused information also contains the duplicate 2-D local track information from time 0 to k_i−1.
  • the IMF algorithm can avoid repeated fusion of track information during the update process. It updates the fused track data by updating the covariance and state estimates, preventing old data from entering new targets and thus avoiding repeated fusion.
  • the algorithm flow is as follows:
  • update the covariance: P(k|k)⁻¹ = P(k|k−1)⁻¹ + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹ − P_i(k|k−1)⁻¹]
  • update the state estimate: P(k|k)⁻¹x̂(k|k) = P(k|k−1)⁻¹x̂(k|k−1) + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹x̂_i(k|k) − P_i(k|k−1)⁻¹x̂_i(k|k−1)]
  • where P(k|k) represents the global covariance of the sensors from time 0 to k, corresponding to the global track information; P(k|k−1) the global covariance from time 0 to k−1; P_i(k|k) the local covariance of the i-th sensor from time 0 to k, corresponding to the local track information; P_i(k|k−1) the local covariance of the i-th sensor from time 0 to k−1; N_k the number of sensors; and x̂(k|k), x̂(k|k−1), x̂_i(k|k) and x̂_i(k|k−1) the corresponding global and local state estimates. Here a sensor means the millimeter-wave radar or the lidar.
  • the IMF algorithm is used to fuse two local tracks to obtain global track information. Users can track intelligent driving vehicles based on the global track information.
  • the invention also provides a multi-sensor fusion system for intelligent driving vehicles.
  • the system is used to execute the multi-sensor fusion method for intelligent driving vehicles and includes: an extended target tracking module, a bounding box detector module, a point target tracking module and a track fusion module;
  • the extended target tracking module is used to process the 2-dimensional detection information of the millimeter wave radar according to the GM-PHD algorithm and the rectangular target model of the detected object, and obtain the millimeter wave radar track information of the detected object;
  • the bounding box detector module is used to process the 3D detection information of the lidar based on the RANSAC plane fitting algorithm and the Euclidean algorithm to obtain the 3D information of the detected object;
  • the point target tracking module is used to process the 3D information of the detected object using a JPDA tracker equipped with IMM-UKF to obtain the lidar track information of the detected object;
  • the track fusion module is used to fuse the millimeter wave radar track information of the detected object and the lidar track information of the detected object to obtain global track information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A multi-sensor fusion method and system for intelligent driving vehicles, belonging to the technical field of intelligent driving vehicles. The method includes: building an extended target tracker from the GM-PHD algorithm and a rectangular target model of the detected object; processing the detection information of the millimeter-wave radar with the extended target tracker to obtain the millimeter-wave radar track information of the detected object; processing the detection information of the lidar with the constructed bounding box detector and a JPDA tracker configured with IMM-UKF to obtain the lidar track information of the detected object; processing the millimeter-wave radar track information and the lidar track information via time-space conversion to obtain the central fusion node; and processing the central fusion node with the IMF algorithm to obtain global track information.

Description

Multi-sensor fusion method and system for intelligent driving vehicles
This application claims priority to Chinese patent application No. 202210768001.5, entitled "Multi-sensor fusion method and system for intelligent driving vehicles", filed with the China National Intellectual Property Administration on July 1, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention belongs to the technical field of intelligent driving vehicles, and specifically relates to a multi-sensor fusion method and system for intelligent driving vehicles.
Background
In recent years, with the continuous evolution of vehicle intelligence, intelligent driving vehicle technology has attracted increasing attention. Perception and cognition are the key to intelligent driving vehicles: they provide the basis for judgment in the decision-making process of driving, and their performance directly determines the control quality of the whole vehicle.
Common sensors are based on millimeter-wave radar or on lidar. Millimeter-wave radar suffers from large heading-angle deviation and large lateral position deviation, and cannot obtain target volume information. Lidar-based sensors are severely disturbed in rain and snow; moreover, lidar cannot directly measure target speed and is therefore insufficiently sensitive to changes in target velocity.
Because an intelligent driving vehicle equipped with a single sensor can no longer cope with complex driving environments, multi-sensor configurations have become standard in order to improve the perception and cognition of the whole vehicle. However, the varying number of detected targets and the differences between the measurements of different sensors bring new challenges to multi-sensor fusion. On the one hand, as sensor technology improves, an extended target can occupy multiple resolution cells of a sensor, which introduces a combinatorial explosion problem into data association methods; on the other hand, the data measured by the millimeter-wave radar and the lidar may suffer processing and communication delays during transmission, which causes timing problems due to out-of-order local track information.
Summary of the Invention
To address the deficiencies of the prior art, the present invention proposes a multi-sensor fusion method and system for intelligent driving vehicles. The method includes:
S1: Build an extended target tracker from the GM-PHD algorithm and a rectangular target model of the detected object; process the 2-D detection information of the millimeter-wave radar with the extended target tracker to obtain the millimeter-wave radar track information of the detected object;
S2: Build a bounding box detector and a JPDA tracker configured with IMM-UKF; process the 3-D detection information of the lidar with the bounding box detector and the JPDA tracker configured with IMM-UKF to obtain the lidar track information of the detected object;
S3: Process the millimeter-wave radar track information and the lidar track information via time-space conversion to obtain the central fusion node; process the central fusion node with the IMF algorithm to obtain global track information; track the detected object based on the global track information.
Preferably, the process of building the extended target tracker includes:
obtaining the rectangular extended target state of the detected object from its rectangular target model;
computing, from the rectangular extended target state, the multi-target predicted PHD at time k and the multi-target posterior PHD at time k with the GM-PHD algorithm, to obtain the extended target tracker.
Further, the rectangular extended target state is expressed as:
ξ = (γ, x, X)
where ξ denotes the state vector of the extended target of the detected object, γ its measurement rate state, x its motion state, and X its extension state.
Preferably, the process of building the bounding box detector includes: preprocessing the lidar data with a RANSAC-based plane fitting algorithm to obtain the target point cloud; clustering the target point cloud with the Euclidean algorithm; and constructing the state vector of the bounding box detector from the clustered target point cloud, thereby obtaining the bounding box detector.
Further, the state vector of the bounding box detector is:
x′ = [x, y, v, θ, ω, z, ż, L, W, H]^T
where x′ denotes the state vector, x the abscissa of the detected target, y its ordinate, v its speed, θ its direction angle, ω its angular velocity, z its vertical coordinate, ż its vertical speed, L its length, W its width, and H its height.
Preferably, the process of building the JPDA tracker configured with IMM-UKF includes:
the JPDA tracker configured with IMM-UKF consists of an input interaction module, a UKF filtering module, a probability update module, a JPDA data association module and an output fusion module;
the input interaction module computes and outputs the second state estimate and second covariance matrix from the first state estimate and first covariance matrix of the UKF filters in the UKF filtering module at time k;
the UKF filters in the UKF filtering module output the third state estimate and third covariance matrix at time k+1 from the output of the input interaction module and the effective observation vector at time k;
the probability update module computes the conditional probability of the motion model at time k+1 from the residual information of the UKF filtering module;
the JPDA data association module computes the second measurement information of the target under the motion model at time k+1 from the third state estimate, the third covariance matrix and the first measurement information of the target under the motion model;
the output fusion module computes the fused state estimate and covariance matrix from the conditional probability of the motion model at time k+1, the second measurement information, the third state estimate and the third covariance matrix.
Preferably, the formulas with which the IMF algorithm processes the central fusion node include:
Update the covariance:
P(k|k)⁻¹ = P(k|k−1)⁻¹ + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹ − P_i(k|k−1)⁻¹]
Update the state estimate:
P(k|k)⁻¹x̂(k|k) = P(k|k−1)⁻¹x̂(k|k−1) + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹x̂_i(k|k) − P_i(k|k−1)⁻¹x̂_i(k|k−1)]
where P(k|k) denotes the global covariance of the sensors from time 0 to k, P(k|k−1) the global covariance from time 0 to k−1, P_i(k|k) the local covariance of the i-th sensor from time 0 to k, P_i(k|k−1) the local covariance of the i-th sensor from time 0 to k−1, N_k the number of sensors, x̂(k|k) the global state estimate from time 0 to k, x̂(k|k−1) the global state estimate from time 0 to k−1, x̂_i(k|k) the local state estimate of the i-th sensor from time 0 to k, and x̂_i(k|k−1) the local state estimate of the i-th sensor from time 0 to k−1.
A multi-sensor fusion system for intelligent driving vehicles, used to carry out the multi-sensor fusion method for intelligent driving vehicles, comprising: an extended target tracking module, a bounding box detector module, a point target tracking module and a track fusion module;
the extended target tracking module processes the 2-D detection information of the millimeter-wave radar according to the GM-PHD algorithm and the rectangular target model of the detected object, obtaining the millimeter-wave radar track information of the detected object;
the bounding box detector module processes the 3-D detection information of the lidar with the RANSAC plane fitting algorithm and the Euclidean algorithm, obtaining the 3-D information of the detected object;
the point target tracking module processes the 3-D information of the detected object with the JPDA tracker configured with IMM-UKF, obtaining the lidar track information of the detected object;
the track fusion module fuses the millimeter-wave radar track information and the lidar track information of the detected object, obtaining the global track information.
The beneficial effects of the present invention are as follows. For the 2-D detection information of the millimeter-wave radar, the invention builds an extended target tracker with the GM-PHD algorithm to track the trajectory of the detected object. For the 3-D point cloud information of the lidar, the lidar data are preprocessed with the RANSAC algorithm to remove redundant point clouds, a bounding box detector is built with the Euclidean distance clustering algorithm, and a JPDA tracker configured with IMM-UKF is further built to track the trajectory of the detected object, which solves the combinatorial explosion problem introduced by data association methods. The state space for track fusion is built on the state space of the lidar, and the conversion from the millimeter-wave radar state space to the track state space is completed; the tracks of the lidar and the millimeter-wave radar are then fused with the IMF algorithm into a global track, which solves the timing problem caused by out-of-order local track information of different sensors. Compared with the prior art, the invention improves overall perception accuracy, reduces bandwidth requirements and computational cost, and when one sensor errs or fails the other sensor can compensate, so it is highly practical.
Brief Description of the Drawings
Figure 1 is a flow chart of the multi-sensor fusion method for intelligent driving vehicles of the present invention;
Figure 2 is a schematic diagram of the rectangular target model of the present invention;
Figure 3 is a schematic diagram of the framework of the JPDA tracker configured with IMM-UKF of the present invention;
Figure 4 is a schematic diagram of the track fusion update process of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
The present invention proposes a multi-sensor fusion method for intelligent driving vehicles, as shown in Figure 1. The method includes the following.
S1: Build an extended target tracker from the Gaussian mixture probability hypothesis density (GM-PHD) algorithm and a rectangular target model of the detected object; process the 2-D detection information of the millimeter-wave radar with the extended target tracker to obtain the millimeter-wave radar track information of the detected object.
Millimeter-wave radar very easily detects a single target as multiple targets. To solve this problem, the invention builds the extended target tracker from the GM-PHD algorithm and the rectangular target model of the detected object to track the detected object (i.e., an intelligent driving vehicle). The process of building the extended target tracker includes:
Obtain the rectangular target model of the detected object, as shown in Figure 2, and derive from it the rectangular extended target state of the detected object, expressed as:
ξ = (γ, x, X)
where ξ denotes the state vector of the extended target of the detected object; γ denotes its measurement rate state, a scalar obeying a gamma distribution; x denotes its motion state, modeled as x = [x, y, v, θ, ω, L, W]^T, where [x, y]^T is the 2-D coordinate position of the detected target, v, θ and ω are its speed, direction angle and angular velocity, and L and W are its length and width; X denotes the extension state of the extended target, i.e., the state after a single measurement is extended; when the extended-target measurements are scattered disorderly over a region of space, the extension state can usually be approximated by a rectangle.
Build the GM-PHD filter from the rectangular extended target state of the detected object. The GM-PHD filter approximates the multi-target probability hypothesis density by weighted Gaussian components. Suppose the multi-target posterior PHD at time k−1 can be expressed in Gaussian mixture form as:
D_{k−1}(x) = Σ_{i=1}^{J_{k−1}} w_{k−1}^(i) N(x; m_{k−1}^(i), P_{k−1}^(i))
where J_{k−1} is the number of Gaussian components at time k−1, w_{k−1}^(i) is the weight of the i-th Gaussian component, and N(·; m, P) denotes the probability density function of a Gaussian distribution, so that N(x; m_{k−1}^(i), P_{k−1}^(i)) is the i-th Gaussian density at time k−1 with mean m_{k−1}^(i) and covariance P_{k−1}^(i).
The multi-target predicted PHD at time k and the multi-target posterior PHD at time k can also be expressed in Gaussian mixture form:
D_{k|k−1}(x) = D_{S,k|k−1}(x) + γ_k(x) = Σ_{i=1}^{J_{k|k−1}} w_{k|k−1}^(i) N(x; m_{k|k−1}^(i), P_{k|k−1}^(i))
D_{k|k}(x) = (1 − p_{D,k}) D_{k|k−1}(x) + Σ_{z_k∈Z_k} Σ_{j=1}^{J_{k|k−1}} w_k^(j)(z_k) N(x; m_{k|k}^(j)(z_k), P_{k|k}^(j)(z_k))
where x denotes the motion state of the rectangular extended target of the detected object; D_{k|k−1}(x) is the multi-target predicted PHD at time k; D_{S,k|k−1}(x) is the PHD of the surviving Gaussian components at time k; γ_k(x) is the PHD of targets newborn at time k, i.e., of the new observation points obtained by the sensor; w_{k|k−1}^(i) is the weight of the i-th Gaussian component at time k, with N(x; m_{k|k−1}^(i), P_{k|k−1}^(i)) the Gaussian component with mean m_{k|k−1}^(i) and covariance P_{k|k−1}^(i); J_{k|k−1} is the predicted number of Gaussian components at time k; D_{k|k}(x) is the posterior multi-target PHD at time k; p_{D,k} is the target detection probability: when a signal is indeed present at the radar input, noise may produce either of two decisions, "signal present" or "no signal", and p_{D,k} is the probability of making the correct "signal present" decision; w_k^(j) is the weight of the j-th updated component; z_k is the target measurement at time k, i.e., the physical values such as target coordinates acquired by the radar; and N(x; m_{k|k}^(j), P_{k|k}^(j)) is the Gaussian component with mean m_{k|k}^(j) and covariance P_{k|k}^(j).
By predicting D_{k|k−1}(x) and updating D_{k|k}(x) with the GM-PHD algorithm, the construction of the GM-PHD filter is completed, yielding a 2-D local tracker suitable for millimeter-wave radar, i.e., the extended target tracker.
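For concreteness, the following is a minimal sketch of the standard GM-PHD predict/update recursion that these formulas describe, written for the linear-Gaussian case; pruning and merging of components are omitted, and F, Q, H, R, the birth components and the probabilities p_S and p_D are assumptions supplied by the caller, not values given in the patent:

```python
import numpy as np

def gmphd_predict(comps, F, Q, births, p_S=0.99):
    """comps: list of (w, m, P) Gaussian components of D_{k-1}."""
    pred = [(p_S * w, F @ m, F @ P @ F.T + Q) for w, m, P in comps]
    return pred + list(births)            # D_{k|k-1} = D_S + gamma_k

def gmphd_update(comps, Z, H, R, p_D=0.9, clutter=1e-6):
    """One GM-PHD update; returns the components of D_{k|k}."""
    updated = [((1.0 - p_D) * w, m, P) for w, m, P in comps]  # missed detections
    for z in Z:
        new = []
        for w, m, P in comps:
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            r = z - H @ m                          # residual
            lik = np.exp(-0.5 * r @ np.linalg.solve(S, r)) / \
                  np.sqrt(np.linalg.det(2.0 * np.pi * S))
            new.append((p_D * w * lik, m + K @ r,
                        (np.eye(len(m)) - K @ H) @ P))
        norm = clutter + sum(w for w, _, _ in new)
        updated += [(w / norm, m, P) for w, m, P in new]
    return updated
```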
Processing the 2-D detection information of the millimeter-wave radar with the extended target tracker yields the millimeter-wave radar track information of the detected object.
Traditional trackers such as joint probabilistic data association (JPDA) generally assume that each scan of the sensor outputs at most one measurement per target, so a high-resolution sensor equipped with a traditional tracker must cluster measurements before data association, which not only increases computational cost but also severely degrades measurement accuracy. Building the extended target tracker with the GM-PHD algorithm and the rectangular target model handles clustering and data association simultaneously, which both reduces computational overhead and improves measurement accuracy.
S2: Build a bounding box detector and a JPDA tracker configured with IMM-UKF; process the 3-D detection information of the lidar with the bounding box detector and the JPDA tracker configured with IMM-UKF to obtain the lidar track information of the detected object.
Lidar obtains far more measurements per target than millimeter-wave radar, it also returns ground measurements, and its output data volume is not of the same order of magnitude as that of millimeter-wave radar. Directly applying the extended target tracker would make the output signal lag severely because of excessive computational complexity. In view of this, the invention builds a bounding box detector to process the 3-D detection information of the lidar. The process of building the bounding box detector includes:
Preprocess the lidar data with a plane fitting algorithm based on random sample consensus (RANSAC) to remove redundant point cloud information such as the ground, thereby obtaining the target point cloud and reducing computational overhead. Specifically: first, randomly pick three points in the initial point cloud and compute the sampling plane they form; second, compute the distance from every point to that sampling plane, set a distance threshold, and divide the points into inliers (normal data) and outliers (abnormal data); then count the inliers and outliers, repeat the above process up to the maximum number of iterations, and select the plane with the most inliers; finally, refit the plane from all inliers of that best plane to obtain the final fitted plane equation, and remove redundant point clouds such as ground points according to it, obtaining the target point cloud.
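A compact sketch of this RANSAC ground-plane fit follows, mirroring the steps just described; the distance threshold and iteration count are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def ransac_ground_plane(points, dist_thresh=0.2, max_iters=100, rng=None):
    """points: (N, 3) lidar cloud. Returns (plane (a, b, c, d), inlier mask)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_mask, best_count = None, -1
    for _ in range(max_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)            # normal of the sampled plane
        if np.linalg.norm(n) < 1e-9:              # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs(points @ n - n @ p0)        # point-to-plane distances
        mask = dist < dist_thresh
        if mask.sum() > best_count:               # keep plane with most inliers
            best_count, best_mask = mask.sum(), mask
    # Refit the plane from all inliers of the best sample (least squares).
    inliers = points[best_mask]
    centroid = inliers.mean(axis=0)
    n = np.linalg.svd(inliers - centroid)[2][-1]  # smallest singular direction
    return (*n, -n @ centroid), best_mask

# Usage: target_cloud = points[~mask] removes the fitted ground points.
```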
After the redundant point clouds are removed, the point clouds belonging to the main targets appear suspended and separated in space, and a cuboid bounding box detector is used for the detected objects. The target point cloud in this state is first clustered with the Euclidean algorithm, and the state vector of the bounding box detector is obtained from the clustering result, expressed as:
x′ = [x, y, v, θ, ω, z, ż, L, W, H]^T
where x′ denotes the state vector; compared with the rectangular-model state vector of the millimeter-wave radar, the cuboid bounding box state vector has three more variables, z, ż and H, where z denotes the vertical coordinate of the detected target, ż its vertical speed, and H its height.
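A simple sketch of the Euclidean clustering (region growing over a k-d tree, the same idea as PCL's Euclidean cluster extraction) and of reading a cuboid extent off one cluster; the radius, minimum cluster size and the axis-aligned box are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=0.5, min_size=10):
    """points: (N, 3) ground-free cloud. Returns a list of index arrays."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:                               # grow the cluster outward
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    queue.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters

def bounding_box_extent(points, cluster):
    """Center and (L, W, H) of one cluster, axis-aligned for simplicity."""
    pts = points[cluster]
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return (lo + hi) / 2.0, tuple(hi - lo)
```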
As shown in Figure 3, the process of building the joint probabilistic data association (JPDA) tracker configured with an interacting multiple model unscented Kalman filter (IMM-UKF), i.e., the point target tracker, includes: the JPDA tracker configured with IMM-UKF consists of an input interaction module, a UKF filtering module, a probability update module, a JPDA data association module and an output fusion module, and is used to track the trajectories of the objects detected by the lidar;
the UKF filters obtain the first state estimates at time k from the state vector x′ of the bounding box detector;
the input interaction module computes and outputs the second state estimates and second covariance matrices after interaction of the multiple models, from the first state estimates and first covariance matrices of the UKF filters in the UKF filtering module at time k, where j denotes the motion model, a velocity model built from the motion state of the target, and i = 1, 2, …, r, with r the number of UKF filters;
the UKF filters in the UKF filtering module output the third state estimates and third covariance matrices at time k+1 from the output of the input interaction module and the effective observation vector Z_k at time k;
the probability update module computes the conditional probability of motion model j at time k+1 from the residual information of the UKF filtering module, i.e., the residuals of the UKF filters;
the JPDA data association module computes the second measurement information of the target under motion model j at time k+1 from the first measurement information of the target under the motion model and the association probabilities (the third state estimate and third covariance matrix); the first measurement information is measurement information such as the target vehicle speed;
the output fusion module computes the fused state estimate and covariance matrix from the conditional probability of motion model j at time k+1, the second measurement information, the third state estimate and the third covariance matrix, so that the outputs of the UKF filters with small estimation error dominate the fused tracking output, yielding the fused state estimate x̂(k+1) and covariance matrix P(k+1).
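A sketch of the IMM input-interaction, probability-update and output-fusion steps just described, for r motion models; the UKF filtering and JPDA association that sit between these steps are omitted, and the model transition matrix Pi is an assumption of the sketch:

```python
import numpy as np

def imm_mix(mu, Pi, states, covs):
    """Input interaction: mix model-conditioned estimates (second estimates)."""
    c = Pi.T @ mu                                  # predicted model probabilities
    w = Pi * mu[:, None] / c[None, :]              # mixing weights w[i, j]
    mixed_x, mixed_P = [], []
    for j in range(len(mu)):
        xj = sum(w[i, j] * states[i] for i in range(len(mu)))
        Pj = sum(w[i, j] * (covs[i] + np.outer(states[i] - xj, states[i] - xj))
                 for i in range(len(mu)))
        mixed_x.append(xj); mixed_P.append(Pj)
    return mixed_x, mixed_P, c

def imm_update_probs(c, likelihoods):
    """Probability update from each filter's residual likelihood."""
    mu = c * likelihoods
    return mu / mu.sum()

def imm_fuse(mu, states, covs):
    """Output fusion: probability-weighted combination of the third estimates."""
    x = sum(m * s for m, s in zip(mu, states))
    P = sum(m * (C + np.outer(s - x, s - x)) for m, s, C in zip(mu, states, covs))
    return x, P
```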
Through the above steps, a 3-D local tracker suitable for lidar, i.e., the JPDA tracker configured with IMM-UKF, is obtained.
Processing the 3-D detection information of the lidar with the bounding box detector and the JPDA tracker configured with IMM-UKF yields the lidar track information of the detected object. This method reduces computational overhead without increasing hardware cost, and thus effectively alleviates the output lag of the lidar track information.
S3: Process the millimeter-wave radar track information and the lidar track information via time-space conversion to obtain the central fusion node; process the central fusion node with the IMF algorithm to obtain global track information; track the detected object based on the global track information.
After the measurements of the millimeter-wave radar and the lidar are processed by the 2-D and 3-D local trackers respectively, two local tracks are formed, namely the millimeter-wave radar track information and the lidar track information of the detected object. Time-space conversion unifies the local track information into the same coordinate system and time node, and the fusion center node is then built to achieve data association of the different track information. Because the data measured by the millimeter-wave radar and the lidar may suffer processing and communication delays during transmission, timing problems arise from out-of-order local track information. To solve this problem, the covariance between the two sensors must be computed and the duplicated values subtracted. In view of this, the fusion center node updates the 2-D and 3-D local tracks with the information matrix fusion (IMF) algorithm.
The track update process is shown in Figure 4. At time k−2, the global track information contains only the 2-D local track information from time 0 to k_i−1. At time k−1, the global track information additionally fuses the 3-D local track information from time 0 to k_j; in other words, it contains both the 2-D and the 3-D local track information from time 0 to k_j. However, at time k, if the global track information were directly fused with the 2-D local track information at time k_i, information would be fused repeatedly, because besides the genuinely needed 2-D local track information from k_i−1 to k_i, the fused information also contains the duplicate 2-D local track information from time 0 to k_i−1.
The IMF algorithm avoids repeated fusion of track information during the update: it updates the fused track data by updating the covariance and the state estimate, preventing old data from entering new targets and thus avoiding repeated fusion. The algorithm flow is as follows:
Update the covariance:
P(k|k)⁻¹ = P(k|k−1)⁻¹ + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹ − P_i(k|k−1)⁻¹]
Update the state estimate:
P(k|k)⁻¹x̂(k|k) = P(k|k−1)⁻¹x̂(k|k−1) + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹x̂_i(k|k) − P_i(k|k−1)⁻¹x̂_i(k|k−1)]
where P(k|k) denotes the global covariance of the sensors from time 0 to k, corresponding to the global track information; P(k|k−1) the global covariance from time 0 to k−1; P_i(k|k) the local covariance of the i-th sensor from time 0 to k, corresponding to the local track information; P_i(k|k−1) the local covariance of the i-th sensor from time 0 to k−1; N_k the number of sensors; x̂(k|k) the global state estimate from time 0 to k; x̂(k|k−1) the global state estimate from time 0 to k−1; x̂_i(k|k) the local state estimate of the i-th sensor from time 0 to k; and x̂_i(k|k−1) the local state estimate of the i-th sensor from time 0 to k−1. Here a sensor means the millimeter-wave radar or the lidar.
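The two IMF update equations transcribe directly into information (inverse-covariance) form; the following is an illustrative sketch under that reading, not the patented implementation:

```python
import numpy as np

def imf_update(P_pred, x_pred, locals_now, locals_pred):
    """
    P_pred, x_pred: global P(k|k-1) and x(k|k-1).
    locals_now:     list of (P_i(k|k),   x_i(k|k))   per sensor.
    locals_pred:    list of (P_i(k|k-1), x_i(k|k-1)) per sensor.
    Returns the fused global P(k|k), x(k|k).
    """
    inv = np.linalg.inv
    info = inv(P_pred)                    # global information matrix
    info_state = info @ x_pred            # global information state
    for (Pi_now, xi_now), (Pi_pred, xi_pred) in zip(locals_now, locals_pred):
        info += inv(Pi_now) - inv(Pi_pred)                            # covariance update
        info_state += inv(Pi_now) @ xi_now - inv(Pi_pred) @ xi_pred   # state update
    P = inv(info)
    return P, P @ info_state
```

Subtracting each sensor's predicted information P_i(k|k−1)⁻¹ is precisely what removes the already-fused old track data and prevents repeated fusion.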
Fusing the two local tracks with the IMF algorithm yields the global track information, based on which the user can track the intelligent driving vehicle.
The present invention also provides a multi-sensor fusion system for intelligent driving vehicles, used to execute the multi-sensor fusion method for intelligent driving vehicles, comprising: an extended target tracking module, a bounding box detector module, a point target tracking module and a track fusion module;
the extended target tracking module is configured to process the 2-D detection information of the millimeter-wave radar according to the GM-PHD algorithm and the rectangular target model of the detected object, obtaining the millimeter-wave radar track information of the detected object;
the bounding box detector module is configured to process the 3-D detection information of the lidar with the RANSAC plane fitting algorithm and the Euclidean algorithm, obtaining the 3-D information of the detected object;
the point target tracking module is configured to process the 3-D information of the detected object with the JPDA tracker configured with IMM-UKF, obtaining the lidar track information of the detected object;
the track fusion module is configured to fuse the millimeter-wave radar track information and the lidar track information of the detected object, obtaining the global track information.
The above embodiments further illustrate the objectives, technical solutions and advantages of the present invention in detail. It should be understood that they are merely preferred embodiments of the invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within its protection scope.

Claims (8)

  1. A multi-sensor fusion method for intelligent driving vehicles, characterized by comprising:
    S1: building an extended target tracker from the GM-PHD algorithm and a rectangular target model of a detected object; processing 2-D detection information of a millimeter-wave radar with the extended target tracker to obtain millimeter-wave radar track information of the detected object;
    S2: building a bounding box detector and a JPDA tracker configured with IMM-UKF; processing 3-D detection information of a lidar with the bounding box detector and the JPDA tracker configured with IMM-UKF to obtain lidar track information of the detected object;
    S3: processing the millimeter-wave radar track information and the lidar track information via time-space conversion to obtain a central fusion node; processing the central fusion node with the IMF algorithm to obtain global track information; and tracking the detected object based on the global track information.
  2. The multi-sensor fusion method for intelligent driving vehicles according to claim 1, characterized in that the process of building the extended target tracker comprises:
    obtaining a rectangular extended target state of the detected object from the rectangular target model of the detected object;
    computing, from the rectangular extended target state, the multi-target predicted PHD at time k and the multi-target posterior PHD at time k with the GM-PHD algorithm, to obtain the extended target tracker.
  3. The multi-sensor fusion method for intelligent driving vehicles according to claim 2, characterized in that the rectangular extended target state is expressed as:
    ξ = (γ, x, X)
    wherein ξ denotes the state of the extended target of the detected object, γ denotes its measurement rate state, x denotes its motion state, and X denotes its extension state.
  4. The multi-sensor fusion method for intelligent driving vehicles according to claim 1, characterized in that the process of building the bounding box detector comprises: preprocessing the lidar data with a RANSAC-based plane fitting algorithm to obtain a target point cloud; clustering the target point cloud with the Euclidean algorithm; and constructing the state vector of the bounding box detector from the clustered target point cloud, thereby obtaining the bounding box detector.
  5. The multi-sensor fusion method for intelligent driving vehicles according to claim 4, characterized in that the state vector of the bounding box detector is:
    x′ = [x, y, v, θ, ω, z, ż, L, W, H]^T
    wherein x′ denotes the state vector, x denotes the abscissa of the detected target, y its ordinate, v its speed, θ its direction angle, ω its angular velocity, z its vertical coordinate, ż its vertical speed, L its length, W its width, and H its height.
  6. The multi-sensor fusion method for intelligent driving vehicles according to claim 1, characterized in that the process of building the JPDA tracker configured with IMM-UKF comprises:
    the JPDA tracker configured with IMM-UKF consists of an input interaction module, a UKF filtering module, a probability update module, a JPDA data association module and an output fusion module;
    the input interaction module computes and outputs a second state estimate and a second covariance matrix from the first state estimate and the first covariance matrix of the UKF filters in the UKF filtering module at time k;
    the UKF filters in the UKF filtering module output a third state estimate and a third covariance matrix at time k+1 from the output of the input interaction module and the effective observation vector at time k;
    the probability update module computes the conditional probability of the motion model at time k+1 from the residual information of the UKF filtering module;
    the JPDA data association module computes the second measurement information of the target under the motion model at time k+1 from the third state estimate, the third covariance matrix and the first measurement information of the target under the motion model;
    the output fusion module computes the fused state estimate and covariance matrix from the conditional probability of the motion model at time k+1, the second measurement information, the third state estimate and the third covariance matrix.
  7. The multi-sensor fusion method for intelligent driving vehicles according to claim 1, characterized in that the formulas with which the IMF algorithm processes the central fusion node comprise:
    updating the covariance: P(k|k)⁻¹ = P(k|k−1)⁻¹ + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹ − P_i(k|k−1)⁻¹]
    updating the state estimate: P(k|k)⁻¹x̂(k|k) = P(k|k−1)⁻¹x̂(k|k−1) + Σ_{i=1}^{N_k} [P_i(k|k)⁻¹x̂_i(k|k) − P_i(k|k−1)⁻¹x̂_i(k|k−1)]
    wherein P(k|k) denotes the global covariance of the sensors from time 0 to k, P(k|k−1) the global covariance from time 0 to k−1, P_i(k|k) the local covariance of the i-th sensor from time 0 to k, P_i(k|k−1) the local covariance of the i-th sensor from time 0 to k−1, N_k the number of sensors, x̂(k|k) the global state estimate from time 0 to k, x̂(k|k−1) the global state estimate from time 0 to k−1, x̂_i(k|k) the local state estimate of the i-th sensor from time 0 to k, and x̂_i(k|k−1) the local state estimate of the i-th sensor from time 0 to k−1.
  8. A multi-sensor fusion system for intelligent driving vehicles, used to execute the multi-sensor fusion method for intelligent driving vehicles of any one of claims 1 to 7, characterized by comprising: an extended target tracking module, a bounding box detector module, a point target tracking module and a track fusion module;
    the extended target tracking module is configured to process the 2-D detection information of the millimeter-wave radar according to the GM-PHD algorithm and the rectangular target model of the detected object, obtaining the millimeter-wave radar track information of the detected object;
    the bounding box detector module is configured to process the 3-D detection information of the lidar with the RANSAC plane fitting algorithm and the Euclidean algorithm, obtaining the 3-D information of the detected object;
    the point target tracking module is configured to process the 3-D information of the detected object with the JPDA tracker configured with IMM-UKF, obtaining the lidar track information of the detected object;
    the track fusion module is configured to fuse the millimeter-wave radar track information and the lidar track information of the detected object, obtaining the global track information.
PCT/CN2023/096478 2022-07-01 2023-05-26 Multi-sensor fusion method and system for intelligent driving vehicles WO2024001629A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210768001.5A CN115061139A (zh) 2022-07-01 2022-07-01 Multi-sensor fusion method and system for intelligent driving vehicles
CN202210768001.5 2022-07-01

Publications (1)

Publication Number Publication Date
WO2024001629A1 (zh)

Family

ID=83204431

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/096478 WO2024001629A1 (zh) 2022-07-01 2023-05-26 Multi-sensor fusion method and system for intelligent driving vehicles

Country Status (2)

Country Link
CN (1) CN115061139A (zh)
WO (1) WO2024001629A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118033409A (zh) * 2024-04-15 2024-05-14 三峡金沙江川云水电开发有限公司 GCB arc-extinguishing chamber switching resistance test method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115061139A (zh) 2022-07-01 2022-09-16 重庆邮电大学 Multi-sensor fusion method and system for intelligent driving vehicles
CN117109588B (zh) * 2023-08-25 2024-04-30 中国船舶集团有限公司第七零七研究所九江分部 Multi-source detection and multi-target information fusion method for intelligent navigation
CN117278941B (zh) * 2023-09-15 2024-06-25 广东省机场管理集团有限公司工程建设指挥部 Vehicle driving assistance positioning method and device based on a 5G network and data fusion

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778358A * 2015-04-09 2015-07-15 西安工程大学 Extended target tracking method for multiple sensors with partially overlapping surveillance regions
CN108490927A * 2018-01-24 2018-09-04 天津大学 Target tracking system and tracking method applied to driverless vehicles
CN110596693A * 2019-08-26 2019-12-20 杭州电子科技大学 Iteratively updated multi-sensor GM-PHD adaptive fusion method
CN112285700A * 2020-08-24 2021-01-29 江苏大学 Maneuvering target tracking method based on lidar and millimeter-wave radar fusion
KR20210011585A * 2019-07-23 2021-02-02 충북대학교 산학협력단 Vehicle position tracking method and apparatus using an extended Kalman filter
GB202107786D0 * 2021-06-01 2021-07-14 Daimler Ag Track fusion for an autonomous vehicle
CN114325635A * 2021-12-30 2022-04-12 上海埃威航空电子有限公司 Lidar and navigation radar target fusion method
CN115061139A * 2022-07-01 2022-09-16 重庆邮电大学 Multi-sensor fusion method and system for intelligent driving vehicles

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778358A * 2015-04-09 2015-07-15 西安工程大学 Extended target tracking method for multiple sensors with partially overlapping surveillance regions
CN108490927A * 2018-01-24 2018-09-04 天津大学 Target tracking system and tracking method applied to driverless vehicles
KR20210011585A * 2019-07-23 2021-02-02 충북대학교 산학협력단 Vehicle position tracking method and apparatus using an extended Kalman filter
CN110596693A * 2019-08-26 2019-12-20 杭州电子科技大学 Iteratively updated multi-sensor GM-PHD adaptive fusion method
CN112285700A * 2020-08-24 2021-01-29 江苏大学 Maneuvering target tracking method based on lidar and millimeter-wave radar fusion
GB202107786D0 * 2021-06-01 2021-07-14 Daimler Ag Track fusion for an autonomous vehicle
CN114325635A * 2021-12-30 2022-04-12 上海埃威航空电子有限公司 Lidar and navigation radar target fusion method
CN115061139A * 2022-07-01 2022-09-16 重庆邮电大学 Multi-sensor fusion method and system for intelligent driving vehicles

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118033409A (zh) * 2024-04-15 2024-05-14 三峡金沙江川云水电开发有限公司 GCB arc-extinguishing chamber switching resistance test method

Also Published As

Publication number Publication date
CN115061139A (zh) 2022-09-16

Similar Documents

Publication Publication Date Title
WO2024001629A1 (zh) Multi-sensor fusion method and system for intelligent driving vehicles
CN112347840B (zh) Visual sensor and lidar fusion UAV positioning and mapping device and method
CN111427370B (zh) Gmapping method for mobile robots based on sparse pose adjustment
CN112258600A (zh) Simultaneous localization and mapping method based on vision and lidar
CN109633590B (zh) Extended target tracking method based on GP-VSMM-JPDA
CN113781582A (zh) Simultaneous localization and map creation method based on joint calibration of lidar and inertial navigation
CN112946624A (zh) Multi-target tracking algorithm based on a track management method
CN114659514B (zh) LiDAR-IMU-GNSS fusion positioning method based on voxelized fine registration
CN116182837A (zh) Localization and mapping method based on tightly coupled visual-lidar-inertial fusion
CN115731268A (zh) UAV multi-target tracking method based on vision/millimeter-wave radar information fusion
CN110187337B (zh) Highly maneuverable target tracking method and system based on LS and NEU-ECEF spatiotemporal registration
JP4116898B2 (ja) Target tracking device
CN111739066B (zh) Visual positioning method, system and storage medium based on Gaussian processes
Xu et al. Dynamic vehicle pose estimation and tracking based on motion feedback for LiDARs
Gao et al. DC-Loc: Accurate automotive radar based metric localization with explicit doppler compensation
Li et al. Obstacle detection and tracking algorithm based on multi‐lidar fusion in urban environment
CN114608585A (zh) Simultaneous localization and mapping method and device for mobile robots
WO2024114119A1 (zh) Sensor fusion method guided by binocular cameras
Ebert et al. Deep radar sensor models for accurate and robust object tracking
CN116047495B (zh) State transformation fusion filtering tracking method for three-coordinate radar
Ramesh et al. Landmark-based RADAR SLAM for autonomous driving
CN115792890A (zh) Radar multi-target tracking method and system based on adaptive association of agglomerated measurements
Li et al. FARFusion: A Practical Roadside Radar-Camera Fusion System for Far-Range Perception
CN115457080A (zh) Multi-target vehicle trajectory extraction method based on pixel-level image fusion
CN115471526A (zh) Autonomous driving target detection and tracking method based on multi-source heterogeneous information fusion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23829814

Country of ref document: EP

Kind code of ref document: A1