CN116363163B - Space target detection and tracking method, system and storage medium based on event camera

Info

Publication number: CN116363163B
Application number: CN202310239102.8A
Authority: CN (China)
Prior art keywords: target, sequence, detection, current, event
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN116363163A
Inventors: 颜露新, 刘昊岳, 昌毅, 张磊, 钟胜, 周寒宇, 段宇兴
Current assignee: Huazhong University of Science and Technology
Original assignee: Huazhong University of Science and Technology
Filed on 2023-03-07 by Huazhong University of Science and Technology; priority date 2023-03-07

Classifications

    • G06T 7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/82 Image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V 20/64 Scenes; scene-specific elements: three-dimensional objects
    • G06T 2207/10012 Image acquisition modality: stereo images
    • G06T 2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06V 2201/07 Target detection

Abstract

The invention discloses an event-camera-based space target detection and tracking method, system and storage medium, belonging to the technical field of image processing. The method comprises: step S1, representing a collected event signal sequence as three-dimensional voxels; step S2, inputting the three-dimensional voxels into a trained target detection network to obtain the position sequence of the target over the current t moments; step S3, fitting the position sequence over the current t moments to obtain the target motion trajectory function over the current t moments; step S4, computing the target position at the next moment with the target trajectory function while determining whether the target detection network has a detection result at the next moment, correcting the target position at the next moment according to the detection result and the trajectory prediction, and updating the target position. The invention also provides a corresponding detection and tracking system. The invention can perform target detection and tracking on event data and improves the detection and tracking accuracy of dim, small, long-range space targets.

Description

Space target detection and tracking method, system and storage medium based on event camera

Technical Field

The invention belongs to the technical field of image processing, and more specifically relates to an event-camera-based space target detection and tracking method, system and storage medium.

Background

Biological vision systems achieve recognition, detection and tracking with neuromorphic computing architectures and principles that differ fundamentally from traditional digital signal processors, and they far outperform traditional digital systems in intelligence, operation, quality and power consumption. The event camera is a new type of neuromorphic sensor characterized by high dynamic range, low data redundancy and low latency. It can overcome the information redundancy of traditional frame-based cameras and their difficulty in capturing fast-moving targets, enabling low-power, low-latency motion trajectory detection.

Traditional frame-based imaging sensors use globally synchronized integrating sampling and record the absolute brightness of the external environment, whereas an event camera samples each pixel asynchronously and differentially, recording only the changes in external illumination; event signals are therefore sparser and more efficient. However, the data format of event streams differs greatly from that of image frames, and targets, background and interference manifest themselves in event data in ways that are fundamentally different from their appearance in images, so existing techniques cannot directly use event data for target detection and tracking.

In long-range space target detection and tracking tasks, such as space-based detection and tracking of ground targets, the target trajectory is locally linear and free of abrupt changes. The main difficulties in current detection and tracking tasks are that the long detection distance attenuates the target signal and that background clutter, such as clouds, interferes along the detection path. Existing detection and tracking algorithms are prone to missed and false detections of the target and can hardly meet the demand for high-precision detection and tracking of dim, small, long-range targets.

Summary of the Invention

In view of the defects and improvement needs of the prior art, the present invention provides an event-camera-based space target detection and tracking method, system and storage medium, whose purpose is to perform target detection and tracking on event data and to improve the detection and tracking accuracy of dim, small, long-range space targets.

To achieve the above object, according to one aspect of the present invention, an event-camera-based space target detection and tracking method is provided, comprising:

Step S1: representing the collected event signal sequence as three-dimensional voxels;

Step S2: inputting the three-dimensional voxels into a trained target detection network to obtain the position sequence of the target over the current t moments, wherein the target detection network is a convolutional neural network;

Step S3: fitting the position sequence over the current t moments to obtain the target motion trajectory function over the current t moments;

Step S4: computing the target position V_C(t+1) at the next moment with the target trajectory function, while determining whether the target detection network has a detection result at the next moment:

if there is a detection result, and the Euclidean distance between the detected target position V_S(t+1) and the computed target position V_C(t+1) is smaller than a given threshold, adopting the detected target position V_S(t+1) as the corrected target position V_(t+1) at the next moment; otherwise, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment;

if there is no detection result, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment.

Further, the method also comprises the steps:

Step S5: updating, in a first-in-first-out manner, the position sequence of the target over the current t moments with the corrected target position V_(t+1);

Step S6: repeating steps S3 to S5 to keep the target tracking trajectory updated.

Further, step S2 comprises:

Step S21: inputting the three-dimensional voxels into the trained target detection network, and outputting the detection result sequence of the target over the current t moments;

Step S22: removing abnormal detection results from the detection result sequence to obtain a normal detection result sequence;

Step S23: taking the last n values of the normal detection result sequence as the position sequence of the target over the current t moments, wherein n is a positive integer.

Further, in step S22, the abnormal detection results are removed from the detection result sequence with the random sample consensus (RANSAC) method, a Kalman filter method, or a correlation filter method.

Further, step S1 comprises:

Step S11: constructing a three-dimensional space-time volume composed of two-dimensional space and one-dimensional time, and dividing the volume into a uniformly distributed cubic grid;

Step S12: filling the event signal sequence into the corresponding cells of the cubic grid according to the trigger positions and trigger times of the event signal sequence, to obtain the three-dimensional voxels of the event signal sequence.

Further, the three-dimensional voxel V(x, y, t) is:

V(x, y, t) = Σ_{i=1}^{N} p_i · k_b(x − x_i) · k_b(y − y_i) · k_b(t − t_i*)

where 0 ≤ x ≤ W and 0 ≤ y ≤ H, H and W respectively denote the numbers of pixels of the event camera in the vertical and horizontal directions, t denotes the current moment, 0 ≤ t ≤ B, and B is the number of units into which the time dimension is quantized within the specified time; i denotes the index of an event, taking values in [1, N], N denotes the total number of events, and x_i, y_i denote the pixel position at which event i is triggered; p_i ∈ {1, −1} is the polarity of event i: p_i = 1 when the light intensity at pixel (x_i, y_i) changes from dark to bright, and p_i = −1 otherwise; k_b is the interpolation function, k_b(a) = max(0, 1 − |a|), where a denotes the interpolation offset along one dimension of the three-dimensional voxel grid; t_i* denotes the relative position of the trigger time of the current event in the time dimension of the three-dimensional voxels, expressed as:

t_i* = (B − 1)(t_i − t_1) / (t_N − t_1)

where t_i denotes the timestamp at which the i-th event is triggered, and t_1 and t_N denote the timestamps of the first and the N-th event respectively.

Further, in step S3, a linear fitting method is used to fit the position sequence over the current t moments.

According to another aspect of the present invention, an event-camera-based space target detection and tracking system is provided, comprising:

an event signal representation module, configured to represent the collected event signal sequence as three-dimensional voxels;

a position sequence acquisition module, configured to input the three-dimensional voxels into a trained target detection network to obtain the position sequence of the target over the current t moments, wherein the target detection network is a convolutional neural network;

a target motion trajectory function fitting module, configured to fit the position sequence over the current t moments to obtain the target motion trajectory function over the current t moments;

a target position correction module, configured to compute the target position V_C(t+1) at the next moment with the target trajectory function, while determining whether the target detection network has a detection result at the next moment:

if there is a detection result, and the Euclidean distance between the detected target position V_S(t+1) and the computed target position V_C(t+1) is smaller than a given threshold, adopting the detected target position V_S(t+1) as the corrected target position V_(t+1) at the next moment; otherwise, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment;

if there is no detection result, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment.

Further, the system also comprises:

a position sequence update module, configured to update, in a first-in-first-out manner, the position sequence of the target over the current t moments with the corrected target position V_(t+1);

a target tracking module, configured to repeatedly execute the target motion trajectory function fitting module and the target position correction module to keep the target tracking trajectory updated.

According to another aspect of the present invention, a computer-readable storage medium is provided on which a computer program is stored; when executed by a processor, the program implements the method according to any one of the above aspects.

In general, the above technical solutions conceived by the present invention can achieve the following beneficial effects:

(1) The detection and tracking method of the present invention represents the event data of a space target as a three-dimensional voxel grid and feeds this grid to the target detection network to predict the initial trajectory. Exploiting the characteristics of space target trajectories, it predicts the target position at the next moment with the target motion trajectory function of the current t moments. When the space target disappears because it is occluded by clouds or by the external environment, the predicted position is used directly as the target position at the next moment, which solves the problem that the target detection network cannot detect, or misses, the target; when the detection result output by the target detection network deviates strongly from the predicted position, the predicted position is adopted as the target position at the next moment, which avoids false detections of the network caused by clutter interference. By combining target detection with locally fitted trajectories in this way, the present invention guarantees a fairly accurate target position at every moment, avoids undetectable, missed and false targets, and achieves high-speed, high-precision target tracking.

(2) Based on the corrected target position, the tracking trajectory is kept updated by updating the position sequence of the target over the current t moments and carrying out the next round of trajectory prediction.

(3) Preferably, removing abnormal detection results from the detection result sequence of the target detection network over the current t moments accelerates the convergence of the detection and tracking method and improves its accuracy; corresponding methods for removing abnormal detection results are provided.

(4) By representing the one-dimensional event sequence as a three-dimensional voxel grid, the present invention recovers the local spatial features that a one-dimensional event sequence lacks and builds a bridge between event data and image data, so that event data can subsequently be used for target detection and tracking; event data offer the advantages of high dynamic range, high frame rate and low data rate.

(5) Since the trajectory of a space target is locally linear and free of abrupt changes, a linear fitting method is preferably used to fit the position sequence over the current t moments; the method is reasonable and improves the detection and tracking accuracy.

In summary, the present invention reasonably represents the event sequence and, by combining trajectory prediction with a detection network, improves the accuracy of detecting and tracking dim, small, long-range targets in space.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the event-camera-based space target detection and tracking method of the present invention.

Figure 2 is a flow chart of the event-camera-based space target detection and tracking method of the present invention.

Figure 3 is a schematic diagram of excluding detection outliers with the random sample consensus method and of the linear fitting of the present invention.

Figure 4 is a schematic diagram of the mutual correction between target detection results and trajectory prediction results of the present invention.

Figure 5 is a schematic flow chart of trajectory correction and updating of the present invention.

Figure 6(a) shows the trajectory output by the target detection network in an embodiment of the present invention; Figure 6(b) is a schematic diagram of trajectory fitting with the detection and tracking method of the present invention; Figure 6(c) shows the visualization of the target trajectory on the event frame.

Detailed Description

To make the object, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain the present invention and are not intended to limit it. In addition, the technical features involved in the various embodiments described below can be combined with each other as long as they do not conflict.

As shown in Figures 1 and 2, the event-camera-based space target detection and tracking method of the present invention mainly comprises the following steps:

Step S1: representing the event signal sequence collected by the event camera as three-dimensional voxels;

Step S2: inputting the three-dimensional voxels into a trained target detection network to obtain the position sequence of the target over the current t moments, wherein the target detection network is a convolutional neural network;

Step S3: fitting the position sequence over the current t moments to obtain the target motion trajectory function over the current t moments;

Step S4: mutually correcting the detection results and the trajectory prediction: using the trajectory function of the current t moments to compute the target position V_C(t+1) at moment t+1 (the next moment), while determining whether the target detection network has a detection result at the next moment. If the target detection network has no detection result at the next moment t+1 (i.e., it produces no output at t+1), the computed target position V_C(t+1) is directly adopted as the corrected target position V_(t+1) at the next moment;

if the target detection network has a detection result V_S(t+1) at the next moment, and the Euclidean distance between V_S(t+1) and the computed target position V_C(t+1) is smaller than a given threshold, the detected target position V_S(t+1) is adopted as the corrected target position V_(t+1) at the next moment; otherwise the detection is regarded as a false detection of the target detection network, and the computed target position V_C(t+1) at the next moment t+1 is adopted as the corrected target position V_(t+1).

After the target trajectory is predicted with the above steps, the next round of trajectory prediction is performed based on the corrected target position V_(t+1) obtained in step S4 to update the target tracking trajectory, comprising the following steps:

Step S5, target position sequence update: updating, in a first-in-first-out manner, the position sequence of the target over the current t moments with the corrected target position V_(t+1);

Step S6: repeating steps S3 to S5 to keep the target tracking trajectory updated.

Specifically, step S1 comprises the steps:

Step S11: constructing a three-dimensional space-time volume composed of two-dimensional space and one-dimensional time, and dividing the volume into a uniformly distributed cubic grid;

Step S12: according to the trigger positions and trigger times of the event signal sequence collected by the event camera, filling the one-dimensional event signal sequence into the corresponding cells of the cubic grid to obtain the three-dimensional voxels V(x, y, t) of the event signal sequence {(x_i, y_i, t_i, p_i)}, i ∈ [1, N], thereby achieving a three-dimensional gridded representation of the event signal;

where 0 ≤ x ≤ W and 0 ≤ y ≤ H, H and W respectively denote the numbers of pixels of the event camera in the vertical and horizontal directions, and t denotes the current moment; i denotes the index of an event, taking values in [1, N], N denotes the total number of events, x_i, y_i denote the pixel position at which event i is triggered, t_i denotes the timestamp at which event i is triggered, and p_i ∈ {1, −1} is the polarity of event i: p_i = 1 when the light intensity at pixel (x_i, y_i) changes from dark to bright, and p_i = −1 otherwise.

In step S12, the voxel grid (three-dimensional voxels) V(x, y, t) generated from the event sequence is:

V(x, y, t) = Σ_{i=1}^{N} p_i · k_b(x − x_i) · k_b(y − y_i) · k_b(t − t_i*)

where k_b(a) is the interpolation function, k_b(a) = max(0, 1 − |a|), a denotes the interpolation offset along one dimension of the three-dimensional voxel grid, and t_i* denotes the relative position of the trigger time of the current event in the time dimension of the voxel grid, expressed as:

t_i* = (B − 1)(t_i − t_1) / (t_N − t_1)

where t_1 denotes the timestamp of the first event, t_N denotes the timestamp of the N-th event, B is the number of units into which the time dimension is quantized within the specified time, and t denotes the current moment, 0 ≤ t ≤ B.

After the above three-dimensional voxel representation of the event signal sequence, the event sequence is represented as a W×H×B tensor, i.e., V(x, y, t) denotes a W×H×B tensor, which is fed to the target detection network in step S2 to detect the initial target positions.
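For illustration only, the following is a minimal Python/NumPy sketch of how such a voxel grid could be built from an event stream given as NumPy arrays xs, ys, ts, ps; the function name, the B×H×W array layout and the assumption that events lie on integer pixel coordinates are illustrative choices, not part of the claimed method:

```python
import numpy as np

def events_to_voxel_grid(xs, ys, ts, ps, W, H, B):
    """Accumulate events into a voxel grid V(x, y, t).

    Since events are triggered at integer pixel positions, the spatial
    kernels k_b(x - x_i) and k_b(y - y_i) reduce to 1 at the triggering
    pixel, so only the time axis needs interpolation.
    """
    V = np.zeros((B, H, W), dtype=np.float32)
    # relative position t_i* of each event in the quantized time dimension
    denom = max(ts[-1] - ts[0], 1e-9)
    t_star = (B - 1) * (ts - ts[0]) / denom
    t0 = np.floor(t_star).astype(int)       # lower temporal bin
    w1 = t_star - t0                        # weight toward the upper bin
    for dt, w in ((0, 1.0 - w1), (1, w1)):  # k_b(a) = max(0, 1 - |a|)
        tb = t0 + dt
        ok = (tb >= 0) & (tb < B)
        np.add.at(V, (tb[ok], ys[ok], xs[ok]), ps[ok] * w[ok])
    return V  # B x H x W array, i.e. the W x H x B tensor of the text
```

The resulting tensor can then be reshaped to whatever input layout the detection network expects.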

By representing the one-dimensional event sequence as a three-dimensional voxel grid, the present invention recovers the local spatial features that a one-dimensional event sequence lacks and builds a bridge between event data and image data, so that event data can subsequently be used for target detection and tracking. Specifically, step S2 comprises:

Step S21: inputting the three-dimensional voxels into the trained target detection network, and outputting the detection result sequence of the target over the current t moments;

Step S22: using the detection result sequence as initialization points for trajectory prediction, and removing abnormal detection results from the sequence to obtain a normal detection result sequence;

Step S23: taking the last n values of the normal detection result sequence as the position sequence of the target over the current t moments, wherein n is a positive integer. In this embodiment, the detection result sequence output by the target detection network over the current t moments has length 10 and n is set to 5; in practical applications, the value of n is chosen empirically.

Preferably, in step S22, the abnormal detection results are removed from the detection result sequence with the random sample consensus (RANSAC) method, a Kalman filter method, or a correlation filter method.

As shown in Figure 3, removing abnormal detection results from the detection result sequence with the random sample consensus method mainly comprises: repeatedly selecting two detection results at random from the sequence and fitting a line through them. The slope of a line fitted through outliers (abnormal detections) differs markedly from that of a line fitted through inliers (normal detections), and this difference is exploited to exclude the abnormal detection points. Since the sequence contains more normal detection points than abnormal ones, after enough random samples the sets of normal and abnormal detection points are obtained separately, and the abnormal points are excluded.
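As an illustration of this sampling-and-consensus step, here is a minimal Python sketch, assuming detections are given as (x, y) pixel coordinates; the iteration count and the 2-pixel inlier tolerance are illustrative choices, not values taken from the patent:

```python
import numpy as np

def ransac_split(points, n_iters=100, tol=2.0, seed=None):
    """Split detections into inliers/outliers by repeatedly fitting a
    line through two randomly chosen detections and keeping the line
    with the largest consensus set."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)   # shape (M, 2)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        # distance of every point to the line through the two samples
        d = np.abs((y2 - y1) * pts[:, 0] - (x2 - x1) * pts[:, 1]
                   + x2 * y1 - y2 * x1) / (np.hypot(x2 - x1, y2 - y1) + 1e-9)
        mask = d < tol                      # consensus set of this line
        if mask.sum() > best.sum():
            best = mask
    return pts[best], pts[~best]            # normal points, abnormal points
```

Because normal detections outnumber abnormal ones, the largest consensus set converges to the true trajectory points.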

Excluding abnormal detection results accelerates the convergence of the detection and tracking method and improves the accuracy of detection and tracking.

Preferably, the target detection network is a convolutional neural network such as Faster R-CNN or YOLO.

In step S3, the position sequence over the current t moments is fitted by linear fitting or another data fitting method to obtain the target motion trajectory function of the current t moments.
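A minimal sketch of such a linear fit in Python, assuming for illustration that the target's x and y pixel coordinates are each fitted as linear functions of time (function and variable names are illustrative):

```python
import numpy as np

def fit_trajectory(times, positions):
    """Least-squares linear fit of the position sequence; returns a
    trajectory function that predicts the position at any moment."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(positions, dtype=float)  # shape (t, 2): (x, y) per moment
    cx = np.polyfit(t, p[:, 0], deg=1)      # x(t) = cx[0] * t + cx[1]
    cy = np.polyfit(t, p[:, 1], deg=1)      # y(t) = cy[0] * t + cy[1]
    return lambda tq: np.array([np.polyval(cx, tq), np.polyval(cy, tq)])

# usage: predict V_C(t+1) from the last five corrected positions
# traj = fit_trajectory([0, 1, 2, 3, 4], last_positions)
# v_c_next = traj(5)
```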

In step S4 of this embodiment, the given Euclidean distance threshold is 5 (measured in pixels); in other embodiments it is chosen according to the required detection and tracking accuracy.

Specifically, as shown in Figures 4 and 5, the predicted value in the figures refers to the computed target position V_C(t+1) at the next moment, and the detected value refers to the detection result output by the target detection network at the next moment, i.e., the detected target position V_S(t+1). In Figure 4, the left panel shows the case where the Euclidean distance between the predicted and detected values at moment t+1 is smaller than the set threshold, so the detected value is adopted as the target position at the next moment; the right panel shows the case where that distance exceeds the set threshold, so the predicted value is adopted as the target position at the next moment.
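Combining the correction rule of step S4 with the first-in-first-out update of step S5, a minimal Python sketch follows; the 5-pixel threshold matches this embodiment, while the sequence length of 5 and all names are illustrative assumptions:

```python
from collections import deque
import numpy as np

def correct_position(v_c, v_s=None, thresh=5.0):
    """Step S4: keep the detection V_S(t+1) only when it lies within
    `thresh` pixels of the prediction V_C(t+1); otherwise fall back to
    the prediction (covers both missed and false detections)."""
    if v_s is not None and np.linalg.norm(np.subtract(v_s, v_c)) < thresh:
        return np.asarray(v_s, dtype=float)
    return np.asarray(v_c, dtype=float)

# Step S5: FIFO update of the position sequence over the current t moments
positions = deque(maxlen=5)   # appending a new position drops the oldest
# positions.append(correct_position(v_c_next, detection_or_None))
# ...then refit the trajectory (step S3) and repeat (step S6)
```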

The detection and tracking method of the present invention predicts the target position at the next moment with the target motion trajectory function of the current t moments. When the space target disappears because it is occluded by clouds or by the external environment, the predicted position is used directly as the target position at the next moment, which solves the problem that the target detection network cannot detect, or misses, the target; when the detection result output by the network deviates strongly from the predicted position, the predicted position is adopted as the target position at the next moment, which avoids false detections of the network caused by clutter interference. Given the characteristics of space target trajectories, combining target detection with locally linearly fitted trajectories yields a fairly accurate target position at every moment, avoids undetectable, missed and false targets, and achieves high-speed, high-precision target tracking.

In an embodiment of the present invention, as shown in Figure 6(a), when the target detection network alone is used for trajectory detection, positions are missed in the middle of the trajectory and false detection points are produced. Figure 6(b) shows the result of trajectory fitting with the detection and tracking method of the present invention, and Figure 6(c) shows the visualization of the target trajectory on the event frame. In Figures 6(a) and 6(b) the horizontal and vertical axes denote pixel positions, and the straight line is the target trajectory after linear fitting with the random sample consensus algorithm (RANSAC). The points in Figure 6(a) are the target positions obtained with the target detection network alone, while the points in Figure 6(b) are the target positions obtained with the combined prediction-and-detection method of the present invention. It can be seen that the present invention can use an event camera to accurately exclude false detections and fill in missed detections against complex backgrounds, achieving high-speed, high-precision target tracking by combining target detection with locally linearly fitted trajectories.

According to another aspect of the present invention, an event-camera-based space target detection and tracking system is provided, mainly comprising:

an event signal representation module, configured to represent the collected event signal sequence as three-dimensional voxels;

a position sequence acquisition module, configured to input the three-dimensional voxels into a trained target detection network to obtain the position sequence of the target over the current t moments, wherein the target detection network is a convolutional neural network;

a target motion trajectory function fitting module, configured to fit the position sequence over the current t moments to obtain the target motion trajectory function over the current t moments;

a target position correction module, configured to compute the target position V_C(t+1) at the next moment with the target trajectory function, while determining whether the target detection network has a detection result at the next moment:

if there is a detection result, and the Euclidean distance between the detected target position V_S(t+1) and the computed target position V_C(t+1) is smaller than a given threshold, adopting the detected target position V_S(t+1) as the corrected target position V_(t+1) at the next moment; otherwise, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment;

if there is no detection result, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment.

The system further comprises:

a position sequence update module, configured to update, in a first-in-first-out manner, the position sequence of the target over the current t moments with the corrected target position V_(t+1);

a target tracking module, configured to repeatedly execute the target motion trajectory function fitting module, the target position correction module and the position sequence update module to keep the target tracking trajectory updated.

Each module corresponds to a step of the event-camera-based space target detection and tracking method of the above embodiments. According to another aspect of the present invention, a computer-readable storage medium is provided on which a computer program is stored; when executed by a processor, the program carries out each step of the event-camera-based space target detection and tracking method of the above embodiments.

Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (5)

1. An event-camera-based space target detection and tracking method, characterized by comprising the following steps:
step S1: representing a collected event signal sequence as three-dimensional voxels;
step S2: inputting the three-dimensional voxels into a trained target detection network to obtain a position sequence of the target over the current t moments, wherein the target detection network is a convolutional neural network;
step S3: fitting the position sequence over the current t moments to obtain a target motion trajectory function over the current t moments;
step S4: computing the target position V_C(t+1) at the next moment with the target motion trajectory function, and determining whether the target detection network has a detection result at the next moment:
if there is a detection result, and the Euclidean distance between the detected target position V_S(t+1) and the computed target position V_C(t+1) is smaller than a given threshold, adopting the detected target position V_S(t+1) as the corrected target position V_(t+1) at the next moment; otherwise, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment;
if there is no detection result, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment;
wherein step S1 comprises the following steps:
step S11: constructing a three-dimensional space-time volume composed of two-dimensional space and one-dimensional time, and dividing the volume into a uniformly distributed cubic grid;
step S12: filling the event signal sequence into the corresponding cells of the cubic grid according to the trigger positions and trigger times of the event signal sequence, to obtain the three-dimensional voxels of the event signal sequence;
the three-dimensional voxel V(x, y, t) being:
V(x, y, t) = Σ_{i=1}^{N} p_i · k_b(x − x_i) · k_b(y − y_i) · k_b(t − t_i*)
wherein 0 ≤ x ≤ W and 0 ≤ y ≤ H, H and W respectively denote the numbers of pixels of the event camera in the vertical and horizontal directions, t denotes the current moment, 0 ≤ t ≤ B, and B is the number of units into which the time dimension is quantized within a specified time; i denotes the index of an event, taking values in [1, N], N denotes the total number of events, and x_i, y_i denote the pixel position at which event i is triggered; p_i ∈ {1, −1} is the polarity of event i, with p_i = 1 when the light intensity at pixel (x_i, y_i) changes from dark to bright and p_i = −1 otherwise; k_b is the interpolation function, k_b(a) = max(0, 1 − |a|), wherein a denotes the interpolation offset along one dimension of the three-dimensional voxel grid; and t_i* denotes the relative position of the trigger time of the current event in the time dimension of the three-dimensional voxels, expressed as:
t_i* = (B − 1)(t_i − t_1) / (t_N − t_1)
wherein t_i denotes the timestamp at which the i-th event is triggered, and t_1 and t_N denote the timestamps of the first and the N-th event respectively;
wherein step S2 comprises the following steps:
step S21: inputting the three-dimensional voxels into the trained target detection network, and outputting a detection result sequence of the target over the current t moments;
step S22: removing abnormal detection results from the detection result sequence to obtain a normal detection result sequence;
step S23: taking the last n values of the normal detection result sequence as the position sequence of the target over the current t moments, wherein n is a positive integer;
the method further comprising the steps of:
step S5: updating, in a first-in-first-out manner, the position sequence of the target over the current t moments with the corrected target position V_(t+1);
step S6: repeating steps S3 to S5 to update the target tracking trajectory.
2. The method according to claim 1, wherein in step S22 the abnormal detection results in the detection result sequence are removed with a random sample consensus method, a Kalman filter method, or a correlation filter method.
3. The method according to claim 1, wherein in step S3 a linear fitting method is used to fit the position sequence over the current t moments.
4. An event-camera-based space target detection and tracking system, comprising:
an event signal representation module, configured to represent a collected event signal sequence as three-dimensional voxels, specifically: constructing a three-dimensional space-time volume composed of two-dimensional space and one-dimensional time, and dividing the volume into a uniformly distributed cubic grid; and filling the event signal sequence into the corresponding cells of the cubic grid according to the trigger positions and trigger times of the event signal sequence, to obtain the three-dimensional voxels of the event signal sequence; the three-dimensional voxel V(x, y, t) being:
V(x, y, t) = Σ_{i=1}^{N} p_i · k_b(x − x_i) · k_b(y − y_i) · k_b(t − t_i*)
wherein 0 ≤ x ≤ W and 0 ≤ y ≤ H, H and W respectively denote the numbers of pixels of the event camera in the vertical and horizontal directions, t denotes the current moment, 0 ≤ t ≤ B, and B is the number of units into which the time dimension is quantized within a specified time; i denotes the index of an event, taking values in [1, N], N denotes the total number of events, and x_i, y_i denote the pixel position at which event i is triggered; p_i ∈ {1, −1} is the polarity of event i, with p_i = 1 when the light intensity at pixel (x_i, y_i) changes from dark to bright and p_i = −1 otherwise; k_b is the interpolation function, k_b(a) = max(0, 1 − |a|), wherein a denotes the interpolation offset along one dimension of the three-dimensional voxel grid; and t_i* denotes the relative position of the trigger time of the current event in the time dimension of the three-dimensional voxels, expressed as:
t_i* = (B − 1)(t_i − t_1) / (t_N − t_1)
wherein t_i denotes the timestamp at which the i-th event is triggered, and t_1 and t_N denote the timestamps of the first and the N-th event respectively;
a position sequence acquisition module, configured to input the three-dimensional voxels into a trained target detection network to obtain a position sequence of the target over the current t moments, specifically: inputting the three-dimensional voxels into the trained target detection network and outputting a detection result sequence of the target over the current t moments; removing abnormal detection results from the detection result sequence to obtain a normal detection result sequence; and taking the last n values of the normal detection result sequence as the position sequence of the target over the current t moments, wherein n is a positive integer and the target detection network is a convolutional neural network;
a target motion trajectory function fitting module, configured to fit the position sequence over the current t moments to obtain the target motion trajectory function over the current t moments;
a target position correction module, configured to compute the target position V_C(t+1) at the next moment with the target motion trajectory function and to determine whether the target detection network has a detection result at the next moment:
if there is a detection result, and the Euclidean distance between the detected target position V_S(t+1) and the computed target position V_C(t+1) is smaller than a given threshold, adopting the detected target position V_S(t+1) as the corrected target position V_(t+1) at the next moment; otherwise, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment;
if there is no detection result, adopting the computed target position V_C(t+1) as the corrected target position V_(t+1) at the next moment;
the system further comprising:
a position sequence update module, configured to update, in a first-in-first-out manner, the position sequence of the target over the current t moments with the corrected target position V_(t+1);
a target tracking module, configured to repeatedly execute the target motion trajectory function fitting module, the target position correction module and the position sequence update module to update the target tracking trajectory.
5. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 3.

Priority Applications (1)

CN202310239102.8A, priority date 2023-03-07, filing date 2023-03-07: Space target detection tracking method, system and storage medium based on event camera

Publications (2)

CN116363163A, published 2023-06-30
CN116363163B, published 2023-11-14

Family ID: 86906171

Families Citing this family (3)

CN116958142B, priority 2023-09-20, published 2023-12-15, 安徽大学: Target detection and tracking method based on compound eye event imaging and high-speed turntable
CN118138904B, priority 2024-03-07, published 2024-12-10, 华中科技大学: Tailing effect suppression method, system and storage medium of event camera
CN118823682B, priority 2024-09-13, published 2024-12-20, 中核国电漳州能源有限公司: Method and system for monitoring tiny falling objects enhanced by plane laser

Patent Citations (4)

CN110770790A, priority 2017-06-14, published 2020-02-07, 祖克斯有限公司: Voxel-based ground plane estimation and object segmentation
WO2021072696A1, priority 2019-10-17, published 2021-04-22, 深圳市大疆创新科技有限公司: Target detection and tracking method and system, and movable platform, camera and medium
CN112561966A, priority 2020-12-22, published 2021-03-26, 清华大学: Sparse point cloud multi-target tracking method fusing spatio-temporal information
CN114140656A, priority 2022-02-07, published 2022-03-04, 中船(浙江)海洋科技有限公司: Marine ship target identification method based on event camera

Non-Patent Citations (2)

LI B, et al. 3D fully convolutional network for vehicle detection in point cloud. 《IEEE Access》: 1513-1518.
丰超. 基于三维点云的无人驾驶系统车辆检测 (Vehicle detection for driverless systems based on three-dimensional point clouds). 《激光杂志》 (Laser Journal): 61-69.


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant