CN114241008B - A Long-term Region Tracking Method Adapting to Scene and Object Variations - Google Patents

A Long-term Region Tracking Method Adapting to Scene and Object Variations

Info

Publication number
CN114241008B
CN114241008B
Authority
CN
China
Prior art keywords
tracking
target
frame
fine
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111573298.1A
Other languages
Chinese (zh)
Other versions
CN114241008A (en)
Inventor
李波
辛明
张贵伟
张超
刘偲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN202111573298.1A
Publication of CN114241008A
Application granted
Publication of CN114241008B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T7/262 Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a long-term region tracking method that adapts to scene and target changes, comprising the following steps: the tracking position is predicted quickly and accurately by combining coarse tracking, fine tracking, and center-position correction; multi-cue evaluation, in which the tracking confidence is corrected with the apparent similarity of the target and, together with a multi-frame interval constraint, the tracking result is comprehensively judged, eliminating tracking false alarms and improving the reliability of the result; and flexible tracking, in which, according to the evaluation, the tracking region is adaptively updated when tracking succeeds and the target is recaptured when tracking fails. When the tracking result is judged successful, weak inter-frame changes over short intervals in small scenes are accommodated by frame-by-frame weighted updating, while significant changes in target appearance over long intervals are accommodated by an active restart module. The method offers fast tracking response, high accuracy, and reliable confidence.

Description

A Long-term Region Tracking Method Adapting to Scene and Object Variations

Technical Field

The invention belongs to the technical field of digital image processing and focuses on long-term tracking of a specific target while the imaging device is in motion. The algorithm actively adapts to changes in target scale and appearance, has a small computational cost, and can be readily deployed on various low-performance hardware platforms.

Background

The main purpose of target tracking is to imitate the motion-perception function of the physiological visual system: after the initial state of the target of interest is obtained, the image sequence subsequently captured by the imaging device is analyzed, the position of the target is computed in every frame, and the target's trajectory within a specific spatio-temporal range is obtained, providing technical support for higher-level processing and applications such as target recognition, target behavior analysis, and 3D reconstruction. In recent years, target tracking technology has developed rapidly and continuously, and has been widely applied in defense and military fields such as battlefield reconnaissance and surveillance, border patrol, key-target positioning, fire correction and strike, electronic warfare, and damage assessment. It has also been widely used in civilian fields such as security monitoring, traffic operation, aerial photography, and disaster monitoring.

A long-term region tracking method adapting to scene and target changes mainly targets UAV monitoring systems or high-altitude observation systems. The monitored target is a specific rigid body, and the monitoring distance can reach several kilometers or even more than ten kilometers. To operate around the clock in all weather, infrared monitoring equipment is generally installed, and to keep the target locked, the carrying platform or gimbal may undergo various attitude changes. In this application mode, the difficulty of target tracking lies in the following aspects. (1) From the perspective of the target's own imaging, the apparent change is large. At long shooting distances, limited by infrared imaging itself, the target occupies few pixels, lacks sufficient texture information, and has weak apparent features. During continuous tracking of a locked target, changes in its apparent imaging are inevitable, including changes in target scale and appearance detail caused by changes in the distance between the imaging device and the target, as well as apparent changes caused by changes in the target's attitude or trajectory. (2) From the perspective of the external environment, similar targets may interfere with and occlude one another during long-distance shooting; because of the overhead viewing angle, the target is also easily submerged in the surrounding background; and motion of the imaging platform may cause motion blur or even move the target out of the field of view. (3) From the perspective of the tracking algorithm's own limitations, current real-time tracking algorithms, whether generative or discriminative, rely on the assumption that within the same video the size and spatial position of the same object do not change drastically between consecutive frames, and they determine the target position in the next frame from a given target template or a trained classifier. But the essential difference between tracking and other vision tasks is the ability to adapt to the gradual change of a moving target, so online updating plays a crucial role in visual tracking. However, online updating is a double-edged sword when balancing the description of dynamic information against the introduction of unexpected noise: accumulating errors over a long time, collecting inappropriate samples when the target disappears, or overfitting the available data easily degrades the tracker's performance and leads to tracking drift.

In practice, although many tracking algorithms have made significant progress in appearance modeling and robust tracking, target tracking remains a very complicated problem in the face of many practical difficulties. Compared with the deep-learning algorithms popular in recent years, correlation-filter tracking has become a research hotspot for real-time processing systems thanks to its accuracy, robustness to apparent target changes, and speed. This approach learns a filter template, convolves the next frame's image with the template, and predicts the target position from the output response; in practice, the FFT converts the image convolution into a point-wise product in the frequency domain, greatly reducing computational complexity, and processing speeds of several hundred frames per second can be reached when the target is described by raw pixel values. In order to meet real-time requirements, to evaluate the prediction after the basic position prediction is completed, and to recover the target after it is lost, the present invention proposes an adaptively updated long-term target tracking method under the correlation-filter framework, focusing on the following technical problems.

(1) Accurate position prediction: targets in long-range infrared imaging are small, weak, and easily submerged in the surrounding background. If localization relies only on the target's own appearance, the current practice of determining the target position in a single step from the maximum confidence response easily causes the tracked position to drift. The invention must make full use of the target and its surrounding background information to precisely determine the final target position and guarantee the accuracy of the tracking result.

(2) Credible tracking results: the main cause of target drift under various external disturbances is that the tracking algorithm cannot reliably judge its own results, so the model-update mechanism goes wrong. When current algorithms update online, they generally use historical tracking results as training samples, implicitly assuming that those results are correct or have small error; in practice it is difficult to guarantee that tracking results contain no errors. A robust tracker should verify its results by multiple means through various external physical constraints to ensure their reliability.

(3) Correct update content: for the practical problem of apparent target change, existing trackers fix the tracked object in the first frame and then update frame by frame to accommodate later dynamic changes in pose or scale. This update scheme suits scenes where the target's appearance changes slowly within the field of view and its motion trajectory is continuous, and it cannot fully solve the problem of drastic scale change. When the tracked object extends beyond the field of view and only part of it remains visible, this update scheme causes tracking drift due to boundary padding, and the amount of computation also increases sharply. The invention requires the tracker to continuously and adaptively update the target template throughout tracking, accommodating drastic scale and appearance changes while guaranteeing tracking speed and accuracy.

Summary of the Invention

In order to achieve wide-area, long-term target tracking under continuously changing viewpoints, the present invention proposes a long-term region tracking method that adapts to scene and target changes.

To achieve the above objective, the present invention adopts the following technical solution:

A long-term region tracking method adapting to scene and target changes, comprising the following steps:

tracking the target position in images collected by a UAV monitoring system or high-altitude observation system based on a three-stage combination of coarse tracking, fine tracking, and center-position correction;

computing the target-position tracking confidence, correcting it based on the apparent similarity of the target, and comprehensively judging the corrected tracking result in combination with a multi-frame interval constraint;

according to the comprehensive judgment, adaptively updating the tracking model when tracking succeeds and recapturing the target when tracking fails.

In the above method, tracking the target position in images collected by the UAV monitoring system or high-altitude observation system based on the three-stage combination of coarse tracking, fine tracking, and center-position correction comprises:

according to the target position P_{t-1} of the previous frame, selecting at the corresponding position in the current frame an image block of the same size as the coarse-tracking filter template as the coarse search region, and performing a coarse search to obtain a preliminary position estimate P_c of the target; if the value of the tracking response map at that position is higher than the threshold thr_ρc, the center of the fine search is set to this point; otherwise P_{t-1} is still used as the center of the current frame's fine search;

selecting, at the fine-search center determined above, an image block of the same size as the previous frame's fine-tracking filter template as the fine search region, and performing a fine search to obtain the fine tracking position P_f; if the value of the tracking response map at that position is higher than the threshold thr_ρc, accepting the fine search result and proceeding to the next step; otherwise tracking fails for this frame and lost-target recapture is performed;

selecting, at and around the fine tracking position P_f, image regions of the same size as the image appearance template T_a, computing the mean absolute difference (MAD) against T_a for each, and taking the position with the greatest similarity as the final tracking position P_t.

In the above method, computing the target-position tracking confidence and correcting it based on the apparent similarity of the target comprises:

computing the peak-to-sidelobe ratio (PSR) of the tracking response map as in formula (1), denoted psr_cur, which reflects the strength of the main peak relative to the sidelobe; in the formula, F_max is the peak response value, and μ_sub and σ_sub are the mean and standard deviation of the sidelobe;

PSR = (F_max - μ_sub) / σ_sub    (1)

computing the ratio of the current frame's peak-to-sidelobe ratio psr_cur to the mean PSR psr_avg of the most recent M consecutively successful frames, which reflects how much the PSR oscillates, and determining the target-position tracking confidence ρ_c of the current frame;

ρ_c = psr_cur / psr_avg    (2)

computing the MAD between the image block in a 5×5 region centered on the final tracking position P_t and the image appearance template T_a, and taking the ratio of the current frame's MAD to the mean MAD of the most recent M consecutively successful frames to obtain the normalized image apparent similarity ρ_a;

taking the weighted average of the target-position tracking confidence ρ_c and the image apparent similarity ρ_a to obtain the corrected current tracking confidence ρ.

In the above method, the multi-frame interval constraint threshold is jointly determined by the inter-frame displacement variation of the most recent N consecutively successful frames of the historical trajectory plus a constant value c. The tracking result is judged correct when the tracking confidence ρ is greater than the threshold thr_a and the displacement between the current frame and the frame N frames earlier is smaller than the multi-frame interval constraint threshold thr_m; otherwise tracking has failed.

In the above method, adaptively updating the tracking region when tracking succeeds comprises:

when the number of consecutive successful tracking frames is less than N, updating the coarse-tracking filter template and the fine-tracking filter template by frame-by-frame weighting to accommodate weak inter-frame differences;

when the number of consecutive successful tracking frames equals N, reinitializing the coarse-tracking filter template, the fine-tracking filter template, and the image appearance template T_a from the current frame's tracking result, to accommodate significant apparent changes of the target;

when the coarse- and fine-tracking filter templates are reinitialized, the sizes of the tracking region and search region are determined considering both the target size and the computation-speed limit, specifically:

the target expansion coefficient is estimated from the object distance d_0, focal length f_0, and shooting angle θ_0 at initial frame 0 and the object distance d_t, focal length f_t, and shooting angle θ_t at the current frame t, determining the target's scale in the current frame; the rough estimate of the expansion coefficient γ is:

[Expansion coefficient γ: equation reproduced as an image in the original publication]

Considering the computation-speed limit, when the short side of the imaged target is less than or equal to 54 pixels, a rectangular box centered on the tracking point with the short side expanded by 10 pixels is selected as the tracking region; when the short side is greater than 54 pixels, a 64×64 region centered on the tracking point is selected as the tracking region. The fine-tracking search region expands the tracking region outward by 1×, the coarse-tracking search region expands it outward by 2×, and the coarse- and fine-tracking filter templates are created accordingly.

In the above method, when tracking fails, lost-target recapture is required to achieve long-term tracking, mainly comprising:

selecting, in order from the tracking cache, the frame with the highest confidence, preparing a filter template according to the current tracking-region size with the search region doubled, and searching again in the current frame;

the tracking cache stores the original frame images, target positions, and confidences of the frames most recently judged successful;

after a successful search, reinitializing the coarse-tracking filter template, the fine-tracking filter template, and the image appearance template in the current frame;

if the search fails, repeating the above search for the next frame; when the target position cannot be reacquired for N consecutive frames, the target is declared lost and the tracking program terminates.

The long-term region tracking method adapting to scene and target changes designed by the present invention has the following advantages:

(1) Fast response: correlation filtering is used for target-position prediction. On an embedded platform, the normal frame-by-frame tracking time of the algorithm is less than 10 ms; when the target is lost and must be recaptured, the processing time is at most 20 ms.

(2) Accurate tracking: compared with existing correlation-filter tracking algorithms, the method combines coarse and fine tracking and uses the tracking response together with historical tracking information to decide when to update. Compared with the existing CSK, KCF, and ECO-HC methods, it yields more accurate tracking results.

(3) Correct judgment: the method combines multiple cues with spatio-temporal information to comprehensively evaluate the tracking result, providing a confidence judgment alongside the predicted target position; its output is more reliable than that of previous methods.

Brief Description of the Drawings

The present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments.

Fig. 1 is the overall framework of the long-term region tracking method adapting to scene and target changes of the present invention;

Fig. 2 is the flow chart of frame-by-frame tracking;

Fig. 3 shows the multi-frame cache management strategy;

Fig. 4 is a schematic diagram of lost-target recapture.

Detailed Description of the Embodiments

As stated above, the present invention proposes a long-term region tracking method that adapts to scene and target changes. Specific embodiments of the invention are described clearly and completely below in conjunction with the accompanying drawings.

This embodiment discloses a long-term region tracking method adapting to scene and target changes; the overall framework is shown in Fig. 1:

After the target to be tracked is obtained in the initial frame, the coarse- and fine-tracking filter templates and the image appearance template T_a are prepared. When a new frame arrives, a candidate search region is selected and the target position is predicted; multiple cues are then combined to compute the confidence of the tracking result, the tracking confidence is comprehensively evaluated, and tracking false alarms are eliminated; finally the tracking model is updated according to the evaluation.
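The per-frame flow can be summarized as glue code. The Python sketch below is purely illustrative: because the patent defines the steps in prose rather than as a fixed API, each step is injected as a callable, and all helper names are placeholders.

```python
from typing import Any, Callable, Tuple

def track_frame(frame: Any, state: Any,
                predict: Callable, correct: Callable, confidence: Callable,
                judge: Callable, update: Callable, recapture: Callable) -> Tuple[Any, bool]:
    # One iteration of the Fig. 2 pipeline (illustrative sketch only).
    pos, response = predict(frame, state)           # Step 1(a): coarse + fine search
    pos = correct(frame, pos, state)                # Step 1(b): center-position correction
    rho = confidence(response, frame, pos, state)   # Step 2: fused tracking confidence
    if judge(pos, rho, state):                      # Step 3: result evaluation
        update(frame, pos, rho, state)              # Step 4: tracking model management
        return pos, True
    return recapture(frame, state)                  # Step 5: lost-target recapture
```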

In this embodiment, the processing flow for each frame is shown in Fig. 2 and comprises:

Step 1: Target position prediction

This step comprises two parts: the coarse-fine tracking search and the center-position correction.

(a) First, a coarse search is performed to obtain the preliminary target position P_c. If the maximum response at this position is greater than the acceptable threshold thr_ρc, fine tracking is performed at this position; otherwise fine tracking is performed directly at the previous frame position P_{t-1}. Both coarse and fine tracking follow the correlation-filter tracking principle: according to the target position in the previous frame, an image block of the same size as the tracking filter template (coarse or fine) is selected at the corresponding position in the current frame as the search region; features are extracted and a cosine window is applied to suppress boundary effects. Let the search-region features be z. As in formula (1), they are combined with the previous frame's target feature model x to obtain the kernel matrix k; then, as in formula (2), the filter parameters a computed in the previous frame are multiplied point-wise with k in the frequency domain and the result is brought back to the time domain by the inverse Fourier transform F⁻¹, yielding the response map y of the correlation-filter operation. Finally, by locating the maximum value F_max of y and its coordinates (px, py), the most probable target position in the current frame is obtained.

k = exp(-( ||x||² + ||z||² - 2·F⁻¹(F(x)* · F(z)) ) / σ²)    (1)

y = F⁻¹(F(a) · F(k))    (2)

In this process both coarse and fine tracking use the correlation-filter method; the difference is that, after the tracking region is selected, the search-region sizes are set differently, with initial values chosen according to the actual situation. To guarantee speed, the coarse-tracking search region is additionally downsampled using nearest-neighbor interpolation.
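As a concrete illustration of formulas (1) and (2), the following NumPy sketch implements the detection step under two assumptions not fixed by the text: single-channel features, and a Gaussian kernel with bandwidth sigma, the common choice in kernelized correlation filters. The trained quantities x (template features) and alpha_f (filter parameters in the frequency domain) are taken as given from the previous frame.

```python
import numpy as np

def gaussian_kernel_correlation(x, z, sigma=0.5):
    # Kernel correlation between template features x and search-region
    # features z, evaluated for all cyclic shifts at once via the FFT
    # (the kernel trick used by kernelized correlation filters).
    xf = np.fft.fft2(x)
    zf = np.fft.fft2(z)
    cross = np.real(np.fft.ifft2(np.conj(xf) * zf))
    dist = np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * cross
    return np.exp(-np.maximum(dist, 0.0) / (sigma ** 2 * x.size))

def predict_position(alpha_f, x, z, sigma=0.5):
    # Formula (2): y = F^-1(F(a) . F(k)); the peak of the response map y
    # is the most probable target position in the current frame.
    k = gaussian_kernel_correlation(x, z, sigma)
    y = np.real(np.fft.ifft2(alpha_f * np.fft.fft2(k)))
    py, px = np.unravel_index(np.argmax(y), y.shape)
    return (px, py), float(y.max())
```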

(b) When the maximum response at the fine tracking position P_f is greater than the acceptable threshold thr_ρc, the center position is corrected at this position using image apparent similarity. Specifically, four candidate points P1, P2, P3, and P4 are selected above, below, left of, and right of the current position P_f. Image regions of the same size as the image appearance template T_a (5×5 in this embodiment) are selected centered on these five points, forming new target appearance images S, S1, S2, S3, and S4; each is then matched against T_a with the grayscale-based MAD (mean absolute difference) image matching algorithm, and the point with the greatest similarity replaces P_f as the tracker's final result P_t. The MAD is computed as in formula (3):

d = (1 / (M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} |S(i,j) - T(i,j)|    (3)

where M = N = 5 and d is the mean absolute difference of pixel values between the image block S and the image appearance template T.
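The center correction can be sketched directly from formula (3). The one-pixel offset of the four neighbouring candidates is an assumption, since the text only says candidates are taken above, below, left of, and right of P_f; the sketch also assumes the candidates lie at least two pixels inside the image border.

```python
import numpy as np

def mad(block, template):
    # Formula (3): mean absolute pixel difference between block S and template T.
    return np.mean(np.abs(block.astype(np.float32) - template.astype(np.float32)))

def correct_center(image, pf, template):
    # Try Pf and its four neighbours; keep the candidate whose 5x5 patch best
    # matches the appearance template (smallest MAD = greatest similarity).
    x, y = pf
    candidates = [(x, y), (x, y - 1), (x, y + 1), (x - 1, y), (x + 1, y)]
    def patch(cx, cy):
        # 5x5 patch centered on (cx, cy); assumes cx, cy >= 2 from the border.
        return image[cy - 2:cy + 3, cx - 2:cx + 3]
    return min(candidates, key=lambda c: mad(patch(*c), template))
```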

Step 2: Confidence correction

This step comprises two parts: spatio-temporal analysis of the tracking filter's own response, and computation of the apparent image similarity in the tracking region.

For the spatio-temporal analysis of the tracker's own response, the peak-to-sidelobe ratio (PSR) of the tracking response map produced by the correlation-filter tracker is computed first and denoted psr_cur. The PSR expresses the strength of the correlation peak. The correlation output g is divided into two parts: the peak, which is the maximum value, and the sidelobe, taken as an 11×11 region centered on the peak. The PSR mainly reflects the ratio of the main peak to the sidelobe; when the sidelobe mean is large or its distribution is uneven, the PSR decreases. It is computed as in formula (4):

PSR = (F_max - μ_sub) / σ_sub    (4)

where F_max is the peak response value and μ_sub and σ_sub are the mean and standard deviation of the sidelobe.

The tracking response map does not change violently during normal tracking; drastic changes occur only when the target is occluded or lost, so historical tracking information can inform the computation of the current confidence. The method normalizes the peak-to-sidelobe ratio by the mean over historically valid frames, as in formula (5), and uses it as the target-position tracking confidence ρ_c. Taking the ratio of the current PSR psr_cur to the historical mean psr_avg as the final filter-confidence index has the advantage of adapting to each test scene, effectively overcoming the poor generalization of a fixed threshold.

ρ_c = psr_cur / psr_avg    (5)

Tracking confidence based on image apparent similarity: the MAD computed earlier is normalized against the mean MAD of the most recent M consecutively successful frames. In this embodiment, the mean MAD d_avg of the previous three frames' tracking results (M = 3) serves as the baseline similarity of the current tracked appearance, and the tracking-point MAD d_cur of the current frame is compared against it as a ratio, as in formula (6).

ρ_a = d_avg / d_cur    (6)

The corrected current tracking confidence ρ is the weighted average of the target-position tracking confidence ρ_c and the image apparent similarity ρ_a, computed as in formula (7).

ρ = 0.5·ρ_c + 0.5·ρ_a    (7)
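The two confidence cues and their fusion (formulas (4) to (7)) come down to a few lines of NumPy. The direction of the MAD normalization, d_avg / d_cur, is inferred from the requirement that both cues grow with tracking quality (a lower MAD means a better match); the original formula is an image, so this direction is an assumption.

```python
import numpy as np

def psr(response, lobe=11):
    # Formula (4): peak-to-sidelobe ratio; the sidelobe is the response map
    # with an 11x11 window around the peak excluded.
    peak = float(response.max())
    py, px = np.unravel_index(np.argmax(response), response.shape)
    half = lobe // 2
    mask = np.ones_like(response, dtype=bool)
    mask[max(py - half, 0):py + half + 1, max(px - half, 0):px + half + 1] = False
    side = response[mask]
    return (peak - side.mean()) / (side.std() + 1e-12)

def fused_confidence(psr_cur, psr_history, d_cur, d_history):
    # Formulas (5)-(7): normalise both cues by the means over the last
    # M successful frames, then take the equally weighted average.
    rho_c = psr_cur / (np.mean(psr_history) + 1e-12)
    rho_a = np.mean(d_history) / (d_cur + 1e-12)   # lower MAD = more similar
    return 0.5 * rho_c + 0.5 * rho_a
```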

Step 3: Tracking result evaluation

The tracking result is evaluated by combining the tracking confidence with the target's historical trajectory information.

The multi-frame interval constraint refers to the motion-displacement constraint of the tracked target across multiple frames: the predicted displacement-deviation threshold between the current frame and a historical frame is derived from the inter-frame displacement variation of the N most recent consecutively successful frames. To realize this strategy, the tracking data of the most recent consecutive frames (N frames; N = 5 in the invention) must be stored. Three cache queues are used; the storage and update strategy is shown in Fig. 3 and described below:

Cache 1: used to compute the deviation distance of the N frames currently being processed; it stores the tracking-trajectory information of the most recent N frames. Update strategy: when the current frame finishes processing, its result is stored in the cache and the oldest frame's result is deleted;

Cache 2: used to hold the N frames of the stage preceding the current one and to update Cache 3. Update strategy: when Cache 1 first becomes full, Cache 2 is initialized; thereafter, each time Cache 1 is updated, the oldest value in Cache 1 is pushed into Cache 2 and the oldest value in Cache 2 is removed;

Cache 3: used to compute the most recent historical prediction; it stores the most recent N consecutively successful frames. Update strategy: the N consecutively successful frames are copied from Cache 2; the average inter-frame displacement is computed once and can be reused for the prediction of subsequent frames; the cache is cleared after the average inter-frame displacement is computed.

The final judgment strategy based on tracking confidence and the multi-frame displacement constraint is: the tracking result is judged correct only when the tracking confidence ρ is greater than the acceptable threshold thr_a (set to 0.15) and the displacement between the current frame and the frame N frames earlier (for frame t, the displacement between frames t and t-N) is smaller than the acceptable multi-frame interval constraint threshold thr_m; otherwise tracking has failed.
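One way to realize the step-3 gate is sketched below. For clarity it condenses the three caches of Fig. 3 into a single queue of the positions of the last N consecutively successful frames, and the exact form of thr_m (mean inter-frame step projected over N frames, plus the constant slack c) is an interpretation of the description above; the value of c is scene-dependent and not fixed by the patent.

```python
import math
from collections import deque

class MotionGate:
    # Simplified stand-in for the three-queue scheme of Fig. 3 (N = 5).
    def __init__(self, n=5, c=10.0):
        self.n, self.c = n, c
        self.history = deque(maxlen=n)   # positions of last N successful frames

    def threshold(self):
        # thr_m: mean inter-frame step of the last N successes, projected
        # over N frames, plus the constant slack c.
        pts = list(self.history)
        steps = [math.dist(a, b) for a, b in zip(pts, pts[1:])]
        mean_step = sum(steps) / max(len(steps), 1)
        return mean_step * self.n + self.c

    def accept(self, pos, rho, thr_a=0.15):
        # Step-3 rule: confidence AND displacement must both pass.
        ok = rho > thr_a
        if ok and len(self.history) == self.n:
            ok = math.dist(pos, self.history[0]) < self.threshold()
        if ok:
            self.history.append(pos)    # keep only consecutive successes
        return ok
```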

Step 4: Tracking model management

When the frame is judged to have been tracked successfully, tracking-model management is performed. The invention updates the filter templates by combining frame-by-frame weighted updating with staged restart preparation, adapting to changes in target scale and appearance. Specifically:

First, the number of consecutive successful tracks is counted, and whether to actively restart is decided from this count.

If the number of consecutive successes is less than N (N = 5 in this embodiment), a frame-by-frame weighted template update is used to accommodate the weak inter-frame differences of short-term small scenes. However, updating with every frame's result, or every other frame, is risky: when the target is occluded, or the tracker is already following poorly, updating the model only makes the tracker less able to recognize the target. The invention therefore updates the tracking model only when step 3 judges tracking successful and the tracking confidence ρ is greater than the threshold thr_l (thr_a < thr_l < 1; thr_l is set to 0.3), preventing the target model from being polluted, reducing model drift and the number of updates, and improving speed. The frame-by-frame model update is computed as in formulas (8) and (9).

F(x)_t = (1 - r)·F(x)_{t-1} + r·F(x)_t    (8)

F(a)_t = (1 - r)·F(a)_{t-1} + r·F(a)_t    (9)

where F(·) is the Fourier transform, x the extracted target and background features, a the filter parameters, and r the update rate. A larger r gives the current frame more weight but makes it easy for the template to be polluted by the current frame, so the update rate is adapted to the tracking confidence: when the current confidence ρ is low (thr_l < ρ < thr_h, with thr_h = 0.7) the update rate is 0.035; when it is high (thr_h ≤ ρ ≤ 1) the update rate is 0.085.
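The gated, rate-switched update of formulas (8) and (9) then reduces to a few lines, with xf_* and af_* denoting the Fourier-domain feature model and filter parameters and the thresholds and rates taken from the values given above:

```python
def update_model(xf_prev, af_prev, xf_cur, af_cur, rho,
                 thr_l=0.3, thr_h=0.7, r_low=0.035, r_high=0.085):
    # Formulas (8)-(9) with the confidence gate: skip the update entirely
    # when rho <= thr_l, otherwise blend with a confidence-dependent rate.
    if rho <= thr_l:
        return xf_prev, af_prev
    r = r_low if rho < thr_h else r_high
    return (1 - r) * xf_prev + r * xf_cur, (1 - r) * af_prev + r * af_cur
```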

If the number of consecutive successes equals N (N = 5 in this embodiment), then to accommodate the large apparent changes of long durations and large scenes, the invention retrains the filter at the current frame, reinitializing the tracking parameters, the tracking filter templates, and the target appearance template. When preparing the tracking filter templates, the tracking region and search region are determined considering both the target size and the computation-speed limit. The target expansion coefficient is estimated from the object distance d_0, focal length f_0, and shooting angle θ_0 at initial frame 0 and the object distance d_t, focal length f_t, and shooting angle θ_t at the current frame t, determining the target's scale in the current frame; the rough estimate of the expansion coefficient γ is:

[Expansion coefficient γ: equation reproduced as an image in the original publication]

Next, considering the computation-speed limit: when the imaged target is small (short side less than or equal to 54 pixels), a rectangular box centered on the tracking point with the short side expanded by 10 pixels is selected as the tracking region; when the target becomes larger (short side greater than 54 pixels), a fixed 64×64 region centered on the tracking point is used. The fine- and coarse-tracking search regions expand the tracking region outward by 1× and 2× respectively, and the fine- and coarse-tracking filter templates are created accordingly.

The target appearance template is the 5×5 image region centered on the tracking point.
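The region-sizing rule can be written out directly. Reading "expand outward by 1× / 2×" as doubling and tripling the window side is an assumption consistent with the claim wording:

```python
def region_sizes(target_short_side: int) -> dict:
    # Sizing rule from step 4: pad small targets (short side <= 54 px) by
    # 10 px; clamp larger targets to a fixed 64x64 tracking window.
    track = target_short_side + 10 if target_short_side <= 54 else 64
    return {
        "tracking": track,           # tracking-region side length
        "fine_search": track * 2,    # fine search: expanded outward by 1x (assumed doubled)
        "coarse_search": track * 3,  # coarse search: expanded outward by 2x (assumed tripled)
    }
```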

Step 5: Lost-target recapture

When the frame is judged to have failed tracking, lost-target recapture is needed. First, decide whether to stop tracking altogether, i.e., whether tracking has already failed for several consecutive frames; if it has failed for N = 5 consecutive frames, the tracking process ends. Otherwise, historical successful tracking information is used to restart tracking in the failed state and recapture the lost target. Specifically:

the frame with the highest confidence is selected in turn from the tracking cache; a tracking filter template is prepared according to the current tracking-region size with the search region doubled, and the current frame is searched again;

the confidence ρ and the multi-frame interval displacement are computed for the search result, and the tracking result is evaluated using the strategy of step 3; if the search succeeds, the coarse- and fine-tracking filter templates and the image appearance template are reinitialized in the current frame according to the tracking-region and search-region settings of step 4;

if the search fails, processing moves on to the next frame.
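Putting step 5 together gives the following sketch. The tracker methods build_template, search, judge, and reinitialize, and the cache-entry fields, are hypothetical stand-ins for the operations of steps 1 to 4, not names defined by the patent:

```python
def recapture(tracker, cache, frame, fail_count, n_max=5):
    # Step 5 sketch: try the highest-confidence cached frames first;
    # give up after N consecutive failed frames (N = 5).
    if fail_count >= n_max:
        return None, fail_count                      # declare the target lost
    for entry in sorted(cache, key=lambda e: e.confidence, reverse=True):
        template = tracker.build_template(entry.image, entry.position,
                                          search_scale=2.0)  # doubled search area
        result = tracker.search(frame, template)
        if tracker.judge(result):                    # step-3 confidence + motion gate
            tracker.reinitialize(frame, result.position)     # step-4 region sizing
            return result.position, 0
    return None, fail_count + 1                      # try again on the next frame
```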

The long-term target tracking method with adaptive template updating provided by the present invention has been described in detail above, but the specific implementation of the invention is obviously not limited to this. For those of ordinary skill in the art, various obvious changes made without departing from the scope of the claims of the invention fall within the protection scope of the invention.

Claims (1)

1. A long-term region tracking method adapting to scene and target changes, characterized by comprising the following steps:
tracking the target position in images collected by a UAV monitoring system or high-altitude observation system based on a three-stage combination of coarse tracking, fine tracking, and center-position correction;
computing a target-position tracking confidence, correcting it based on the apparent similarity of the target, and comprehensively judging the corrected tracking result in combination with a multi-frame interval constraint;
according to the comprehensive judgment, adaptively updating the tracking model when tracking succeeds and recapturing the target when tracking fails;
wherein adaptively updating the tracking model when tracking succeeds comprises:
when the number of consecutive successful tracking frames is less than N, updating the coarse-tracking filter template and the fine-tracking filter template by frame-by-frame weighting to accommodate weak inter-frame differences;
when the number of consecutive successful tracking frames equals N, reinitializing the coarse-tracking filter template, the fine-tracking filter template, and the image appearance template T_a from the current frame's tracking result, to accommodate significant apparent changes of the target;
wherein determining the sizes of the tracking region and search region when the coarse- and fine-tracking filter templates are reinitialized considers both the target size and the computation-speed limit, specifically comprising:
estimating the target expansion coefficient from the object distance d_0, focal length f_0, and shooting angle θ_0 at initial frame 0 and the object distance d_t, focal length f_t, and shooting angle θ_t at current frame t, and determining the target's scale in the current frame, the rough estimate of the expansion coefficient γ being:
[Expansion coefficient γ: equation reproduced as an image in the original publication]
considering the computation-speed limit, when the short side of the imaged target is less than or equal to 54 pixels, selecting as the tracking region a rectangular box centered on the tracking point with the short side expanded by 10 pixels; when the short side is greater than 54 pixels, selecting as the tracking region a 64×64 region centered on the tracking point; expanding the tracking region outward by 1× for the fine-tracking search region and by 2× for the coarse-tracking search region, and creating the fine- and coarse-tracking filter templates respectively;
wherein, when tracking fails, lost-target recapture is required to achieve long-term tracking, mainly comprising:
selecting in turn from the tracking cache the frame with the highest confidence, preparing a filter template according to the current tracking-region size with the search region doubled, and searching again in the current frame;
the tracking cache storing the original frame images, target positions, and confidences of the frames most recently judged successful;
after a successful search, reinitializing the coarse-tracking filter template, the fine-tracking filter template, and the image appearance template in the current frame;
if the search fails, repeating the method for the next frame; when the target position cannot be reacquired for N consecutive frames, declaring the target lost and terminating the target tracking program;
wherein tracking the target position in images collected by the UAV monitoring system or high-altitude observation system based on the three-stage combination of coarse tracking, fine tracking, and center-position correction comprises:
according to the target position P_{t-1} of the previous frame, selecting at the corresponding position of the current frame an image block of the same size as the coarse-tracking filter template as the coarse search region and performing a coarse search to obtain a preliminary position estimate P_c of the target; if the value of the current tracking response map at this position is higher than the threshold thr_ρc, setting the fine-search center to this point, otherwise still using P_{t-1} as the fine-search center of the current frame;
selecting, at the determined fine-search center, an image block of the same size as the previous frame's fine-tracking filter template as the fine search region and performing a fine search to obtain the fine tracking position P_f; if the value of the current tracking response map at this position is higher than the threshold thr_ρc, accepting the fine search result and proceeding to the next step; otherwise tracking of the frame fails and lost-target recapture is performed;
selecting, at and around the fine tracking position P_f, image regions of the same size as the image appearance template T_a, performing the mean-absolute-difference algorithm MAD against T_a for each, and taking the position with the greatest similarity as the final tracking position P_t;
wherein computing the target-position tracking confidence and correcting it based on the target's apparent similarity comprises:
computing the peak-to-sidelobe ratio PSR of the tracking response map as in formula (1), denoted psr_cur, reflecting the strength of the main peak relative to the sidelobe, where F_max is the peak response value and μ_sub and σ_sub are the mean and standard deviation of the sidelobe;
PSR = (F_max - μ_sub) / σ_sub    (1)
computing the ratio of the current frame's peak-to-sidelobe ratio psr_cur to the mean PSR psr_avg of the most recent M consecutively successful frames, reflecting the oscillation of the PSR, and determining the target-position tracking confidence ρ_c of the current frame;
ρ_c = psr_cur / psr_avg
computing the MAD between the image block in a 5×5 region centered on the final tracking position P_t and the image appearance template T_a, and taking the ratio of the current frame's MAD to the mean MAD of the most recent M consecutively successful frames to obtain the normalized image apparent similarity ρ_a;
taking the weighted average of the target-position tracking confidence ρ_c and the image apparent similarity ρ_a to obtain the corrected current tracking confidence ρ;
wherein the multi-frame interval constraint threshold is determined by the inter-frame displacement variation of the most recent N consecutively successful frames of the historical trajectory plus a constant value c; when the tracking confidence ρ is greater than the threshold thr_a and the displacement between the current frame and the frame N frames earlier is smaller than the multi-frame interval constraint threshold thr_m, the tracking result is judged correct; otherwise tracking fails.
CN202111573298.1A 2021-12-21 2021-12-21 A Long-term Region Tracking Method Adapting to Scene and Object Variations Active CN114241008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111573298.1A CN114241008B (en) 2021-12-21 2021-12-21 A Long-term Region Tracking Method Adapting to Scene and Object Variations

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111573298.1A CN114241008B (en) 2021-12-21 2021-12-21 A Long-term Region Tracking Method Adapting to Scene and Object Variations

Publications (2)

Publication Number Publication Date
CN114241008A CN114241008A (en) 2022-03-25
CN114241008B (en) 2023-03-07

Family

ID=80760614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111573298.1A Active CN114241008B (en) 2021-12-21 2021-12-21 A Long-term Region Tracking Method Adapting to Scene and Object Variations

Country Status (1)

Country Link
CN (1) CN114241008B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663462A (en) * 2022-04-07 2022-06-24 北京远度互联科技有限公司 Target tracking method and device, electronic equipment and storage medium
CN116958197A (en) * 2023-06-16 2023-10-27 虹软科技股份有限公司 Target tracking method, device, computer storage medium and terminal
CN116563348B (en) * 2023-07-06 2023-11-14 中国科学院国家空间科学中心 Multi-modal tracking method and system for weak and small infrared targets based on dual feature templates

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335293A (en) * 2019-07-12 2019-10-15 东北大学 A Long-term Target Tracking Method Based on TLD Framework
CN111091583A (en) * 2019-11-22 2020-05-01 中国科学技术大学 Long-term target tracking method
CN111508002A (en) * 2020-04-20 2020-08-07 北京理工大学 A small low-flying target visual detection and tracking system and method thereof
CN113327272A (en) * 2021-05-28 2021-08-31 北京理工大学重庆创新中心 Robustness long-time tracking method based on correlation filtering

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578034A (en) * 2015-12-10 2016-05-11 深圳市道通智能航空技术有限公司 Control method, control device and system for carrying out tracking shooting for object

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Ge Baoyi et al., "Long-term target tracking algorithm based on feature fusion," Acta Optica Sinica, vol. 38, no. 11 (27 June 2018), pp. 1-13. *
Liu Wei et al., "Long-term target tracking with spatio-temporal context learning," Acta Optica Sinica, vol. 36, no. 1 (10 January 2016), pp. 1-8. *
Zhang Jing et al., "Anti-occlusion tracking algorithm with spatio-temporal context adapting to target changes," Computer Engineering & Science, vol. 40, no. 9 (15 September 2018), pp. 1653-1661. *

Also Published As

Publication number Publication date
CN114241008A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN114241008B (en) A Long-term Region Tracking Method Adapting to Scene and Object Variations
CN111127518B (en) Target tracking method and device based on unmanned aerial vehicle
WO2021196294A1 (en) Cross-video person location tracking method and system, and device
CN102456225B (en) Video monitoring system and moving target detecting and tracking method thereof
CN107255468B (en) Method for tracking target, target following equipment and computer storage medium
CN110827321B (en) Multi-camera collaborative active target tracking method based on three-dimensional information
CN115131420B (en) Visual SLAM method and device based on keyframe optimization
CN113379801B (en) High-altitude parabolic monitoring and positioning method based on machine vision
CN110570453B (en) A visual odometry method for closed-loop feature tracking based on binocular vision
CN112991391A (en) Vehicle detection and tracking method based on radar signal and vision fusion
WO2017185503A1 (en) Target tracking method and apparatus
CN103079037B (en) Self-adaptive electronic image stabilization method based on long-range view and close-range view switching
CN106683121A (en) Robust object tracking method in fusion detection process
US11195297B2 (en) Method and system for visual localization based on dual dome cameras
CN110021034A (en) A kind of tracking recording broadcasting method and system based on head and shoulder detection
CN117036404B (en) A monocular thermal imaging simultaneous positioning and mapping method and system
CN111322993A (en) Visual positioning method and device
JP2020149642A (en) Object tracking device and object tracking method
CN111091582A (en) Single-vision target tracking algorithm and system based on deep neural network
CN110033472A (en) A kind of stable objects tracking under the infrared ground environment of complexity
CN116645396A (en) Track determination method, track determination device, computer-readable storage medium and electronic device
CN109978908B (en) A Single Target Fast Tracking and Localization Method Adapting to Large-Scale Deformation
CN113240749B (en) A long-distance dual target determination and ranging method for UAV recovery on offshore ship platforms
CN114708300A (en) Anti-blocking self-adaptive target tracking method and system
CN110009663A (en) A target tracking method, apparatus, device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant