CN107844739A - Robust target tracking method based on adaptive simultaneous sparse representation - Google Patents


Info

Publication number
CN107844739A
CN107844739A (application CN201710625586.4A)
Authority
CN
China
Prior art keywords
tracking
template
target
model
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710625586.4A
Other languages
Chinese (zh)
Other versions
CN107844739B (en)
Inventor
樊庆宇
李厚彪
羊恺
王梦云
陈鑫
李滚
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201710625586.4A
Publication of CN107844739A
Application granted
Publication of CN107844739B
Status: Expired - Fee Related


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42: Higher-level, semantic clustering, classification or understanding of video scenes, of sport video content
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques with a fixed number of clusters, e.g. K-means clustering
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251: Analysis of motion using feature-based methods, involving models


Abstract

The present invention provides a robust target tracking method based on adaptive simultaneous sparse representation, comprising the following steps: S1, adaptively establishing a simultaneous sparse tracking model according to the magnitude of the Laplacian noise energy; S2, solving the established tracking model; S3, updating the template. The tracking method provided by the invention gives good tracking and recognition results, is strongly resistant to interference, achieves accurate real-time target tracking, and is comparatively stable.

Description

Robust Target Tracking Method Based on Adaptive Simultaneous Sparse Representation

Technical Field

The invention belongs to the technical field of computer image processing and relates to a target tracking method, and more particularly to a robust target tracking method based on adaptive simultaneous sparse representation.

Background Art

Target tracking occupies an important position in the field of computer vision; the availability of high-quality computers and cameras and the need for automatic video analysis have generated strong interest in it. The main tasks of target tracking include detection of moving targets of interest, continuous frame-to-frame tracking, and behavior analysis of the tracked targets. Current applications of target tracking include motion recognition, video retrieval, human-computer interaction, traffic monitoring, and vehicle navigation.

Although many tracking algorithms have been proposed, target tracking still faces many challenges. In practice, tracking is often disturbed by video noise and by problems such as target pose changes, illumination changes, background clutter, and partial or complete occlusion, which frequently cause tracking failure (tracking drift). Long-term occlusion in particular is catastrophic for target tracking. The effect of illumination change depends on the target's environment: the larger the change, the greater its impact on tracking. Background clutter and pose changes degrade tracking accuracy and can likewise cause drift, while video noise makes tracking errors accumulate and eventually leads to failure.

For target tracking, handling changes in target appearance is a fundamental and challenging problem. Appearance changes can be divided into intrinsic and extrinsic changes: pose change is an intrinsic change, while illumination change, background clutter, partial occlusion, and full occlusion are extrinsic changes. Handling these appearance changes requires an adaptive tracking method, i.e., an online learning method. Current online learning methods fall roughly into two categories: generative approaches (GA) and discriminative approaches (DA). A generative approach searches for the region most similar to the tracked target, while a discriminative approach can be viewed as a binary classification problem whose main purpose is to train a classifier from known training samples to distinguish the target from the background. Although both achieve reliable tracking to some extent, each has its own shortcomings. First, the discriminative approach places high demands on feature extraction and is therefore sensitive to noise during tracking, so it may fail on heavily noise-corrupted targets, while the generative approach cannot accurately locate regions similar to the target against a cluttered background and is thus prone to tracking failure. Second, the discriminative approach requires a sufficiently large training set: good samples improve classifier performance while bad samples weaken it, and introducing bad samples into the classifier degrades tracking; the generative approach, in turn, is sensitive to the template, and mistakenly introducing an occluded target into the template can cause tracking failure. Neither method is therefore sufficiently robust in real-world tracking scenarios.

Summary of the Invention

The purpose of the present invention is to solve the above problems and to provide a robust target tracking method that can cope with challenges such as illumination change, scale change, occlusion, deformation, motion blur, fast motion, rotation, background clutter, and low resolution of the target object in a video, and that can track the target object continuously and accurately.

This object is achieved by a robust target tracking method based on adaptive simultaneous sparse representation, comprising the following steps:

S1. Adaptively establish a simultaneous sparse tracking model according to the magnitude of the Laplacian noise energy;

S2. Solve the established tracking model;

S3. Update the template.

Further, step S1 is executed as follows: compare the Laplacian noise energy ||S||2 with a given noise energy threshold τ, and adaptively establish the simultaneous sparse tracking model according to the comparison result:

When ||S||2 ≤ τ, the simultaneous sparse tracking model is:

min_{X≥0} (1/2)||Y − TX||_F^2 + λ1||X||_{1,1}  (1)

When ||S||2 > τ, the simultaneous sparse tracking model is:

min_{X≥0,S} (1/2)||Y − TX − S||_F^2 + λ1||X||_{1,1} + λ2||S||_{1,1}  (2)

where D is the tracking template, expressed as D = [T, I] with I the trivial template; T is the target template, T = [T1, T2, …, Tn] ∈ R^{d×n} (d ≫ n), where d is the image dimension and n is the number of template basis vectors, and each column of T is zero-mean normalized; Y is the candidate target set; λ1 and λ2 are the regularization parameters of the model; X is the sparse coefficient matrix; S is the Laplacian noise; (1/2)||Y − TX − S||_F^2 is the fidelity term of the sparse model; ||X||_{1,1} is the sparsity penalty on the coefficient matrix; and ||S||_{1,1} characterizes the Laplacian noise energy.

Further, step S2 comprises:

S21. Calculate the similarity between the tracked target yt and the mean of the template T, denoted sim;

S22. Compare sim with the cosine angle threshold α and adaptively select the model used for tracking:

When sim < α, the tracking model of formula (1) is adopted and solved by the alternating direction method of multipliers (ADMM) to obtain the sparse coefficients X;

When sim ≥ α, the tracking model of formula (2) is adopted and solved by ADMM to obtain the sparse coefficients X and the Laplacian noise S.

Further, in step S21 the similarity between the tracked target and the template mean is calculated according to the following formula:

where c is the mean of the template T and y is the tracked target; the quantity can be taken as the arccosine of the normalized inner product of the target and the template mean, i.e., the cosine angle between them.

Further, when ||S||2 ≤ τ, the template T is updated.

Further, the template update method comprises:

S31. Perform singular value decomposition on the current template T and the tracked target y, respectively:

T = USV^T, y = usv^T  (4)

S32. Use the singular vectors u, s, v to incrementally update U, S, V, obtaining new singular vectors U*, S*, V*; the new template is then expressed as:

T* = U*S*V*^T  (5)

Further, step S33 is also included: train the template with the unsupervised K-means method. Given k initial classes, the K-means objective is:

J = Σ_i Σ_k r_ik ||x_i − u_k||^2  (6)

where i indexes the i-th sample x_i; r_ik = 1 when x_i belongs to class k and r_ik = 0 otherwise; u_k is the mean of all samples belonging to class k; and J is the sum of distances from the sample points to their class means. Formula (6) yields the mean of all samples of each class, reducing the dimension of the original template. The updated template is thus:

Tnew = [u1, u2, …, uk]  (7)

The present invention also provides a fast target tracking method with adaptive sparse representation, comprising the following steps:

A. Establish a target tracking model;

B. Use an alternating iterative method to obtain the optimal tracked target;

C. Update the target template T to obtain a new template T*;

D. Return the best tracked target and the target template set, and continue with target tracking in the next frame.

Preferably, step C comprises:

C1. Calculate the similarity between the tracked target and the template mean, denoted sim;

C2. Compare the similarity sim with the cosine angle thresholds α and β. If sim < α, go to step C3 and update the template. If α ≤ sim ≤ β, set y ← m and go to step C3 to update the template. If sim > β, the tracked target is essentially dissimilar to the template, indicating that the target suffers severe noise pollution, and the template is not updated;

C3. Perform singular value decomposition of the template T to obtain T = UΣV^T, incrementally update the left singular vectors U of the template T to obtain new singular vectors U*, and compute the new template by combining formulas (5) and (6).

Compared with the prior art, the beneficial effects of the present invention are as follows:

(1) The present invention proposes a new template selection and update method within the sparse representation framework. The method emphasizes real-time template updating; since what is updated in real time is the left singular vectors of the template, the error introduced by updating can be kept very low. Instead of denoising the target and then introducing it as a new template, the template update of the present invention also incorporates K-means, which reduces the template dimension, effectively removes redundant template vectors, improves the real-time performance of tracking, and weakens the influence of noise. Traditional template update methods, by contrast, easily introduce large noise errors, which creates considerable uncertainty for tracking the target in the next frame;

(2) The target tracking model of the present invention fully considers the influence of Gaussian noise and Laplacian noise; the regularization term of the model is selected adaptively according to the magnitude of the Laplacian noise energy, which improves both the accuracy and the real-time performance of tracking;

(3) The present invention incorporates the ADMM algorithm into the solution of the tracking model; by controlling the regularization parameters, the solution of the model becomes more stable.

Detailed Description of the Embodiments

The robust target tracking method based on adaptive simultaneous sparse representation of the present invention is further described below in conjunction with specific embodiments.

The robust target tracking method based on adaptive simultaneous sparse representation comprises the following steps.

S1. Adaptively establish a simultaneous sparse tracking model according to the magnitude of the Laplacian noise energy

Compare the Laplacian noise energy ||S||2 with a given noise energy threshold τ, and adaptively establish the simultaneous sparse tracking model according to the comparison result:

When ||S||2 ≤ τ, the simultaneous sparse tracking model is:

min_{X≥0} (1/2)||Y − TX||_F^2 + λ1||X||_{1,1}  (1)

When ||S||2 > τ, the simultaneous sparse tracking model is:

min_{X≥0,S} (1/2)||Y − TX − S||_F^2 + λ1||X||_{1,1} + λ2||S||_{1,1}  (2)

where D = [T, I] denotes the tracking template, I denotes the trivial template, and the image set of the given target template is

T = [T1, T2, …, Tn] ∈ R^{d×n} (d ≫ n),

where d is the image dimension, n is the number of template basis vectors, and each column of T is zero-mean normalized. Y = [y1, y2, …, ym] denotes all candidate targets. Assuming the noise follows a mixed Gaussian-Laplacian distribution, then:

Y = TZ + S + E  (9)

where S denotes the Laplacian noise, E the Gaussian noise, X the sparse coefficients, and λ1 and λ2 the regularization parameters of the model; (1/2)||Y − TX − S||_F^2 is the fidelity term of the sparse model, modified in this way once the Laplacian noise is taken into account; ||X||_{1,1} is the penalty on the coefficient matrix, which better extracts the similarity between particles and effectively removes redundant template information; and ||S||_{1,1} characterizes the Laplacian noise energy.
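The adaptive choice in step S1 can be illustrated with a short sketch. This is a simplification with synthetic data: the threshold value and the reading of ||S||2 as a matrix 2-norm are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def select_model(S, tau):
    """Choose between the two simultaneous sparse tracking models by the
    energy of the Laplacian noise estimate S.

    Model (1): 0.5*||Y - T X||_F^2 + lam1*||X||_{1,1}        (Gaussian only)
    Model (2): 0.5*||Y - T X - S||_F^2 + lam1*||X||_{1,1}
               + lam2*||S||_{1,1}                            (Laplacian term added)
    """
    noise_energy = np.linalg.norm(S, 2)  # ||S||2; matrix 2-norm assumed here
    return 1 if noise_energy <= tau else 2

# toy check with synthetic noise estimates
assert select_model(0.01 * np.ones((4, 4)), tau=1.0) == 1  # weak noise -> model (1)
assert select_model(10.0 * np.ones((4, 4)), tau=1.0) == 2  # strong noise -> model (2)
```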

Although existing sparse representation target tracking algorithms handle partial occlusion, illumination change, pose change, and background clutter to a certain extent, their models are all built on the assumption that the noise follows a Gaussian distribution, which treats the noise distribution too simplistically, so tracking may fail under more complex noise. The target tracking model of the present invention fully considers the influence of Gaussian and Laplacian noise, and the regularization term of the model is selected adaptively according to the magnitude of the Laplacian noise energy, which improves both the accuracy and the real-time performance of tracking.

S2. Solve the established tracking model

Preferably, step S2 comprises:

S21. Calculate the similarity between the tracked target yt and the template mean, denoted sim;

Preferably, the similarity between the tracked target and the template mean is calculated according to the following formula:

S22. Compare sim with the cosine angle threshold α and adaptively select the model used for tracking:

When sim < α, the tracking model of formula (1) is adopted and solved by ADMM to obtain the sparse coefficients X;

When sim ≥ α, the tracking model of formula (2) is adopted and solved by ADMM to obtain the sparse coefficients X and the noise S.
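Since formula (3) is not reproduced in the text, the sketch below assumes that sim is the angle (arccosine of the normalized inner product) between the tracked target y and the template mean c, which matches the "cosine angle" wording; the threshold value and template data are illustrative.

```python
import numpy as np

def cosine_angle(y, T):
    """Angle in degrees between the tracked target y and the template mean c
    (assumed form of formula (3), which is not reproduced in the text)."""
    c = T.mean(axis=1)
    cos_sim = (y @ c) / (np.linalg.norm(y) * np.linalg.norm(c))
    return float(np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0))))

def pick_model(sim, alpha):
    # sim < alpha: target still close to the template -> model (1);
    # otherwise assume heavy Laplacian noise -> model (2)
    return 1 if sim < alpha else 2

T = np.array([[1.0, 0.9],
              [0.0, 0.1]])          # two template columns
y_close = np.array([1.0, 0.05])     # target similar to the template mean
assert pick_model(cosine_angle(y_close, T), alpha=20.0) == 1
y_far = np.array([0.0, 1.0])        # target nearly orthogonal to the mean
assert pick_model(cosine_angle(y_far, T), alpha=20.0) == 2
```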

To explain the solution of the established tracking model more clearly, the solution of formula (2) is illustrated below:

The objective functions of formulas (1) and (2) are convex optimization problems and can therefore be solved with unconstrained optimization methods. The alternating direction method of multipliers (ADMM) is a classical method of this kind with stable solutions and fast convergence. Solving optimization problem (2) with ADMM proceeds as follows:

First, convert the constrained problem into the unconstrained problem

min_{X,S} (1/2)||Y − TX − S||_F^2 + λ1||X||_{1,1} + λ2||S||_{1,1} + Σ_i τ+(x_i)

where τ+(·) is an indicator function (x_i denotes the i-th row of X; τ+(x_i) equals 0 if x_i is non-negative, and +∞ otherwise). Optimization problem (2) therefore has the following equivalent form:

min_{V1,V2,V3,X,S} (1/2)||Y − V1 − S||_F^2 + λ1||V2||_{1,1} + λ2||S||_{1,1} + Σ_i τ+(v3,i)  s.t.  V1 = TX, V2 = X, V3 = X  (11)

where V1, V2, and V3 are auxiliary splitting variables; formula (11) can be rewritten in a compact stacked form.

The augmented Lagrangian function of formula (11) is

L(V, X, S, U) = (1/2)||Y − V1 − S||_F^2 + λ1||V2||_{1,1} + λ2||S||_{1,1} + Σ_i τ+(v3,i) + (β/2)(||V1 − TX + U1||_F^2 + ||V2 − X + U2||_F^2 + ||V3 − X + U3||_F^2)  (15)

where β denotes the penalty parameter of the augmented Lagrangian and U = [U1, U2, U3]^T collects the (scaled) Lagrange multipliers.

Formula (15) can be decomposed into three sub-optimization problems: the subproblem in X, the subproblem in S, and the subproblem in V. These subproblems are solved as follows:

According to the extremum principle, taking the first-order derivative of each subproblem yields the optimal solution of equation (15) as follows:

V1* = [β(TX − U1) + (Y − S)]/(1 + β)

V2* = shrink(X − U2, λ1/β)

V3* = max(0, X − U3)

S* = shrink(Y − V1, λ2)

X* = (T^T T + 2I)^{-1}[T^T(V1 + U1) + V2 + U2 + V3 + U3]

where shrink is a shrinkage (soft-thresholding) operator; for a non-negative vector p,

shrink(x, p) = sgn(x) ⊙ max{|x| − p, 0}.

Similarly, formula (1) can also be solved with ADMM, yielding solutions of the following form:

V1* = [β(TX − U1) + Y]/(1 + β)

V2* = shrink(X − U2, λ1/β)

V3* = max(0, X − U3)

X* = (T^T T + 2I)^{-1}[T^T(V1 + U1) + V2 + U2 + V3 + U3]

Through this analysis and solution of the subproblems, the general forms of the solutions of formulas (1) and (2) are obtained. Given a penalty parameter β, the optimal numerical solutions of formulas (1) and (2) can be obtained by alternating-direction iteration. Table 1 lists the ADMM algorithm for formula (1), and Table 2 the ADMM algorithm for formula (2).

Table 1

Table 2
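Assembling the closed-form updates above gives a compact solver for model (2). The sketch below is a simplified stand-in for the algorithm of Table 2 (not reproduced here): zero initialization, a fixed iteration count, and scaled dual updates inferred from the V-updates are all assumptions.

```python
import numpy as np

def shrink(x, p):
    """Soft-thresholding operator: sgn(x) * max(|x| - p, 0), elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - p, 0.0)

def admm_model2(Y, T, lam1=0.1, lam2=0.1, beta=0.1, loops=20):
    """ADMM iteration for model (2) using the closed-form updates above."""
    n, m = T.shape[1], Y.shape[1]
    X = np.zeros((n, m)); S = np.zeros_like(Y)
    U1 = np.zeros_like(Y); U2 = np.zeros((n, m)); U3 = np.zeros((n, m))
    A = np.linalg.inv(T.T @ T + 2.0 * np.eye(n))      # (T^T T + 2I)^{-1}
    for _ in range(loops):
        V1 = (beta * (T @ X - U1) + (Y - S)) / (1.0 + beta)
        V2 = shrink(X - U2, lam1 / beta)
        V3 = np.maximum(0.0, X - U3)
        S = shrink(Y - V1, lam2)
        X = A @ (T.T @ (V1 + U1) + V2 + U2 + V3 + U3)
        U1 += V1 - T @ X                              # scaled dual ascent (assumed)
        U2 += V2 - X
        U3 += V3 - X
    return X, S

rng = np.random.RandomState(0)
X, S = admm_model2(Y=rng.rand(6, 3), T=rng.rand(6, 4))
assert X.shape == (4, 3) and S.shape == (6, 3)
```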

S3. Update the template

As a preferred scheme, when ||S||2 ≤ τ, the template T is updated.

Further, the template update method comprises:

S31. Perform singular value decomposition on the current template T and the tracked target y, respectively:

T = USV^T

y = usv^T  (4)

S32. Use the singular vectors u, s, v to incrementally update U, S, V, obtaining new singular vectors U*, S*, V*; the new template is then expressed as:

T* = U*S*V*^T  (5)

Preferably, step S33 is also included: train the template with the unsupervised K-means method. Given k initial classes, the K-means objective is:

J = Σ_i Σ_k r_ik ||x_i − u_k||^2  (6)

where i indexes the i-th sample x_i; r_ik = 1 when x_i belongs to class k and r_ik = 0 otherwise; u_k is the mean of all samples belonging to class k; and J is the sum of distances from the sample points to their class means. Formula (6) yields the mean of all samples of each class, reducing the dimension of the original template. The updated template is thus:

Tnew = [u1, u2, …, uk]  (7)
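Steps S31-S33 can be sketched as follows. Two simplifications are assumed for illustration: the incremental SVD is replaced by a full SVD of the augmented matrix [T, y], and the K-means compression of the template columns is written out by hand with a fixed seed and iteration count.

```python
import numpy as np

def update_template(T, y, k=3):
    """Merge the tracked target y into template T via SVD, then compress the
    columns with K-means; the k class means u_k form the new template (eq. 7)."""
    M = np.column_stack([T, y])
    U, s, Vt = np.linalg.svd(M, full_matrices=False)   # full SVD stands in for
    T_star = U @ np.diag(s) @ Vt                       # the incremental update (eq. 5)
    cols = T_star.T                                    # samples = template columns
    rng = np.random.RandomState(0)
    centers = cols[rng.choice(len(cols), k, replace=False)]
    for _ in range(10):                                # K-means on eq. (6)
        d2 = ((cols[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)                     # assignment r_ik
        for j in range(k):                             # recompute class means u_k
            if np.any(labels == j):
                centers[j] = cols[labels == j].mean(axis=0)
    return centers.T                                   # T_new = [u_1, ..., u_k]

T_new = update_template(np.random.RandomState(2).rand(8, 6),
                        np.random.RandomState(3).rand(8), k=3)
assert T_new.shape == (8, 3)
```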

The template update method provided by the present invention is more robust to occlusion and illumination changes. Unlike traditional template updating, it emphasizes selecting templates that contribute substantially to target tracking while avoiding trivial templates, and the unsupervised K-means training of the template largely removes redundant template information, thereby improving the real-time performance of tracking.

Table 3 shows the procedure of the adaptive simultaneous sparse representation target tracking algorithm of an embodiment of the present invention:

Table 3

Below, the target tracking method provided by the present invention (Ours) is compared experimentally with five existing methods with good tracking performance: circulant structure tracking with kernels (CSK), accelerated proximal gradient L1 tracking (L1APG), multi-task tracking (MTT), sparse prototype tracking (SPT), and the sparse collaborative model (SCM). All experiments were run on a platform with Matlab 2012a, 2 GB of memory, and an Intel(R) Core(TM) i3 CPU.

Data and experiment description:

The experiments use 14 videos with different tracking challenges, including occlusion, illumination change, background clutter, pose change, low resolution, and fast motion; the video sequence attributes are summarized in Table 4. The videos in Table 4 contain different noises, where OV denotes target loss (out of view), BC background clutter, OCC complete occlusion, OCP partial occlusion, OPR out-of-plane rotation, LR low resolution, FM fast motion, SV size (scale) variation, and IV illumination variation. Three evaluation methods are used in the experiments, each of which reflects tracking performance to some extent: center local error (CLE), overlap ratio (OR), and area under the curve (AUC). Given the ground-truth target box Rg and the tracked target box Rt of a frame, with center positions pg = (xg, yg) and pt = (xt, yt), the center local error is CLE = ||pg − pt||2 and the overlap ratio is

OR = area(Rg ∩ Rt) / area(Rg ∪ Rt)

where area(·) denotes all the pixels in the region, and each point of the AUC curve gives the success rate of tracking the video when the overlap ratio exceeds a given threshold η. In particular, we set η = 0.5: a frame is considered tracked successfully when OR > 0.5. The tracking results are shown in Tables 4, 5, and 6.
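The two frame-level metrics can be computed directly. The sketch below assumes axis-aligned boxes given as (x, y, w, h); the patent counts region pixels, which for such boxes reduces to the same ratio.

```python
import numpy as np

def center_local_error(pg, pt):
    """CLE = ||p_g - p_t||_2 between ground-truth and tracked box centers."""
    return float(np.linalg.norm(np.asarray(pg, float) - np.asarray(pt, float)))

def overlap_ratio(Rg, Rt):
    """OR = area(Rg ∩ Rt) / area(Rg ∪ Rt) for axis-aligned (x, y, w, h) boxes."""
    xg, yg, wg, hg = Rg
    xt, yt, wt, ht = Rt
    iw = max(0.0, min(xg + wg, xt + wt) - max(xg, xt))   # intersection width
    ih = max(0.0, min(yg + hg, yt + ht) - max(yg, yt))   # intersection height
    inter = iw * ih
    union = wg * hg + wt * ht - inter
    return inter / union if union > 0 else 0.0

# a frame counts as a success when OR > 0.5 (eta = 0.5 in the experiments)
assert center_local_error((0, 0), (3, 4)) == 5.0
assert overlap_ratio((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0
assert abs(overlap_ratio((0, 0, 2, 2), (1, 0, 2, 2)) - 1.0 / 3.0) < 1e-12
```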

Table 4

Video sequence | Frames | Noise(s) | Video sequence | Frames | Noise(s)
Walking2 | 495 | SV, OCP, LR | Suv | 945 | OCC, OV, BC
Car4 | 659 | IV, SV | CarDark | 393 | IV, BC, LR
Car2 | 913 | IV, SV, BC | Deer | 71 | FM, LR, BC
Girl | 500 | OPR, OCC, LR | Singer2 | 366 | IV, OPR, BC
FaceOcc2 | 812 | OCC, OPR, IV | Skater2 | 435 | SV, OPR
Football | 362 | OCC, OPR, BC | Dudek | 1145 | OCC, BC, OV
FaceOcc1 | 892 | OCC | Subway | 175 | OCC, BC

Table 5 compares the performance of the different algorithms in terms of average overlap rate, where AOR denotes the total average overlap rate; a larger average overlap rate indicates better tracking performance. The experimental parameters are set as follows: regularization parameters λ1 = 0.1 and λ2 = 0.1; penalty factor β = 0.1; minimum cosine angle threshold αmin = 20 and maximum αmax = 35; maximum number of template basis vectors 15; number of sampled particles 600; image patch size 25×25; maximum number of iterations Loop = 20; convergence tolerance tol = 0.001. The parameters λ1 and λ2 were obtained by cross-validation, and λ2 is adjusted by the following rule: if the energy of the Laplacian noise S is large (i.e., the target suffers heavy occlusion, shape change, or illumination change), the value of λ2 should be small, and vice versa. The comparison in Table 5 shows that the tracking method provided by the present invention (Ours) is clearly higher than the other tracking methods in both the per-category and the total average overlap rate, i.e., it achieves the best tracking performance.

Table 5

Table 6 compares the performance of the different algorithms in terms of average center location error, where ACLE denotes the overall average center error; a smaller average center error indicates better tracking performance. The data in Table 6 make it clear that the tracking method provided by the present invention (Ours) yields markedly lower errors than the other tracking methods, both per video category and overall, i.e., the proposed tracking method achieves the best final performance.

Table 6

Table 7 compares the performance of the different algorithms in terms of average success rate, where ASR denotes the overall average success rate.

Table 7

The comparison data in Table 7 show that the tracking method provided by the present invention (Ours) outperforms the other tracking methods, whether evaluated by the average success rate on individual video categories or by the overall average success rate.
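The three scores used above (AOR, ACLE, ASR) are standard overlap- and distance-based tracking metrics. Below is a minimal sketch of how they can be computed from predicted and ground-truth bounding boxes; the function names and the (x, y, w, h) box convention are assumptions of the sketch, not part of the patent.

```python
import numpy as np

def overlap_ratio(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def center_error(a, b):
    """Euclidean distance between the two box centers (CLE)."""
    ca = (a[0] + a[2] / 2.0, a[1] + a[3] / 2.0)
    cb = (b[0] + b[2] / 2.0, b[1] + b[3] / 2.0)
    return float(np.hypot(ca[0] - cb[0], ca[1] - cb[1]))

def sequence_scores(pred, gt, iou_thr=0.5):
    """Per-sequence average overlap, average center error and success rate."""
    ious = [overlap_ratio(p, g) for p, g in zip(pred, gt)]
    cles = [center_error(p, g) for p, g in zip(pred, gt)]
    succ = sum(i > iou_thr for i in ious) / len(ious)
    return float(np.mean(ious)), float(np.mean(cles)), succ
```

A tracked frame counts toward the success rate when its overlap exceeds the chosen threshold (0.5 here).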

To further clarify the tracking algorithm proposed by the present invention, the specific influence of the Laplacian noise term in the model and of the template update criterion on the tracking results is examined below.

The traditional template update method updates directly according to the similarity between the tracked target and the template: if the similarity exceeds a given threshold, the target is considered to have suffered heavy noise contamination, and the tracked target replaces the template vector with the smallest weight. Such a replacement is rather crude, because it introduces a large noise error and thus creates considerable uncertainty for tracking the target in the next frame. The new template update method proposed by the present invention weakens this noise influence, as follows:

(1) The new template update method effectively balances the weights of the original template vectors and the new tracked target, realizing the template update through a forgetting factor;

(2) The new template update method introduces K-means clustering, which effectively removes redundant template vectors and improves the real-time performance of tracking; moreover, since each class center is computed as a weighted average, the noise is also effectively attenuated.
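As an illustration of point (2), the following sketch clusters the template vectors with k-means and replaces each class by the weighted mean of its members. The initialization and the weighting scheme are hypothetical stand-ins for illustration, not the exact procedure of the invention.

```python
import numpy as np

def compress_templates(T, w, k, iters=20, seed=0):
    """Cluster template columns with k-means; each class centre is the
    weighted average of its members, which damps noisy templates.
    T: d x n template matrix, w: length-n importance weights, k: target count."""
    rng = np.random.default_rng(seed)
    n = T.shape[1]
    centres = T[:, rng.choice(n, size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every template column to its nearest centre
        d2 = ((T[:, None, :] - centres[:, :, None]) ** 2).sum(axis=0)  # k x n
        labels = d2.argmin(axis=0)
        for j in range(k):
            idx = np.flatnonzero(labels == j)
            if idx.size:  # weighted mean of the class members
                centres[:, j] = (T[:, idx] * w[idx]).sum(axis=1) / w[idx].sum()
    return centres
```

Redundant, nearly identical templates collapse into one centre, so the dictionary stays small and the weighted averaging suppresses isolated noisy columns.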

Specific experiments comparing the respective influence of the template update and of the Laplacian noise term are given below. The compared methods are the MTT algorithm, ASSAT (Laplacian noise only), ASSAT (template update only), and ASSAT (Laplacian noise + template update); the test data are the sequences Skater2, Dudek, SUV, Walking2, Subway, Deer, etc.

Table 8 compares the influence of the Laplacian noise term on the results. It shows that, except for the Walking2 sequence, adding the Laplacian noise term yields better tracking than the MTT algorithm; the original template update method, however, limits its tracking performance, whereas the proposed new template update method further improves the tracking performance of the ASSAT algorithm.

Table 8

Table 9 compares the influence of the different template update schemes on the results. It shows that the ASSAT variant using only the template update performs about as well as the IVT method, with little improvement. On the Skater2 and Subway sequences both methods perform poorly, because these sequences contain heavy occlusion: an ASSAT variant that considers only the template update and ignores the Laplacian noise cannot track the target effectively, and the same holds for IVT. In fact, in such heavily occluded cases the failure, when the Laplacian noise is not modeled, can be attributed to the noise corrupting the sparse structure of the solution X in formula (1).

Table 9

The influence of the noise model on the solution under occlusion is also reflected in the structure of the solution: when the Laplacian noise is modeled, the resulting solution is sparse and optimal, whereas without the Laplacian noise the solution is dense and non-optimal. Preserving the sparse structure of the solution therefore directly affects the tracking performance of the algorithm.
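The sparsity-preserving role of the l1 terms on X and S comes from the soft-thresholding (shrinkage) operator, which is the proximal step applied inside each ADMM iteration. A small illustration of the operator itself (a generic shrinkage, not the full solver):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1, the shrinkage step used in
    each ADMM iteration for both the coefficients X and the noise S."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# A dense residual is shrunk to a sparse vector: only entries whose
# magnitude exceeds the threshold survive, which is how the l1 noise
# term absorbs occlusion while keeping the coefficient matrix sparse.
r = np.array([0.05, -0.8, 0.02, 1.3, -0.04])
shrunk = soft_threshold(r, 0.1)  # small entries become exactly zero
```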

The present invention also provides a fast target tracking method based on adaptive sparse representation, comprising the following steps:

A. Establish the target tracking model;

B. Obtain the optimal tracking target using an alternating iteration method;

C. Update the target template T to obtain the new template T*;

D. Return the best tracking target and the target template set, and continue tracking in the next frame.
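Steps A-D can be sketched as a per-frame loop. The sketch below substitutes a plain ISTA solver for the alternating (ADMM) step B and a sliding-window refresh for the template update of step C, so it illustrates the control flow only; all function names and these simplifications are assumptions of the sketch.

```python
import numpy as np

def ista(D, y, lam=0.1, iters=100):
    """l1-regularised least squares per candidate, solved with plain ISTA
    as a simple stand-in for the ADMM step of the invention."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ x - y)
        v = x - g / L
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
    return x

def track(frames_candidates, T):
    """Steps A-D: for every frame, represent each candidate over the
    template set, keep the candidate with the smallest residual, then
    fold the winner back into the templates (crude sliding-window update)."""
    results = []
    for Y in frames_candidates:            # Y: d x m candidate matrix
        errs = []
        for j in range(Y.shape[1]):
            x = ista(T, Y[:, j])
            errs.append(np.linalg.norm(Y[:, j] - T @ x))
        best = int(np.argmin(errs))
        y = Y[:, best]
        T = np.column_stack([T[:, 1:], y]) # step C stand-in: slide the window
        results.append(best)
    return results, T
```

The candidate that the templates reconstruct with the smallest residual is kept as the tracking result for the frame, then the template set is refreshed before moving to the next frame.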

Preferably, step C comprises:

C1. Compute the similarity between the tracked target and the template mean, denoted sim;

C2. Compare the similarity sim with the cosine-angle thresholds α and β: if sim < α, proceed to step C3 and update the template; if α ≤ sim ≤ β, set y ← m and proceed to step C3 to update the template; if sim > β, the tracked target is essentially dissimilar to the template, indicating severe noise contamination, and the template is not updated;

C3. Perform a singular value decomposition of the template, T = UΣV^T, incrementally update the left singular vectors U of the template T to obtain the new singular vectors U*, and compute the new template by combining formulas (5) and (6).
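A compact illustration of the gate in steps C1-C2 followed by an SVD-based refresh: the batch SVD of [T, y] below stands in for the incremental left-singular-vector update of formulas (5)-(6), and the threshold semantics (angle in degrees, with αmin = 20 and αmax = 35 as in the experiments) are assumptions for the sketch.

```python
import numpy as np

def cosine_similarity(y, T):
    """Angle between a tracked target and the template mean, in degrees;
    a small angle means the target is very similar to the templates."""
    c = T.mean(axis=1)
    cosv = np.clip(y @ c / (np.linalg.norm(y) * np.linalg.norm(c)), -1.0, 1.0)
    return float(np.degrees(np.arccos(cosv)))

def update_template(T, y, alpha=20.0, beta=35.0, rank=None):
    """Gate of steps C1-C2, then a batch SVD of [T, y] as a simple
    stand-in for the incremental update of step C3."""
    ang = cosine_similarity(y, T)
    if ang > beta:                 # too dissimilar: heavy noise, skip update
        return T
    if ang >= alpha:               # moderately similar: fall back to the mean
        y = T.mean(axis=1)
    U, s, Vt = np.linalg.svd(np.column_stack([T, y]), full_matrices=False)
    r = rank or T.shape[1]
    # rank-r refresh of the template columns from the augmented matrix
    return (U[:, :r] * s[:r]) @ Vt[:r, :T.shape[1]]
```

When the new target already lies in the template subspace, the refresh leaves the templates essentially unchanged; when it is too far from them, the gate rejects it instead of contaminating the dictionary.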

Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and spirit of the present invention; the scope of the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A robust target tracking method based on adaptive simultaneous sparse representation, characterized by comprising the following steps:

S1. Adaptively establish a simultaneous sparse tracking model according to the magnitude of the Laplacian noise energy;

S2. Solve the established tracking model;

S3. Update the template.

2. The robust target tracking method based on adaptive simultaneous sparse representation according to claim 1, characterized in that step S1 is performed by comparing the Laplacian mean noise ||S||2 with a given noise energy threshold τ and adaptively establishing the simultaneous sparse tracking model according to the comparison result:

when ||S||2 ≤ τ, the simultaneous sparse tracking model is

min_X (1/2)||Y − DX||_F^2 + λ1||X||_{1,1}    (1)

when ||S||2 > τ, the simultaneous sparse tracking model is

min_{X,S} (1/2)||Y − DX − S||_F^2 + λ1||X||_{1,1} + λ2||S||_{1,1}    (2)

where D denotes the tracking template, Y the candidate target set, λ1 and λ2 the regularization parameters of the model, X the sparse coefficients and S the Laplacian noise; the squared Frobenius-norm term is the fidelity term of the sparse model, ||X||_{1,1} acts on the coefficient matrix, and ||S||_{1,1} characterizes the Laplacian noise energy.

3. The robust target tracking method based on adaptive simultaneous sparse representation according to claim 2, characterized in that the tracking template D is expressed as D = [T, I], where T is the target template, T = [T1, T2, …, Tn] ∈ R^{d×n} (d ≫ n), d denotes the dimension of the image, n denotes the number of template basis vectors, and every column of T is a zero-mean vector.
4. The robust target tracking method based on adaptive simultaneous sparse representation according to claim 3, characterized in that step S2 comprises:

S21. Compute the similarity between the tracked target y_t and the mean of the template T, denoted sim;

S22. Compare sim with the cosine-angle threshold α and adaptively select the model for tracking:

when sim < α, the tracking model of formula (1) above is adopted and solved by the alternating direction method of multipliers (ADMM) to obtain the sparse coefficients X;

when sim ≥ α, the tracking model of formula (2) above is adopted and solved by the alternating direction method of multipliers (ADMM) to obtain the sparse coefficients X and the Laplacian noise S.

5. The robust target tracking method based on adaptive simultaneous sparse representation according to claim 4, characterized in that in step S21 the similarity between the tracked target and the mean of the template T is computed according to the following formula:

where c is the mean of the template T and y is the tracked target; ||τ||2 is then equivalent to the arccosine between the target and the template mean, i.e., the cosine angle.

6. The robust target tracking method based on adaptive simultaneous sparse representation according to any one of claims 1-5, characterized in that the template T is updated when ||S||2 ≤ τ.

7. The robust target tracking method based on adaptive simultaneous sparse representation according to claim 6, characterized in that the template update method comprises:

S31. Perform singular value decompositions of the current template T and of the tracked target y respectively, as follows:

S32. Use the singular vectors u, s, v to incrementally update U, S, V, obtaining the new singular vectors U*, S*, V*; the new template is then expressed as

T* = U*S*V*^T    (5)

8. The robust target tracking method based on adaptive simultaneous sparse representation according to claim 7, characterized by further comprising step S33: training the templates with the unsupervised K-means method; given an initial number of classes k, then

where i denotes the i-th sample; when the sample belongs to class k, r_ik = 1, otherwise r_ik = 0; u_k is the mean of all samples belonging to class k, and J denotes the sum of the distances from the sample points to the mean of their class. From formula (6) the mean of all samples of the k-th class is obtained, and the updated template is

T_new = [u1, u2, …, u_k]    (7).
CN201710625586.4A 2017-07-27 2017-07-27 Robust target tracking method based on self-adaptive simultaneous sparse representation Expired - Fee Related CN107844739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710625586.4A CN107844739B (en) 2017-07-27 2017-07-27 Robust target tracking method based on self-adaptive simultaneous sparse representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710625586.4A CN107844739B (en) 2017-07-27 2017-07-27 Robust target tracking method based on self-adaptive simultaneous sparse representation

Publications (2)

Publication Number Publication Date
CN107844739A true CN107844739A (en) 2018-03-27
CN107844739B CN107844739B (en) 2020-09-04

Family

ID=61683198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710625586.4A Expired - Fee Related CN107844739B (en) 2017-07-27 2017-07-27 Robust target tracking method based on self-adaptive simultaneous sparse representation

Country Status (1)

Country Link
CN (1) CN107844739B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858546A (en) * 2019-01-28 2019-06-07 北京工业大学 A kind of image-recognizing method based on rarefaction representation
CN110610527A (en) * 2019-08-15 2019-12-24 苏州瑞派宁科技有限公司 SUV calculation method, device, equipment, system and computer storage medium
CN110738683A (en) * 2018-07-19 2020-01-31 中移(杭州)信息技术有限公司 computer vision tracking method and device
CN112630774A (en) * 2020-12-29 2021-04-09 北京润科通用技术有限公司 Target tracking data filtering processing method and device
CN115861379A (en) * 2022-12-21 2023-03-28 山东工商学院 A video tracking method for target template updating based on Siamese network based on locally trusted templates

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012078702A1 (en) * 2010-12-10 2012-06-14 Eastman Kodak Company Video key frame extraction using sparse representation
CN103440645A (en) * 2013-08-16 2013-12-11 东南大学 Target tracking algorithm based on self-adaptive particle filter and sparse representation
CN105930812A (en) * 2016-04-27 2016-09-07 东南大学 Vehicle brand type identification method based on fusion feature sparse coding model
CN106203495A (en) * 2016-07-01 2016-12-07 广东技术师范学院 A kind of based on the sparse method for tracking target differentiating study
CN106651912A (en) * 2016-11-21 2017-05-10 广东工业大学 Compressed sensing-based robust target tracking method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TIANZHU ZHANG ETAL.: "Robust visual tracking via multi-task sparse learning", 《2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
ZHANG Xiaowei et al.: "A new detection method for range-spread targets using sparse representation", Journal of Xidian University (Natural Science Edition) *
HUANG Jingjing: "Research on an improved multi-selection orthogonal matching pursuit algorithm based on compressed sensing", China Master's Theses Full-text Database, Information Science and Technology *


Also Published As

Publication number Publication date
CN107844739B (en) 2020-09-04

Similar Documents

Publication Publication Date Title
Hu et al. Deep metric learning for visual tracking
CN103295242B (en) A kind of method for tracking target of multiple features combining rarefaction representation
CN107844739B (en) Robust target tracking method based on self-adaptive simultaneous sparse representation
CN104268539A (en) High-performance human face recognition method and system
CN105719285A (en) Pedestrian detection method based on directional chamfering distance characteristics
CN101499128A (en) Three-dimensional human face action detecting and tracing method based on video stream
CN103226835A (en) Target tracking method and system based on on-line initialization gradient enhancement regression tree
CN103886325A (en) Cyclic matrix video tracking method with partition
CN108830170B (en) End-to-end target tracking method based on layered feature representation
CN109146920A (en) A kind of method for tracking target that insertion type is realized
CN105608710A (en) Non-rigid face detection and tracking positioning method
CN106529526A (en) Object tracking algorithm based on combination between sparse expression and prior probability
Yuan et al. Multiple object detection and tracking from drone videos based on GM-YOLO and multi-tracker
Li et al. Robust object tracking via multi-feature adaptive fusion based on stability: contrast analysis
Zhan et al. Salient superpixel visual tracking with graph model and iterative segmentation
CN106056627A (en) Robustness object tracking method based on local identification sparse representation
Chen et al. Robust vehicle detection and viewpoint estimation with soft discriminative mixture model
Zhou et al. Locality-constrained collaborative model for robust visual tracking
CN108520529A (en) Visible light and infrared video target tracking method based on convolutional neural network
Han Image object tracking based on temporal context and MOSSE
Moridvaisi et al. An extended KCF tracking algorithm based on TLD structure in low frame rate videos
Qiu et al. A moving vehicle tracking algorithm based on deep learning
CN104517300A (en) Vision judgment tracking method based on statistical characteristic
Lian A novel real-time object tracking based on kernelized correlation filter with self-adaptive scale computation in combination with color attribution
CN118071932A (en) Three-dimensional static scene image reconstruction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200904