CN109859241B - Adaptive feature selection and time consistency robust correlation filtering visual tracking method - Google Patents


Info

Publication number: CN109859241B
Application number: CN201910019982.1A
Authority: CN (China)
Prior art keywords: regressor, constraint, visual tracking, time consistency
Legal status: Active (granted)
Other versions: CN109859241A (in Chinese)
Inventors: 王菡子 (Hanzi Wang), 梁艳杰 (Yanjie Liang), 严严 (Yan Yan), 刘祎 (Yi Liu)
Assignee (current and original): Xiamen University
Application filed by Xiamen University
Priority to CN201910019982.1A (priority date 2019-01-09)
Publication of CN109859241A
Application granted; publication of CN109859241B

Abstract

A robust correlation filtering visual tracking method with adaptive feature selection and temporal consistency, relating to computer vision. An elastic net and a temporal consistency constraint are introduced into correlation filter learning simultaneously: the filter adaptively selects discriminative features while suppressing interfering ones, and model learning and updating are unified in a single objective. This effectively addresses two weaknesses of traditional correlation filters, namely limited discriminative power and degradation over time, and improves robustness to occlusion, deformation, rotation, and background interference. Under the elastic-net and temporal-consistency constraints, the correlation filter adaptively selects discriminative features that are temporally continuous and regionally distributed. The derived correlation filter learning problem can be solved by ADMM, which converges efficiently in only a few iterations. The method achieves good performance with high precision and high speed.

Description

Adaptive feature selection and time consistency robust correlation filtering visual tracking method
Technical Field
The invention relates to computer vision technology, and in particular to a robust correlation filtering visual tracking method with adaptive feature selection and temporal consistency.
Background
Humans have a highly developed visual perception of video: the brain can quickly and accurately locate a moving target in a scene. For computers to mimic the visual perception of the human brain, they must approach human levels of both speed and accuracy. Visual tracking is a fundamental problem in computer vision and a basic component of visual perception; its speed and precision determine the real-time performance and accuracy of the perception systems built on it. Target tracking is an important research direction in computer vision, with applications in intelligent video surveillance, human-computer interaction, robot navigation, virtual reality, medical diagnosis, public safety, and other fields. The task first selects an object of interest in an initial frame of a video and then predicts the state of that object in each subsequent frame. Target tracking is challenging: the target often changes in appearance (occlusion, deformation, rotation, etc.) during tracking, accompanied by complicated illumination changes, interference from similar objects in the background, and fast motion, all of which make the task difficult. In recent years, target tracking methods based on correlation filtering and deep learning have become the mainstream research directions due to their good performance.
Correlation-filtering-based methods have become one of the research hotspots in target tracking in recent years; they offer a clear speed advantage and achieve good results on standard datasets and in various competitions. KCF opened the wave of applying correlation filtering to target tracking, and many researchers have since improved on it. To handle scale variation, DSST additionally trains a 1-D correlation filter for scale estimation. To alleviate the boundary effect, SRDCF and CSR-DCF introduce a spatial regularization term and a spatial reliability map, respectively, to penalize filter coefficients outside the target region; CACF trains the correlation filter on context samples together with the original target sample, remaining real-time while greatly improving precision. To sharpen the peak of the response map, RCF and PCF introduce different loss and regularization terms during filter training. On the feature side, Staple effectively fuses complementary color histograms and HOG features, achieving robust real-time tracking. For long-term tracking, researchers proposed LCT and MUSTer, which use an online-trained SVM classifier and feature-point matching, respectively, to re-detect the target. In the last two years, some researchers have combined correlation filtering with other tracking methods to obtain complementary advantages.
LMCF introduces correlation filtering into the Struck tracking framework, combining the high speed of correlation filtering with the strong discriminative power of Struck to achieve fast and robust tracking; MCPF introduces correlation filtering into a particle filtering framework to effectively handle the scale variation problem. In addition, C-COT and its improved version ECO extend discrete correlation filters to continuous convolution filters to achieve accurate tracking. The present invention belongs to this family of correlation-filtering-based target tracking methods.
In recent years, deep-learning-based methods have become another research hotspot in target tracking thanks to their higher precision. Current deep-learning-based target tracking methods fall into three categories. The first extracts deep features from a pre-trained CNN and applies them to existing tracking methods to improve performance: DeepSRDCF applies first-layer deep features from VGG-Net-16 to SRDCF, while HCF applies multi-layer deep features from VGG-Net-19 to a correlation filtering framework and achieves tracking by fusing the multi-layer response maps. CNN features are more expressive than traditional HOG and Color Names features, but have higher computational complexity. The second category casts target tracking as an instance retrieval problem and trains the matching function offline on external video data: SINT and SiamFC solve deep similarity measurement by training Siamese networks offline; CFNet adds a differentiable correlation filter layer to the Siamese network to train end-to-end feature representations suited to correlation filtering; DSiam trains a dynamic Siamese network on continuous image sequences to adapt to appearance changes and background interference during tracking; EAST introduces reinforcement learning into the Siamese framework to adaptively select the deep features of a given layer for fast and robust tracking. These offline-trained methods are mostly capable of running in real time, but their accuracy depends on the network and the training data. The third category constructs a deep network, selects samples for offline training, and fine-tunes the network online to realize tracking; the representative method is MDNet.
In addition, SANet distinguishes similar distractors through an RNN, and ADNet adapts to complex tracking environments through reinforcement learning. These methods greatly improve tracking performance over traditional trackers but struggle to achieve real-time tracking. Among the deep-learning-based methods, the present invention belongs to the first category.
Disclosure of Invention
The invention aims to provide a robust correlation filtering visual tracking method with adaptive feature selection and temporal consistency.
The invention comprises the following steps:
1) In the t-th frame, given the located target, a base sample is constructed from the target and its surrounding background; the training samples consist of all cyclic shifts of the base sample, with labels determined by a Gaussian function. The regressor is trained as:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2    (formula one)

where * is the convolution operator, x_t and f_t are the training sample and filter of the t-th frame, y is the label determined by a Gaussian function, and D is the feature dimension;
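As a concrete illustration, the Gaussian label y and the data term of formula one can be sketched in NumPy; the function names, array shapes, and the circular-convolution-via-FFT evaluation are this example's own choices, not code from the patent:

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """2-D Gaussian regression target y for an h-by-w sample, peak rolled to (0, 0)
    to match the cyclic-shift convention of correlation filters."""
    ys = np.arange(h) - h // 2
    xs = np.arange(w) - w // 2
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    y = np.exp(-(yy**2 + xx**2) / (2.0 * sigma**2))
    return np.roll(y, (-(h // 2), -(w // 2)), axis=(0, 1))

def data_term(x, f, y):
    """0.5 * || sum_d x^d * f^d - y ||^2 with * the circular convolution,
    evaluated in the Fourier domain; x and f have shape (D, H, W)."""
    resp = np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(f)).real.sum(axis=0)
    return 0.5 * np.sum((resp - y) ** 2)
```

With an all-zero filter the residual is just the label, so the data term equals 0.5‖y‖², which gives a quick sanity check of the convention.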
2) Since all features extracted from the sample, both discriminative and interfering, are used to train the regressor in step 1), a sparse constraint is introduced into the regressor of step 1) to select discriminative features while suppressing interfering ones, realizing pixel-level feature selection. The regressor with the sparse constraint is:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|f_t^d\|_1    (formula two)

where \|\cdot\|_1 denotes the \ell_1 norm and \lambda_1 is the corresponding regularization parameter;
3) In visual tracking, discriminative and interfering features are often concentrated in particular regions. To give the selected features this regional distribution, a squared \ell_2 constraint is added to the regressor of step 2) to improve its robustness; together with the \ell_1 term this forms an elastic-net constraint:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|f_t^d\|_1 + \frac{\lambda_2}{2} \sum_{d=1}^{D} \|f_t^d\|_2^2    (formula three)

where \|\cdot\|_2 denotes the \ell_2 norm and \lambda_2 is the corresponding regularization parameter;
4) In visual tracking, the regressors of adjacent frames are temporally consistent. To fully exploit this property, a temporal consistency constraint is added to the regressor of step 3), yielding a regressor with both elastic-net and temporal consistency constraints:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|f_t^d\|_1 + \frac{\lambda_2}{2} \sum_{d=1}^{D} \|f_t^d\|_2^2 + \frac{\mu}{2} \|f_t - f_{t-1}\|_2^2    (formula four)

where f_{t-1} is the correlation filter learned in frame t-1 and \mu is the temporal regularization parameter.
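The full objective with the elastic-net and temporal terms can be evaluated directly; a minimal NumPy sketch (the function name and array shapes are this example's assumptions):

```python
import numpy as np

def objective(x, f, f_prev, y, lam1, lam2, mu):
    """Formula four: data term + l1 + squared-l2 (elastic net) + temporal term.
    x, f, f_prev have shape (D, H, W); y has shape (H, W)."""
    resp = np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(f)).real.sum(axis=0)
    data = 0.5 * np.sum((resp - y) ** 2)
    l1 = lam1 * np.abs(f).sum()                    # sparse (feature-selecting) term
    l2 = 0.5 * lam2 * np.sum(f ** 2)               # regional smoothness term
    temporal = 0.5 * mu * np.sum((f - f_prev) ** 2)  # consistency with frame t-1
    return data + l1 + l2 + temporal
```

Setting f = f_prev = 0 zeroes every penalty, leaving only 0.5‖y‖², which again checks the convention.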
5) To solve the regressor (correlation filter) of step 4), an auxiliary variable g_t (with g_t^d = f_t^d, d = 1, ..., D) is introduced and the problem is decomposed by the alternating direction method of multipliers (ADMM) into 3 subproblems, each with a closed-form solution; a few iterations of optimization yield the solution of the regressor (correlation filter). The specific implementation is as follows. First, with the auxiliary variable, the problem to be optimized becomes:

\min_{f_t, g_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|g_t^d\|_1 + \frac{\lambda_2}{2} \sum_{d=1}^{D} \|g_t^d\|_2^2 + \frac{\mu}{2} \|f_t - f_{t-1}\|_2^2, \quad \text{s.t. } g_t^d = f_t^d    (formula five)
Then the equality constraint is brought into the objective with the augmented Lagrange multiplier method:

L(f_t, g_t, h_t) = E(f_t, g_t) + h_t^{\mathrm{T}} (f_t - g_t) + \frac{\gamma}{2} \|f_t - g_t\|_2^2    (formula six)

where E(f_t, g_t) denotes the objective of formula five, h_t is the Lagrange multiplier, and \gamma is the penalty factor. Since L(f_t, g_t, h_t) is convex, ADMM can be used to alternately optimize the following subproblems:
f_t^{(i+1)} = \arg\min_{f} L(f, g_t^{(i)}, h_t^{(i)})
g_t^{(i+1)} = \arg\min_{g} L(f_t^{(i+1)}, g, h_t^{(i)})
h_t^{(i+1)} = h_t^{(i)} + \gamma \big(f_t^{(i+1)} - g_t^{(i+1)}\big)    (formula seven)
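The three alternating updates above can be sketched on a 1-D, single-channel toy problem; this is a simplification of the patent's multi-channel case (the penalty \gamma is held fixed here for brevity, and all names are this example's own):

```python
import numpy as np

def soft(z, tau):
    """Soft-thresholding (shrinkage) operator sigma(z, tau)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def admm_filter(x, y, f_prev, lam1=0.01, lam2=3e-5, mu=20.0, gamma=1.0, iters=2):
    """Toy 1-D single-channel ADMM for
    min_f 0.5||x*f - y||^2 + lam1||f||_1 + 0.5*lam2||f||^2 + 0.5*mu||f - f_prev||^2
    via the splitting f = g (formula seven)."""
    X, Y, Fp = np.fft.fft(x), np.fft.fft(y), np.fft.fft(f_prev)
    g = f_prev.copy()
    h = np.zeros_like(f_prev)
    for _ in range(iters):
        # f-subproblem: per-frequency closed form (formula eleven collapses
        # to a scalar division when D = 1)
        G, H = np.fft.fft(g), np.fft.fft(h)
        F = (np.conj(X) * Y + mu * Fp - H + gamma * G) / (np.abs(X) ** 2 + mu + gamma)
        f = np.fft.ifft(F).real
        # g-subproblem: elastic-net proximal step (formula twelve)
        g = soft(gamma * f + h, lam1) / (lam2 + gamma)
        # h-subproblem: dual ascent, 3rd line of formula seven
        h = h + gamma * (f - g)
    return f, g
```

The per-frequency division in the f-step is valid because Parseval's theorem scales every quadratic term by the same constant, which cancels in the argmin.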
Subproblem f_t: given g_t and h_t, Parseval's theorem transfers this subproblem from the spatial domain to the frequency domain:

\min_{\hat{f}_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} \hat{x}_t^d \odot \hat{f}_t^d - \hat{y}\Big\|_2^2 + \frac{\mu}{2} \|\hat{f}_t - \hat{f}_{t-1}\|_2^2 + \hat{h}_t^{\mathrm{T}} (\hat{f}_t - \hat{g}_t) + \frac{\gamma}{2} \|\hat{f}_t - \hat{g}_t\|_2^2    (formula eight)
where \hat{\cdot} denotes the discrete Fourier transform of the corresponding variable. In the above equation, the k-th vector of the correlation filter and of the training sample (each vector consisting of the elements of the D channels) generates the k-th element of the label. Letting p_k(\cdot) denote the k-th vector of the corresponding variable, the subproblem becomes:

\min_{p_k(\hat{f}_t)} \frac{1}{2}\big| p_k(\hat{x}_t)^{\mathrm{T}} p_k(\hat{f}_t) - \hat{y}_k \big|^2 + \frac{\mu}{2} \|p_k(\hat{f}_t) - p_k(\hat{f}_{t-1})\|_2^2 + p_k(\hat{h}_t)^{\mathrm{T}} \big(p_k(\hat{f}_t) - p_k(\hat{g}_t)\big) + \frac{\gamma}{2} \|p_k(\hat{f}_t) - p_k(\hat{g}_t)\|_2^2    (formula nine)
Setting the derivative of the above equation to zero yields:

p_k(\hat{f}_t) = \big( p_k(\hat{x}_t) p_k(\hat{x}_t)^{\mathrm{T}} + (\mu + \gamma) I \big)^{-1} q_k    (formula ten)
where q_k = \hat{y}_k \, p_k(\hat{x}_t) + \mu \, p_k(\hat{f}_{t-1}) - p_k(\hat{h}_t) + \gamma \, p_k(\hat{g}_t). Since p_k(\hat{x}_t) p_k(\hat{x}_t)^{\mathrm{T}} is a rank-1 matrix, the subproblem can be computed efficiently with the Sherman-Morrison formula:

p_k(\hat{f}_t) = \frac{1}{\mu + \gamma} \left( I - \frac{p_k(\hat{x}_t) p_k(\hat{x}_t)^{\mathrm{T}}}{\mu + \gamma + p_k(\hat{x}_t)^{\mathrm{T}} p_k(\hat{x}_t)} \right) q_k    (formula eleven)
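The Sherman-Morrison identity used here, (u u^T + cI)^{-1} = (1/c)(I - u u^T / (c + u^T u)), can be checked numerically. The actual algorithm applies it per pixel to complex Fourier coefficients with conjugate transposes; this real-valued check is an illustrative simplification:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 5                                # number of feature channels
u = rng.standard_normal(D)           # stands in for p_k(x_hat) at one pixel
c = 20.0 + 1.0                       # stands in for mu + gamma

# Direct O(D^3) inverse of the rank-1 update
direct = np.linalg.inv(np.outer(u, u) + c * np.eye(D))
# Sherman-Morrison form: only inner/outer products, no matrix inverse
sherman = (np.eye(D) - np.outer(u, u) / (c + u @ u)) / c
assert np.allclose(direct, sherman)
```

This is why formula eleven is cheap: the D-by-D inverse per pixel collapses to a handful of vector operations.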
Subproblem g_t: given f_t and h_t, the subproblem is solved by an iterative shrinkage-thresholding step, whose global optimum is:

g_t^d = \frac{1}{\lambda_2 + \gamma} \, \sigma\big( \gamma f_t^d + h_t^d, \, \lambda_1 \big)    (formula twelve)

where \sigma(\cdot, \cdot) denotes the shrinkage (soft-thresholding) operator;
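The closed-form shrinkage solution can be verified against a brute-force grid search on a scalar instance of the g-subproblem; the scalar reduction and all constants below are this example's simplifications:

```python
import numpy as np

def g_sub(g, f, h, lam1, lam2, gamma):
    """Scalar g-subproblem of the augmented Lagrangian:
    lam1*|g| + lam2/2*g^2 + h*(f - g) + gamma/2*(f - g)^2."""
    return lam1 * np.abs(g) + 0.5 * lam2 * g**2 + h * (f - g) + 0.5 * gamma * (f - g) ** 2

f, h, lam1, lam2, gamma = 0.7, -0.2, 0.3, 0.1, 1.0
# Closed form: soft-threshold gamma*f + h by lam1, then scale by 1/(lam2 + gamma)
z = gamma * f + h
closed = np.sign(z) * max(abs(z) - lam1, 0.0) / (lam2 + gamma)
# Brute force over a fine grid
grid = np.linspace(-3.0, 3.0, 600001)
brute = grid[np.argmin(g_sub(grid, f, h, lam1, lam2, gamma))]
assert abs(closed - brute) < 1e-4
```

With these numbers z = 0.5, the threshold removes 0.3, and dividing by 1.1 gives g ≈ 0.1818, which the grid search confirms.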
Subproblem h_t: given f_t and g_t, h_t is updated by the 3rd equation of (formula seven), and the penalty factor is updated as follows:

\gamma^{(i)} = \max(\gamma_{\min}, \rho \gamma^{(i-1)})    (formula thirteen)

where \gamma_{\min} is the minimum value of \gamma, \rho is a scale factor, and \gamma^{(0)} = \gamma_{ini} when i = 1.
6) For frame t+1, test samples are constructed at several scales centered on the target located in frame t, and the learned correlation filter is applied to obtain the target state in frame t+1. The specific implementation is as follows: first, multi-scale regions centered on the previous frame's target position are cropped and their multi-channel features extracted, denoted z_{t+1}^{s}, s = 1, ..., S. Then the response map of the correlation filter at scale s is computed as:

R_s = F^{-1}\left( \sum_{d=1}^{D} \hat{z}_{t+1}^{s,d} \odot \hat{f}_t^d \right)    (formula fourteen)

where F and F^{-1} denote the FFT and inverse FFT and \odot denotes element-wise multiplication. Finally, the position and size of the target are determined by searching for the maximum value over the S response maps.
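Multi-scale detection reduces to a batched FFT, an element-wise product, a channel sum, and an inverse FFT. A NumPy sketch under assumed array shapes (whether a complex conjugate belongs on the filter depends on the convolution-vs-correlation convention, which this example does not try to settle):

```python
import numpy as np

def detect(z_scales, f):
    """Apply the learned filter to multi-scale test features.
    z_scales: (S, D, H, W) features at S scales; f: (D, H, W) filter.
    Returns the best scale index, peak position, and all response maps."""
    Z = np.fft.fft2(z_scales)                      # FFT over the last two axes
    F = np.fft.fft2(f)                             # broadcasts against Z over S
    R = np.fft.ifft2((Z * F).sum(axis=1)).real     # (S, H, W) response maps
    s, i, j = np.unravel_index(np.argmax(R), R.shape)
    return s, (i, j), R
```

The argmax over all S maps jointly picks the scale (target size) and the spatial peak (target position), matching the final sentence of step 6).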
In step 2), the regularization parameter \lambda_1 = 0.01.
In step 3), the regularization parameter \lambda_2 = 3.0 \times 10^{-5}.
In step 4), the temporal regularization parameter \mu = 20.
In step 5), \gamma_{ini} = 1, \gamma_{\min} = 0.01, \rho = 0.01, and ADMM iterates 2 times.
In step 6), S = 5.
By introducing the elastic net and the temporal consistency constraint into correlation filter learning simultaneously, the method adaptively selects discriminative features while suppressing interfering ones and unifies model learning and updating, effectively addressing the weak discriminative power and temporal degradation of traditional correlation filters and improving robustness to occlusion, deformation, rotation, and background interference. Under the elastic-net and temporal-consistency constraints, the correlation filter adaptively selects discriminative features that are temporally continuous and regionally distributed. The derived learning problem is solved by ADMM in only a few iterations. Experiments on several challenging datasets (OTB-2013, OTB-2015, Temple-Color, UAV-123, and VOT-2016) show that the method obtains good performance with high precision and speed. Specifically, on the OTB-2015 dataset the tracker reaches an AUC score of 68.5% at about 10 FPS when using CNN features, and 65.8% at about 20 FPS when using hand-crafted features.
Drawings
FIG. 1 compares qualitative results of the embodiment of the present invention with other target tracking methods on the DragonBaby and Soccer sequences.
Detailed Description
The method of the present invention is described in detail below with reference to the accompanying drawings and examples.
The embodiment of the invention comprises the following steps:
A. In the t-th frame, given the located target, a base sample is constructed from the target and its surrounding background; the training samples consist of all cyclic shifts of the base sample, with labels determined by a Gaussian function. The regressor is trained as:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2    (formula one)

where * is the convolution operator, x_t and f_t are the training sample and filter of the t-th frame, y is the label determined by a Gaussian function, and D is the feature dimension.
B. Since all features extracted from the samples, both discriminative and interfering, are used to train the regressor in step A, a sparse constraint is introduced into the regressor of step A to select discriminative features while suppressing interfering ones, realizing pixel-level feature selection. The regressor with the sparse constraint is:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|f_t^d\|_1    (formula two)

where \|\cdot\|_1 denotes the \ell_1 norm and \lambda_1 is the corresponding regularization parameter.
C. In visual tracking, discriminative and interfering features are often concentrated in particular regions. To give the selected features this regional distribution, a squared \ell_2 constraint is further added to the regressor of step B to further improve its robustness, yielding the regressor with the elastic-net constraint:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|f_t^d\|_1 + \frac{\lambda_2}{2} \sum_{d=1}^{D} \|f_t^d\|_2^2    (formula three)

where \|\cdot\|_2 denotes the \ell_2 norm and \lambda_2 is the corresponding regularization parameter.
D. In visual tracking, the regressors of adjacent frames are temporally consistent. To fully exploit this property, a temporal consistency constraint is further added to the regressor of step C, yielding a regressor with both elastic-net and temporal consistency constraints:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|f_t^d\|_1 + \frac{\lambda_2}{2} \sum_{d=1}^{D} \|f_t^d\|_2^2 + \frac{\mu}{2} \|f_t - f_{t-1}\|_2^2    (formula four)

where f_{t-1} is the correlation filter learned in frame t-1 and \mu is the temporal regularization parameter.
E. To solve the regressor (correlation filter) of step D, an auxiliary variable g_t (with g_t^d = f_t^d, d = 1, ..., D) is introduced and the problem is decomposed by the alternating direction method of multipliers (ADMM) into three subproblems, each with a closed-form solution; a few iterations of optimization yield the solution of the regressor (correlation filter). The specific implementation is as follows. First, with the auxiliary variable, the problem to be optimized becomes:

\min_{f_t, g_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|g_t^d\|_1 + \frac{\lambda_2}{2} \sum_{d=1}^{D} \|g_t^d\|_2^2 + \frac{\mu}{2} \|f_t - f_{t-1}\|_2^2, \quad \text{s.t. } g_t^d = f_t^d    (formula five)
Then the equality constraint is brought into the objective with the augmented Lagrange multiplier method:

L(f_t, g_t, h_t) = E(f_t, g_t) + h_t^{\mathrm{T}} (f_t - g_t) + \frac{\gamma}{2} \|f_t - g_t\|_2^2    (formula six)

where E(f_t, g_t) denotes the objective of formula five, h_t is the Lagrange multiplier, and \gamma is the penalty factor. Since L(f_t, g_t, h_t) is convex, ADMM can be used to alternately optimize the following subproblems:
f_t^{(i+1)} = \arg\min_{f} L(f, g_t^{(i)}, h_t^{(i)})
g_t^{(i+1)} = \arg\min_{g} L(f_t^{(i+1)}, g, h_t^{(i)})
h_t^{(i+1)} = h_t^{(i)} + \gamma \big(f_t^{(i+1)} - g_t^{(i+1)}\big)    (formula seven)
Subproblem f_t: given g_t and h_t, Parseval's theorem transfers this subproblem from the spatial domain to the frequency domain:

\min_{\hat{f}_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} \hat{x}_t^d \odot \hat{f}_t^d - \hat{y}\Big\|_2^2 + \frac{\mu}{2} \|\hat{f}_t - \hat{f}_{t-1}\|_2^2 + \hat{h}_t^{\mathrm{T}} (\hat{f}_t - \hat{g}_t) + \frac{\gamma}{2} \|\hat{f}_t - \hat{g}_t\|_2^2    (formula eight)
where \hat{\cdot} denotes the discrete Fourier transform of the corresponding variable. In the above equation, the k-th vector of the correlation filter and of the training sample (each vector consisting of the elements of the D channels) generates the k-th element of the label. Letting p_k(\cdot) denote the k-th vector of the corresponding variable, the subproblem becomes:

\min_{p_k(\hat{f}_t)} \frac{1}{2}\big| p_k(\hat{x}_t)^{\mathrm{T}} p_k(\hat{f}_t) - \hat{y}_k \big|^2 + \frac{\mu}{2} \|p_k(\hat{f}_t) - p_k(\hat{f}_{t-1})\|_2^2 + p_k(\hat{h}_t)^{\mathrm{T}} \big(p_k(\hat{f}_t) - p_k(\hat{g}_t)\big) + \frac{\gamma}{2} \|p_k(\hat{f}_t) - p_k(\hat{g}_t)\|_2^2    (formula nine)
Setting the derivative of the above equation to zero yields:

p_k(\hat{f}_t) = \big( p_k(\hat{x}_t) p_k(\hat{x}_t)^{\mathrm{T}} + (\mu + \gamma) I \big)^{-1} q_k    (formula ten)
where q_k = \hat{y}_k \, p_k(\hat{x}_t) + \mu \, p_k(\hat{f}_{t-1}) - p_k(\hat{h}_t) + \gamma \, p_k(\hat{g}_t). Since p_k(\hat{x}_t) p_k(\hat{x}_t)^{\mathrm{T}} is a rank-1 matrix, the subproblem can be computed efficiently with the Sherman-Morrison formula:

p_k(\hat{f}_t) = \frac{1}{\mu + \gamma} \left( I - \frac{p_k(\hat{x}_t) p_k(\hat{x}_t)^{\mathrm{T}}}{\mu + \gamma + p_k(\hat{x}_t)^{\mathrm{T}} p_k(\hat{x}_t)} \right) q_k    (formula eleven)
Subproblem g_t: given f_t and h_t, an iterative shrinkage-thresholding step solves the subproblem. The global optimum is:

g_t^d = \frac{1}{\lambda_2 + \gamma} \, \sigma\big( \gamma f_t^d + h_t^d, \, \lambda_1 \big)    (formula twelve)

where \sigma(\cdot, \cdot) denotes the shrinkage (soft-thresholding) operator.
Subproblem h_t: given f_t and g_t, h_t is updated by the third equation of (formula seven), and the penalty factor is updated as follows:

\gamma^{(i)} = \max(\gamma_{\min}, \rho \gamma^{(i-1)})    (formula thirteen)

where \gamma_{\min} is the minimum value of \gamma, \rho is a scale factor, and \gamma^{(0)} = \gamma_{ini} when i = 1.
F. For frame t+1, test samples are constructed at several scales centered on the target located in frame t, and the learned correlation filter is applied to obtain the target state in frame t+1. The specific implementation is as follows: first, multi-scale regions centered on the previous frame's target position are cropped and their multi-channel features extracted, denoted z_{t+1}^{s}, s = 1, ..., S. Then the response map of the correlation filter at scale s is computed as:

R_s = F^{-1}\left( \sum_{d=1}^{D} \hat{z}_{t+1}^{s,d} \odot \hat{f}_t^d \right)    (formula fourteen)

where F and F^{-1} denote the FFT and inverse FFT and \odot denotes element-wise multiplication. Finally, the position and size of the target are determined by searching for the maximum value over the S response maps.
In step B, the regularization parameter \lambda_1 = 0.01.
In step C, the regularization parameter \lambda_2 = 3.0 \times 10^{-5}.
In step D, the temporal regularization parameter \mu = 20.
In step E, \gamma_{ini} = 1, \gamma_{\min} = 0.01, \rho = 0.01, and ADMM iterates 2 times.
In step F, S = 5.
Fig. 1 compares qualitative results of the embodiment of the present invention with other target tracking methods on the DragonBaby and Soccer sequences; the method of the invention tracks the target well throughout.
TABLE 1
[Table 1 appears as an image in the original document.]
Table 1 shows the precision and success rate on the OTB100 dataset compared with several other target tracking methods, where ADTCF (using hand-crafted features) and DeepADTCF (using CNN features) are the methods of the invention.
STRCF and DeepSTRCF correspond to the method proposed by Feng Li et al. (Feng Li, Cheng Tian, Wangmeng Zuo, Lei Zhang, Ming-Hsuan Yang. "Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking." CVPR 2018.);
ECO_HC and ECO correspond to the method proposed by Martin Danelljan et al. (Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, Michael Felsberg. "ECO: Efficient Convolution Operators for Tracking." CVPR 2017.);
CCOT corresponds to the method proposed by Martin Danelljan et al. (Martin Danelljan, Andreas Robinson, Fahad Khan, Michael Felsberg. "Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking." ECCV 2016.);
MCPF corresponds to the method proposed by Tianzhu Zhang et al. (Tianzhu Zhang, Changsheng Xu, Ming-Hsuan Yang. "Multi-Task Correlation Particle Filter for Robust Object Tracking." CVPR 2017.);
BACF corresponds to the method proposed by Hamed Kiani Galoogahi et al. (Hamed Kiani Galoogahi, Ashton Fagg, Simon Lucey. "Learning Background-Aware Correlation Filters for Visual Tracking." ICCV 2017.);
Staple_CA corresponds to the method proposed by Matthias Mueller et al. (Matthias Mueller, Neil Smith, Bernard Ghanem. "Context-Aware Correlation Filter Tracking." CVPR 2017 oral.).

Claims (6)

1. The adaptive feature selection and time consistency robust correlation filtering visual tracking method, characterized by comprising the following steps:
1) in the t-th frame, given the located target, a base sample is constructed from the target and its surrounding background; the training samples consist of all cyclic shifts of the base sample, with labels determined by a Gaussian function, and the regressor is trained as:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2    (formula one)

where * is the convolution operator, x_t and f_t are the training sample and filter of the t-th frame, y is the label determined by a Gaussian function, and D is the feature dimension;
2) since all features extracted from the sample, including discriminative features and interfering features, are used to train the regressor in step 1), a sparse constraint is introduced into the regressor of step 1) to select discriminative features while suppressing interfering ones, realizing pixel-level feature selection, so that the regressor with the sparse constraint is:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|f_t^d\|_1    (formula two)

where \|\cdot\|_1 denotes the \ell_1 norm and \lambda_1 is the corresponding regularization parameter;
3) in visual tracking, discriminative and interfering features are often concentrated in particular regions; to give the selected features this regional distribution, a squared \ell_2 constraint is introduced into the regressor of step 2) to improve its robustness, so that the regressor with the elastic-net constraint is:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|f_t^d\|_1 + \frac{\lambda_2}{2} \sum_{d=1}^{D} \|f_t^d\|_2^2    (formula three)

where \|\cdot\|_2 denotes the \ell_2 norm and \lambda_2 is the corresponding regularization parameter;
4) in visual tracking, the regressors of adjacent frames are temporally consistent; to fully exploit this property, a temporal consistency constraint is introduced into the regressor of step 3), so as to obtain a regressor with both elastic-net and temporal consistency constraints:

\min_{f_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|f_t^d\|_1 + \frac{\lambda_2}{2} \sum_{d=1}^{D} \|f_t^d\|_2^2 + \frac{\mu}{2} \|f_t - f_{t-1}\|_2^2    (formula four)

where f_{t-1} is the correlation filter learned in frame t-1 and \mu is the temporal regularization parameter;
5) to solve the regressor of step 4), an auxiliary variable g_t (with g_t^d = f_t^d, d = 1, ..., D) is introduced and the problem is decomposed by the alternating direction method of multipliers into 3 subproblems, each with a closed-form solution; a few iterations of optimization yield the solution of the regressor; the specific implementation is as follows: first, with the auxiliary variable, the problem to be optimized becomes:

\min_{f_t, g_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} x_t^d * f_t^d - y\Big\|_2^2 + \lambda_1 \sum_{d=1}^{D} \|g_t^d\|_1 + \frac{\lambda_2}{2} \sum_{d=1}^{D} \|g_t^d\|_2^2 + \frac{\mu}{2} \|f_t - f_{t-1}\|_2^2, \quad \text{s.t. } g_t^d = f_t^d    (formula five)
then the equality constraint is brought into the objective with the augmented Lagrange multiplier method:

L(f_t, g_t, h_t) = E(f_t, g_t) + h_t^{\mathrm{T}} (f_t - g_t) + \frac{\gamma}{2} \|f_t - g_t\|_2^2    (formula six)

where E(f_t, g_t) denotes the objective of formula five, h_t is the Lagrange multiplier, and \gamma is the penalty factor; since L(f_t, g_t, h_t) is convex, ADMM is used to alternately optimize the following subproblems:
f_t^{(i+1)} = \arg\min_{f} L(f, g_t^{(i)}, h_t^{(i)})
g_t^{(i+1)} = \arg\min_{g} L(f_t^{(i+1)}, g, h_t^{(i)})
h_t^{(i+1)} = h_t^{(i)} + \gamma \big(f_t^{(i+1)} - g_t^{(i+1)}\big)    (formula seven)
subproblem f_t: given g_t and h_t, Parseval's theorem transfers this subproblem from the spatial domain to the frequency domain:

\min_{\hat{f}_t} \frac{1}{2}\Big\|\sum_{d=1}^{D} \hat{x}_t^d \odot \hat{f}_t^d - \hat{y}\Big\|_2^2 + \frac{\mu}{2} \|\hat{f}_t - \hat{f}_{t-1}\|_2^2 + \hat{h}_t^{\mathrm{T}} (\hat{f}_t - \hat{g}_t) + \frac{\gamma}{2} \|\hat{f}_t - \hat{g}_t\|_2^2    (formula eight)
where \hat{\cdot} denotes the discrete Fourier transform of the corresponding variable; in the above equation, the k-th vector of the correlation filter and of the training sample generates the k-th element of the label; letting p_k(\cdot) denote the k-th vector of the corresponding variable, the subproblem becomes:

\min_{p_k(\hat{f}_t)} \frac{1}{2}\big| p_k(\hat{x}_t)^{\mathrm{T}} p_k(\hat{f}_t) - \hat{y}_k \big|^2 + \frac{\mu}{2} \|p_k(\hat{f}_t) - p_k(\hat{f}_{t-1})\|_2^2 + p_k(\hat{h}_t)^{\mathrm{T}} \big(p_k(\hat{f}_t) - p_k(\hat{g}_t)\big) + \frac{\gamma}{2} \|p_k(\hat{f}_t) - p_k(\hat{g}_t)\|_2^2    (formula nine)
setting the derivative of the above equation to zero yields:

p_k(\hat{f}_t) = \big( p_k(\hat{x}_t) p_k(\hat{x}_t)^{\mathrm{T}} + (\mu + \gamma) I \big)^{-1} q_k    (formula ten)
where q_k = \hat{y}_k \, p_k(\hat{x}_t) + \mu \, p_k(\hat{f}_{t-1}) - p_k(\hat{h}_t) + \gamma \, p_k(\hat{g}_t); since p_k(\hat{x}_t) p_k(\hat{x}_t)^{\mathrm{T}} is a rank-1 matrix, the subproblem is efficiently computed by the Sherman-Morrison formula:

p_k(\hat{f}_t) = \frac{1}{\mu + \gamma} \left( I - \frac{p_k(\hat{x}_t) p_k(\hat{x}_t)^{\mathrm{T}}}{\mu + \gamma + p_k(\hat{x}_t)^{\mathrm{T}} p_k(\hat{x}_t)} \right) q_k    (formula eleven)
subproblem g_t: given f_t and h_t, the subproblem is solved by an iterative shrinkage-thresholding step, whose global optimum is:

g_t^d = \frac{1}{\lambda_2 + \gamma} \, \sigma\big( \gamma f_t^d + h_t^d, \, \lambda_1 \big)    (formula twelve)

where \sigma(\cdot, \cdot) denotes the shrinkage (soft-thresholding) operator;
subproblem h_t: given f_t and g_t, h_t is updated by the 3rd equation of (formula seven), and the penalty factor is updated as follows:

\gamma^{(i)} = \max(\gamma_{\min}, \rho \gamma^{(i-1)})    (formula thirteen)

where \gamma_{\min} is the minimum value of \gamma, \rho is a scale factor, and \gamma^{(0)} = \gamma_{ini} when i = 1;
6) for frame t+1, test samples are constructed at several scales centered on the target located in frame t, and the learned correlation filter is applied to obtain the target state in frame t+1; the specific implementation is as follows: first, multi-scale regions centered on the previous frame's target position are cropped and their multi-channel features extracted, denoted z_{t+1}^{s}, s = 1, ..., S; then the response map of the correlation filter at scale s is computed as:

R_s = F^{-1}\left( \sum_{d=1}^{D} \hat{z}_{t+1}^{s,d} \odot \hat{f}_t^d \right)    (formula fourteen)

where F and F^{-1} denote the FFT and inverse FFT and \odot denotes element-wise multiplication; finally, the position and size of the target are determined by searching for the maximum value over the S response maps.
2. The adaptive feature selection and time consistency robust correlation filtering visual tracking method as claimed in claim 1, wherein in step 2), the regularization parameter \lambda_1 = 0.01.
3. The adaptive feature selection and time consistency robust correlation filtering visual tracking method as claimed in claim 1, wherein in step 3), the regularization parameter \lambda_2 = 3.0 \times 10^{-5}.
4. The adaptive feature selection and time consistency robust correlation filtering visual tracking method as claimed in claim 1, wherein in step 4), the temporal regularization parameter \mu = 20.
5. The adaptive feature selection and time consistency robust correlation filtering visual tracking method as claimed in claim 1, wherein in step 5), \gamma_{ini} = 1, \gamma_{\min} = 0.01, \rho = 0.01, and ADMM iterates 2 times.
6. The adaptive feature selection and time consistency robust correlation filtering visual tracking method as claimed in claim 1, wherein in step 6), S = 5.
CN201910019982.1A 2019-01-09 2019-01-09 Adaptive feature selection and time consistency robust correlation filtering visual tracking method Active CN109859241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910019982.1A CN109859241B (en) 2019-01-09 2019-01-09 Adaptive feature selection and time consistency robust correlation filtering visual tracking method


Publications (2)

Publication Number Publication Date
CN109859241A CN109859241A (en) 2019-06-07
CN109859241B true CN109859241B (en) 2020-09-18

Family

ID=66894280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910019982.1A Active CN109859241B (en) 2019-01-09 2019-01-09 Adaptive feature selection and time consistency robust correlation filtering visual tracking method

Country Status (1)

Country Link
CN (1) CN109859241B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378932B (en) * 2019-07-10 2023-05-12 上海交通大学 Correlation filtering visual tracking method based on spatial regularization correction
CN110428447B (en) * 2019-07-15 2022-04-08 杭州电子科技大学 Target tracking method and system based on strategy gradient
CN110533689A (en) * 2019-08-08 2019-12-03 河海大学 Core correlation filtering Method for Underwater Target Tracking based on space constraint adaptive scale
CN113066102A (en) * 2020-01-02 2021-07-02 四川大学 Correlation filtering tracking method combining adaptive spatial weight and distortion suppression
CN111967485B (en) * 2020-04-26 2024-01-05 中国人民解放军火箭军工程大学 Air-ground infrared target tracking method based on probability hypergraph learning
CN111862167B (en) * 2020-07-21 2022-05-10 厦门大学 Rapid robust target tracking method based on sparse compact correlation filter
CN113409357B (en) * 2021-04-27 2023-10-31 中国电子科技集团公司第十四研究所 Correlated filtering target tracking method based on double space-time constraints
CN113470074B (en) * 2021-07-09 2022-07-29 天津理工大学 Self-adaptive space-time regularization target tracking method based on block discrimination

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200237A (en) * 2014-08-22 2014-12-10 浙江生辉照明有限公司 High-speed automatic multi-target tracking method based on kernelized correlation filtering
CN107862705A (en) * 2017-11-21 2018-03-30 重庆邮电大学 UAV small-target detection method based on motion features and deep-learning features
CN108961312A (en) * 2018-04-03 2018-12-07 奥瞳系统科技有限公司 High-performance visual object tracking method and system for embedded vision systems

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034094B (en) * 2010-12-14 2012-11-07 浙江大学 Digital ball identification method based on sparse representation and discriminant analysis
CN103284694B (en) * 2013-05-22 2015-01-21 西安电子科技大学 Method for quantitative analysis for angiogenesis of living animals
CN104062968A (en) * 2014-06-10 2014-09-24 华东理工大学 Continuous chemical process fault detection method
CN105224918B (en) * 2015-09-11 2019-06-11 深圳大学 Gait recognition method based on bilinear joint sparse discriminant analysis
US10209081B2 (en) * 2016-08-09 2019-02-19 Nauto, Inc. System and method for precision localization and mapping
KR102366779B1 (en) * 2017-02-13 2022-02-24 한국전자통신연구원 System and method for tracking multiple objects

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Correlation Filter Learning Toward Peak Strength for Visual Tracking; Yao Sui et al.; IEEE Transactions on Cybernetics; April 2018; Vol. 48, No. 4; pp. 1290-1303 *
Learning Adaptive Discriminative Correlation Filters via Temporal Consistency Preserving Spatial Feature Selection for Robust Visual Object Tracking; Tianyang Xu et al.; arXiv:1807.11348v1; 30 July 2018; pp. 1-13 *
Target tracking algorithm based on feature selection and a temporally consistent sparse appearance model; Zhang Weidong et al.; Pattern Recognition and Artificial Intelligence; March 2018; Vol. 31, No. 3; pp. 245-255 *

Similar Documents

Publication Publication Date Title
CN109859241B (en) Adaptive feature selection and time consistency robust correlation filtering visual tracking method
CN111354017B (en) Target tracking method based on twin neural network and parallel attention module
CN111311647B (en) Global-local and Kalman filtering-based target tracking method and device
CN111582349B (en) Improved target tracking algorithm based on YOLOv3 and kernel correlation filtering
Lu et al. Learning transform-aware attentive network for object tracking
CN109543615B (en) Double-learning-model target tracking method based on multi-level features
CN110276784B (en) Correlation filtering moving target tracking method based on memory mechanism and convolution characteristics
CN110472577B (en) Long-term video tracking method based on adaptive correlation filtering
Wang et al. Visual object tracking with multi-scale superpixels and color-feature guided kernelized correlation filters
CN109410249B (en) Self-adaptive target tracking method combining depth characteristic and hand-drawn characteristic
CN109727272B (en) Target tracking method based on double-branch space-time regularization correlation filter
Zhu et al. Tiny object tracking: A large-scale dataset and a baseline
Yang et al. Visual tracking with long-short term based correlation filter
Sheng et al. Robust visual tracking via an improved background aware correlation filter
Du et al. Spatial–temporal adaptive feature weighted correlation filter for visual tracking
Ding et al. Machine learning model for feature recognition of sports competition based on improved TLD algorithm
CN108898621B (en) Related filtering tracking method based on instance perception target suggestion window
CN110555864A (en) self-adaptive target tracking method based on PSPCE
Huang et al. SVTN: Siamese visual tracking networks with spatially constrained correlation filter and saliency prior context model
Liang et al. Robust correlation filter tracking with shepherded instance-aware proposals
CN111862167A (en) Rapid robust target tracking method based on sparse compact correlation filter
CN116342653A (en) Target tracking method, system, equipment and medium based on correlation filter
CN113298136B (en) Twin network tracking method based on alpha divergence
CN113033356A (en) Scale-adaptive long-term correlation target tracking method
CN112598710A (en) Space-time correlation filtering target tracking method based on feature online selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant