CN110378932A - Correlation-filtering visual tracking method based on spatial regularization correction - Google Patents

Correlation-filtering visual tracking method based on spatial regularization correction

Info

Publication number
CN110378932A
CN110378932A (application CN201910620063.XA; granted as CN110378932B)
Authority
CN
China
Prior art keywords
correction
correlation filtering
target
tracking
regularization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910620063.XA
Other languages
Chinese (zh)
Other versions
CN110378932B (en)
Inventor
龙承念
杨招兵
邬晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201910620063.XA priority Critical patent/CN110378932B/en
Publication of CN110378932A publication Critical patent/CN110378932A/en
Application granted granted Critical
Publication of CN110378932B publication Critical patent/CN110378932B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20048 - Transform domain processing
    • G06T2207/20056 - Discrete and fast Fourier transform [DFT, FFT]
    • G06T2207/20081 - Training; Learning
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a correlation-filtering visual tracking method based on spatial regularization correction, comprising the following steps: step 1) extracting target features at the current-frame target position; step 2) computing a response score from the target features, locating the target position and obtaining the target scale; step 3) performing spatial regularization correction; step 4) model training and updating, in which features are re-extracted in the region around the newly predicted target position and input to the correlation-filtering framework, and a ridge-regression model is solved to obtain the correlation filter. The invention makes full use of the speed and accuracy of correlation-filtering tracking algorithms and of the role spatial regularization plays within the correlation-filtering framework, and accordingly proposes feedback correction of the spatial regularization penalty matrix: an optical-flow-tracking correction means is incorporated into the spatial-regularization correction framework, and, with timeliness in mind, an activation mechanism is proposed to control when the closed optical-flow correction loop runs, saving computing resources and accelerating tracking.

Description

Correlation-filtering visual tracking method based on spatial regularization correction
Technical field
The present invention relates to the cross-disciplinary field of machine learning and image processing, and in particular to a correlation-filtering visual tracking method based on spatial regularization correction.
Background technique
Visual target tracking is a key component of the computer vision field and is widely used in areas such as human-computer interaction and behavior recognition. The goal of visual target tracking is, given the target's position coordinates in the initial frame of a video sequence, to correctly and continuously estimate the target's position in all subsequent frames. Owing to the diversity of tracking scenes, target tracking remains a challenging task.
In recent years, visual tracking algorithms based on correlation filtering have achieved outstanding performance in the target-tracking domain. By applying a cyclic sliding-window operation to the training samples and transforming the computation into the Fourier domain, where it can be carried out quickly, the localization of the target can be accelerated.
But the method for correlation filtering traditional so is all the position prediction of an open loop, lacks correction mechanism, once Target is lost during tracking, then the calculation meal of correlation filtering is difficult to be further continued for keeping to target in next tracking Tracking.Also, it is such as blocking, in the tracking of complex scenes such as distortion and motion blur, target is to be easy to lose again 's.
In addition, to achieve real-time tracking, improvements to correlation-filtering algorithms must also take tracking speed into account.
In summary, although correlation-filtering tracking algorithms have made great progress, for reasons of robustness and real-time performance their tracking results in many complex scenes are still unsatisfactory, leaving room for improvement.
Summary of the invention
It is an object of the present invention to address the weak robustness caused by the open-loop nature of traditional correlation filtering, as well as its poor real-time performance, by providing a correlation-filtering visual tracking method based on spatial regularization correction. First, exploiting the ability of spatial regularization within the correlation-filtering framework to refine tracking results, a correlation-filtering framework based on spatial regularization correction is proposed; then the optical-flow tracking result is fused as the correction quantity for the spatial regularization, forming a closed-loop correction mechanism. In addition, an activation mechanism for the closed-loop correction and an improved ADMM solving algorithm are designed to increase the computational efficiency of tracking and raise the tracking speed, thereby solving the problems of the prior art.
The technical problem addressed by the invention is solved by the following technical scheme:
A correlation-filtering visual tracking method based on spatial regularization correction comprises the following steps:
step 1) extracting target features at the current-frame target position;
step 2) computing a response score from the target features, locating the target position and obtaining the target scale;
step 3) performing spatial regularization correction;
step 4) model training and updating: re-extracting features in the region around the newly predicted target position, inputting them to the correlation-filtering framework, and solving a ridge-regression model to obtain the correlation filter.
Further, step 1) is implemented as follows:
The extracted target features are HOG features; the HOG cell size is set to 4 and the feature dimension to 31. The extraction region is 2.5 times the region around the current-frame target-box position, the current-frame target position and region size being the position and size located in the previous frame.
Meanwhile, a multi-scale transform is applied to the currently extracted target features to form a multi-scale feature pyramid; the number of scales taken here is 5, the scale factor is set to 1.01, and the scale transform is realized by interpolation and down-sampling.
Further, step 2) is implemented as follows:
First, a windowing operation is applied to the multi-scale features extracted in step 1), i.e., an element-wise product is taken between a cosine-window matrix of identical size and the scale features;
then the features are converted from the spatial domain to the Fourier domain by the fast Fourier transform, and the windowed Fourier features are correlated with the correlation filter learned in the previous frame, yielding a response score over target positions;
finally, a multi-scale response heat map of the target position is obtained by the inverse Fourier transform, and by finding the spatial position with the maximum response score, the target center and the best-matching scale are located, serving as the prediction of the current-frame target position and scale.
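The windowing, Fourier-domain correlation and peak search of step 2) can be sketched as follows (single scale for clarity; a Hann window is assumed as the cosine window, and the names are illustrative):

```python
import numpy as np

def correlate_response(feat, filt):
    """Windowed Fourier-domain correlation: multiply the features by a
    cosine (Hann) window, correlate with the filter via the FFT, and
    return the location and value of the response peak.

    feat, filt: H x W x C real arrays of identical size.
    """
    h, w, _ = feat.shape
    # 2-D cosine window suppresses the boundary discontinuity of the
    # implicit circular shifts
    win = np.outer(np.hanning(h), np.hanning(w))[:, :, None]
    fz = np.fft.fft2(win * feat, axes=(0, 1))
    ff = np.fft.fft2(filt, axes=(0, 1))
    # circular cross-correlation, summed over feature channels
    resp = np.real(np.fft.ifft2((fz * np.conj(ff)).sum(axis=2)))
    peak = np.unravel_index(np.argmax(resp), resp.shape)
    return peak, resp[peak]
```

Running this over every level of the scale pyramid and taking the global maximum yields both the position and the best-matching scale, as described above.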
Further, step 3) uses an optical-flow tracking method, implemented as follows:
First, once the spatial-regularization correction condition is activated, the optical-flow features around the previous-frame target are extracted and the LK (Lucas-Kanade) optical-flow algorithm is used to predict the current-frame target position; then, from the position predicted by the optical-flow method, a Gaussian penalty matrix centered on that position is generated:
N(ρ_c[1], ρ_c[2], ησ², ησ², 1)   (1)
where (ρ_c[1], ρ_c[2]) is the target-position coordinate predicted by the optical-flow method and σ² denotes the variance, with σ set to 1.3; η is the confidence coefficient of the correlation-filtering tracking, determined in real time from the trend of the response scores: the larger η is, the higher the algorithm's confidence in the correlation-filtering result, and conversely the higher its confidence in the optical-flow result. η is defined by formula (2), where τ is a penalty factor adjusting η toward its desired value (default 20), ε is the correction threshold (default value 1/2), R_T is the current-frame maximum response score, and R_peak is the peak of the maximum response scores over all past historical frames. This means that when the response score falls below the threshold, the spatial-regularization correction mechanism is activated; otherwise there is no correction, and the tracking algorithm keeps the original correlation-filtering framework.
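The activation gate can be illustrated in code. The defining equation for η is an image not reproduced in this text, so the logistic form below is an assumption consistent with the stated roles of τ (default 20), ε (default 1/2), R_T and R_peak; the activation condition itself, a response score falling below the threshold, follows the prose directly:

```python
import math

def correction_gate(r_t, r_peak, eps=0.5, tau=20.0):
    """Decide whether to activate the spatial-regularization correction
    and compute a confidence weight eta for the correlation-filter result.

    r_t: current-frame maximum response score (R_T).
    r_peak: peak of the maximum response scores over all past frames.
    eta (assumed logistic here) rises toward 1 when the current response
    is close to the historical peak, i.e. the correlation filter is
    trusted; correction activates when the relative response drops
    below the threshold eps.
    """
    ratio = r_t / max(r_peak, 1e-12)
    activate = ratio < eps                       # response collapsed
    eta = 1.0 / (1.0 + math.exp(-tau * (ratio - eps)))
    return activate, eta
```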
Further, the correction matrix of the optical-flow tracking method affects the spatial-regularization penalty matrix in the original correlation-filtering tracker, so a correction fusion is required; the correction fusion is given by formula (3), where N(ρ[1], ρ[2], σ², σ², 1) denotes the Gaussian penalty matrix centered on the target position tracked by the original correlation filter, (ρ[1], ρ[2]) being that target-position coordinate, and Crop(·) is a size-normalizing cropping operation whose purpose is to make the penalty matrix the same size as the features.
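The penalty matrices and their fusion can be sketched as follows (illustrative assumptions: the Gaussian penalty is realized as one minus a Gaussian bump, i.e. low penalty at the predicted target and high penalty away from it, and the fusion formula, an image not reproduced here, is approximated by a convex combination weighted by the confidence η; both matrices are assumed already cropped to the feature size, which is the role of Crop()):

```python
import numpy as np

def gaussian_penalty(shape, center, sigma2):
    """Spatial penalty matrix built from a Gaussian centered at `center`:
    near-zero penalty at the predicted target position, rising toward 1
    with distance (the inverted-bump realization is an assumption)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = center
    bump = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma2))
    return 1.0 - bump

def fuse_penalties(w_cf, w_of, eta):
    """Correction fusion of the original spatial penalty matrix w_cf with
    the optical-flow penalty matrix w_of, weighted by the
    correlation-filter confidence eta (convex combination assumed)."""
    return eta * w_cf + (1.0 - eta) * w_of
```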
Further, the model training and update of step 4) is implemented as follows:
According to the current-frame target position obtained in step 2), the features of the target region are re-determined as the calibration features for correlation-filter training, and the spatial-regularization correction quantity from step 3) serves as the spatial-regularization penalty matrix in the training. The following loss function can thus be solved:

f_t = argmin_f ½ ‖ Σ_{d=1..D} x_t^d ∗ f^d − y ‖² + ½ Σ_{d=1..D} ‖ w̃ ⊙ f^d ‖² + (μ/2) ‖ f − f_{t−1} ‖²   (4)

learning the correlation filter f, where d indexes the feature channels and D is the total number of feature channels (31 here), x_t^d are the calibration features, y is the desired response, ½ Σ_d ‖ w̃ ⊙ f^d ‖² is the corrected spatial regularization term (w̃ being the corrected penalty matrix), ‖ f − f_{t−1} ‖² is a transient regularization term, and μ is the transient regularization coefficient, a learning parameter set to 15. The invention adopts ADMM iterations, so, introducing the auxiliary constraint g = f, formula (4) is rewritten as the augmented Lagrangian:

L(f, g, h) = ½ ‖ Σ_{d=1..D} x_t^d ∗ f^d − y ‖² + ½ Σ_{d=1..D} ‖ w̃ ⊙ g^d ‖² + (μ/2) ‖ f − f_{t−1} ‖² + (γ/2) ‖ f − g + h ‖²   (5)
where g and h are the auxiliary variables constituting the Lagrangian increment; the ADMM iterative algorithm then turns the above problem into the solution of three subproblems (an f-subproblem, a g-subproblem and the multiplier update of h), each with a closed-form solution, which completes the optimization.
Compared with the prior art, the beneficial effects of the present invention are:
1. The invention takes full advantage of the role of spatial regularization in the correlation-filtering tracking framework: the correction operation is introduced into the spatial regularization term and thereby influences the filter, further correcting the tracked position of the target. The traditionally open-loop correlation-filtering framework is thus turned into a closed loop, which can cope with tracking in numerous complex scenes and improves tracking robustness.
2. The invention uses HOG features for correlation-filter learning; these consume less computation than other, more complex features, which improves tracking efficiency and guarantees real-time performance. Meanwhile, the correction step performs optical-flow tracking on optical-flow features; precisely because these differ from HOG features, the added optical-flow tracking copes better with model drift caused by motion blur, complementing the overall tracking framework.
3. In the learning of the correlation filter, the invention uses an improved iterative ADMM solver, transforming the originally non-closed-form problem into three closed-form subproblems. This saves a large number of unnecessary iterations during model training and updating and yields a fast and effective solution, which also greatly benefits the overall tracking efficiency.
Compared with traditional correlation-filtering tracking algorithms and algorithms of other classes, the invention has excellent superiority in robustness and efficiency, and often achieves outstanding tracking performance in complex scenes.
Detailed description of the invention
Fig. 1 is the flow chart of the correlation-filtering visual tracking method based on spatial regularization correction of the present invention.
Fig. 2 is a pictorial diagram of the spatial regularization correction of the present invention.
Fig. 3 is a schematic diagram of the correction activation mechanism of the present invention.
Fig. 4 is a schematic comparison of the performance of common correlation-filtering visual tracking algorithms under the OPE condition.
Fig. 5 shows the tracking results of correlation filtering with and without the correction on three selected sequences (Biker, BlurBody, Deer).
Specific embodiment
In order to make the technical means, creative features, objects and effects achieved by the present invention easy to understand, the present invention is further explained below with reference to specific embodiments.
The present invention exploits the ability of spatial regularization within the correlation-filtering framework to refine tracking results, proposing a correlation-filtering framework based on spatial regularization correction; the optical-flow tracking result is then fused as the correction quantity for the spatial regularization, forming a closed-loop correction mechanism. In addition, an activation mechanism for the closed-loop correction and an improved ADMM solving algorithm are devised to increase the computational efficiency of tracking and raise the tracking speed.
The basic algorithm flow is shown in Fig. 1 and specifically comprises the following steps:
1. Extract target features.
2. Compute the response score from the features extracted in the current frame, locate the target position and obtain the target scale.
3. Perform spatial regularization correction.
4. Model training and update: re-extract features in the region around the newly predicted target position, input them to the correlation-filtering framework, and solve a ridge-regression model to obtain the correlation filter.
Repeat the above steps to track continuously.
Referring to the schematic diagram of Fig. 2, the specific steps can be described as follows:
S11: extract HOG target features; the HOG cell size is set to 4 and the feature dimension to 31.
S12: the extraction region is 2.5 times the region around the current-frame target box (rectangle); the current-frame target position and region size are the position and size located in the previous frame. Meanwhile, a multi-scale transform is applied to the currently extracted target features to form a multi-scale feature pyramid; the number of scales taken here is 5 and the scale factor is set to 1.01, the scale transform being realized by interpolation and down-sampling.
S21: first, a windowing operation is applied to the scale features extracted in step 1, i.e., an element-wise product is taken between a cosine-window matrix of identical size and the scale features; the features are then converted from the spatial domain to the Fourier domain by the fast Fourier transform (FFT).
S22: the windowed Fourier features are correlated with the correlation filter learned in the previous frame, yielding a response score over target positions; a multi-scale response heat map of the target position is obtained by the inverse Fourier transform. By finding the spatial position with the maximum response score, the target center and the best-matching scale are located, which serve as the prediction of the current-frame target position and scale.
S31: step 3 runs synchronously with step 2 using the optical-flow tracking method. First, once the spatial-regularization correction condition is activated, the optical-flow features around the previous-frame target are extracted, and the LK optical-flow algorithm is used to predict the current-frame target position.
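The LK prediction in S31 can be sketched as a single, non-pyramidal Lucas-Kanade step in plain NumPy (illustrative only; a practical tracker would use a pyramidal implementation such as OpenCV's calcOpticalFlowPyrLK, and the patch-based single-step formulation below is an assumption):

```python
import numpy as np

def lk_flow(prev_patch, next_patch):
    """One Lucas-Kanade step: estimate the translation (dx, dy) of
    next_patch relative to prev_patch under brightness constancy,
    by solving the 2x2 least-squares normal equations built from
    the image gradients."""
    iy, ix = np.gradient(prev_patch)      # spatial gradients (axis 0 = y)
    it = next_patch - prev_patch          # temporal gradient
    a11 = np.sum(ix * ix)
    a12 = np.sum(ix * iy)
    a22 = np.sum(iy * iy)
    b1 = -np.sum(ix * it)
    b2 = -np.sum(iy * it)
    det = a11 * a22 - a12 * a12           # assumes a well-textured patch
    dx = (a22 * b1 - a12 * b2) / det
    dy = (a11 * b2 - a12 * b1) / det
    return dx, dy
```

Shifting the previous-frame target position by the estimated (dx, dy) gives the optical-flow prediction used to build the Gaussian penalty matrix in S32.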
S32: from the position predicted by the optical-flow method, a Gaussian penalty matrix centered on that position is generated:
N(ρ_c[1], ρ_c[2], ησ², ησ², 1)   (1)
where (ρ_c[1], ρ_c[2]) is the target-position coordinate predicted by the optical-flow method and σ² denotes the variance, with σ set to 1.3. η is the confidence coefficient of the correlation-filtering tracking, determined in real time from the trend of the response scores: the larger η is, the higher the algorithm's confidence in the correlation-filtering result, and conversely the higher its confidence in the optical-flow result. η is defined by formula (2), where τ is a penalty factor adjusting η toward its desired value (default 20), ε is the correction threshold (default value 1/2), R_T is the current-frame maximum response score, and R_peak is the peak of the maximum response scores over all past historical frames. As shown in Fig. 3, the current response score of each frame is compared with the response scores of the preceding historical frames; when the response score falls below the threshold, the spatial-regularization correction mechanism is activated, and otherwise there is no correction and the tracking algorithm keeps the original correlation-filtering framework.
S33: the optical-flow correction matrix affects the spatial-regularization penalty matrix in the original correlation-filtering tracker, so the two are fused; the fusion is given by formula (3), where N(ρ[1], ρ[2], σ², σ², 1) denotes the Gaussian penalty matrix centered on the target position tracked by the original correlation filter, (ρ[1], ρ[2]) being that target-position coordinate, and Crop(·) is a size-normalizing cropping operation whose purpose is to make the penalty matrix the same size as the features.
S41: according to the current-frame target position obtained in step 2, the features of the target region are re-determined as the calibration features for correlation-filter training, and the spatial-regularization correction quantity from step 3 serves as the spatial-regularization penalty matrix in the training; the loss function of formula (4) is then solved to learn the correlation filter f, where d indexes the feature channels and D is the total number of feature channels (31 here). In addition, the loss contains the corrected spatial regularization term and a transient regularization term ‖ f − f_{t−1} ‖², μ being the transient regularization coefficient, a learning parameter set to 15.
S42: to solve the loss function, the invention adopts ADMM iterations, so formula (4) is first rewritten in augmented-Lagrangian form as formula (5),
in which g and h are the auxiliary variables constituting the Lagrangian increment; the ADMM iterative algorithm then turns the above problem into the solution of three closed-form subproblems.
Then, through successive iterations, the correlation filter is solved and used to locate the target in the next frame.
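The ADMM solution of S42 can be sketched in a simplified single-channel form (the patent's subproblem formulas are images not reproduced in this text; the splitting below, a closed-form Fourier-domain f-update, an element-wise spatial g-update absorbing the spatial penalty, and a dual update of h, follows the standard STRCF-style scheme that the description implies, and the penalty parameter gamma is an assumed choice):

```python
import numpy as np

def admm_filter(x, y, w, f_prev, mu=15.0, gamma=10.0, iters=50):
    """Solve  min_f 1/2||x * f - y||^2 + 1/2||w . f||^2 + mu/2||f - f_prev||^2
    (x * f: circular convolution; w . f: element-wise product) by ADMM
    with the splitting g = f: the data and temporal terms give a
    closed-form f-update in the Fourier domain, the spatial penalty a
    closed-form g-update per pixel, and h is the scaled dual variable.
    x, y, w, f_prev: 2-D real arrays of identical shape.
    """
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    Fp = np.fft.fft2(f_prev)
    g = np.zeros_like(x)
    h = np.zeros_like(x)
    for _ in range(iters):
        # f-subproblem: element-wise closed form in the Fourier domain
        F = ((np.conj(X) * Y + mu * Fp + gamma * np.fft.fft2(g - h))
             / (np.abs(X) ** 2 + mu + gamma))
        f = np.real(np.fft.ifft2(F))
        # g-subproblem: element-wise closed form in the spatial domain
        g = gamma * (f + h) / (w ** 2 + gamma)
        # dual (Lagrange multiplier) update
        h = h + f - g
    return f
```

Because every update is closed-form, only a few tens of iterations are needed, which is the source of the speed-up claimed for the improved ADMM solver.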
Thus, under the OPE condition of the OTB100 dataset, Fig. 4 compares the algorithm of the invention with several other outstanding correlation-filtering tracking algorithms such as KCF, ECO-HC and STRCF, showing that the algorithm of the invention outperforms them in both robustness and accuracy.
Fig. 5 compares the correlation filtering with the proposed correction mechanism added against the same correlation filtering without it, and shows that in scenes exhibiting motion blur or image distortion the correction mechanism effectively helps the correlation filter handle tracking more robustly.
In general, compared with traditional correlation-filtering tracking algorithms and algorithms of other classes, the invention has excellent superiority in both robustness and efficiency, and often achieves outstanding tracking performance in complex scenes.
The above shows and describes the basic principles, main features and advantages of the present invention. Those skilled in the art should understand that the present invention is not limited to the above embodiments; the above embodiments and description only illustrate the principle of the invention, and various changes and improvements may be made to the invention without departing from its spirit and scope, all of which fall within the protection scope of the claimed invention. The scope claimed by the invention is defined by the appended claims and their equivalents.

Claims (6)

1. A correlation-filtering visual tracking method based on spatial regularization correction, characterized by comprising the following steps:
step 1) extracting target features at the current-frame target position;
step 2) computing a response score from the target features, locating the target position and obtaining the target scale;
step 3) performing spatial regularization correction;
step 4) model training and updating: re-extracting features in the region around the newly predicted target position, inputting them to the correlation-filtering framework, and solving a ridge-regression model to obtain the correlation filter.
2. The correlation-filtering visual tracking method based on spatial regularization correction according to claim 1, characterized in that step 1) is implemented as follows:
the extracted target features are HOG features, with the HOG cell size set to 4 and the feature dimension set to 31; the extraction region is 2.5 times the region around the current-frame target-box position, the current-frame target position and region size being the position and size located in the previous frame;
meanwhile, a multi-scale transform is applied to the currently extracted target features to form a multi-scale feature pyramid; the number of scales taken here is 5, the scale factor is set to 1.01, and the scale transform is realized by interpolation and down-sampling.
3. The correlation-filtering visual tracking method based on spatial regularization correction according to claim 1, characterized in that step 2) is implemented as follows:
first, a windowing operation is applied to the multi-scale features extracted in step 1), i.e., an element-wise product is taken between a cosine-window matrix of identical size and the scale features;
then the features are converted from the spatial domain to the Fourier domain by the fast Fourier transform, and the windowed Fourier features are correlated with the correlation filter learned in the previous frame, yielding a response score over target positions;
finally, a multi-scale response heat map of the target position is obtained by the inverse Fourier transform, and by finding the spatial position with the maximum response score, the target center and the best-matching scale are located, serving as the prediction of the current-frame target position and scale.
4. The correlation-filtering visual tracking method based on spatial regularization correction according to claim 1, characterized in that step 3) uses an optical-flow tracking method, implemented as follows:
first, once the spatial-regularization correction condition is activated, the optical-flow features around the previous-frame target are extracted and the LK optical-flow algorithm is used to predict the current-frame target position; then, from the position predicted by the optical-flow method, a Gaussian penalty matrix centered on that position is generated:
N(ρ_c[1], ρ_c[2], ησ², ησ², 1)   (1)
where (ρ_c[1], ρ_c[2]) is the target-position coordinate predicted by the optical-flow method and σ² denotes the variance, with σ set to 1.3; η is the confidence coefficient of the correlation-filtering tracking, determined in real time from the trend of the response scores: the larger η is, the higher the algorithm's confidence in the correlation-filtering result, and conversely the higher its confidence in the optical-flow result; η is defined by formula (2), where τ is a penalty factor adjusting η toward its desired value (default 20), ε is the correction threshold (default value 1/2), R_T is the current-frame maximum response score, and R_peak is the peak of the maximum response scores over all past historical frames; this means that when the response score falls below the threshold, the spatial-regularization correction mechanism is activated, and otherwise there is no correction and the tracking algorithm keeps the original correlation-filtering framework.
5. The correlation-filtering visual tracking method based on spatial regularization correction according to claim 4, characterized in that the correction matrix of the optical-flow tracking method affects the spatial-regularization penalty matrix in the original correlation-filtering tracker, so that a correction fusion is required; the correction fusion is given by formula (3),
where N(ρ[1], ρ[2], σ², σ², 1) denotes the Gaussian penalty matrix centered on the target position tracked by the original correlation filter, (ρ[1], ρ[2]) being that target-position coordinate, and Crop(·) is a size-normalizing cropping operation whose purpose is to make the penalty matrix the same size as the features.
6. The correlation-filtering visual tracking method based on spatial regularization correction according to claim 1, characterized in that the model training and update of step 4) is implemented as follows:
according to the current-frame target position obtained in step 2), the features of the target region are re-determined as the calibration features for correlation-filter training, and the spatial-regularization correction quantity from step 3) serves as the spatial-regularization penalty matrix in the training; the loss function of formula (4) can thus be solved, learning the correlation filter f, where d indexes the feature channels and D is the total number of feature channels (31 here); in addition, the loss contains the corrected spatial regularization term and a transient regularization term ‖ f − f_{t−1} ‖², μ being the transient regularization coefficient, a learning parameter set to 15; ADMM iteration is adopted, so formula (4) is rewritten in augmented-Lagrangian form as formula (5),
in which g and h are the auxiliary variables constituting the Lagrangian increment; the ADMM iterative algorithm then turns the above problem into the solution of three closed-form subproblems, completing the solution;
finally, through successive iterations, the correlation filter is solved and used to locate the target in the next frame.
CN201910620063.XA 2019-07-10 2019-07-10 Correlation filtering visual tracking method based on spatial regularization correction Active CN110378932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910620063.XA CN110378932B (en) 2019-07-10 2019-07-10 Correlation filtering visual tracking method based on spatial regularization correction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910620063.XA CN110378932B (en) 2019-07-10 2019-07-10 Correlation filtering visual tracking method based on spatial regularization correction

Publications (2)

Publication Number Publication Date
CN110378932A true CN110378932A (en) 2019-10-25
CN110378932B CN110378932B (en) 2023-05-12

Family

ID=68250926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910620063.XA Active CN110378932B (en) 2019-07-10 2019-07-10 Correlation filtering visual tracking method based on spatial regularization correction

Country Status (1)

Country Link
CN (1) CN110378932B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260686A * 2020-01-09 2020-06-09 Binzhou University Anti-occlusion multi-feature-fusion target tracking method and system with an adaptive cosine window
CN111639815A * 2020-06-02 2020-09-08 Guizhou Power Grid Co., Ltd. Method and system for predicting power-grid defect materials through multi-model fusion
CN113409357A * 2021-04-27 2021-09-17 The 14th Research Institute of China Electronics Technology Group Corporation Correlation-filtering target tracking method based on dual spatio-temporal constraints

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680119A * 2017-09-05 2018-02-09 Yanshan University Tracking algorithm based on spatio-temporal context, multi-feature fusion and a scale filter
CN108346159A * 2018-01-28 2018-07-31 Beijing University of Technology Visual target tracking method based on tracking-learning-detection
CN108734723A * 2018-05-11 2018-11-02 Jiangnan University Correlation-filtering target tracking method based on adaptively weighted joint learning
CN109670410A * 2018-11-29 2019-04-23 Kunming University of Science and Technology Long-term moving-target tracking method based on multi-feature fusion
CN109859241A * 2019-01-09 2019-06-07 Xiamen University Robust correlation-filtering visual tracking method with adaptive feature selection and temporal consistency
CN109858415A * 2019-01-21 2019-06-07 Southeast University Kernelized correlation-filter target tracking method suitable for pedestrian following by a mobile robot

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680119A (en) * 2017-09-05 2018-02-09 Yanshan University A tracking algorithm based on spatio-temporal context fusing multiple features and a scale filter
CN108346159A (en) * 2018-01-28 2018-07-31 Beijing University of Technology A visual target tracking method based on tracking-learning-detection
CN108734723A (en) * 2018-05-11 2018-11-02 Jiangnan University A correlation filtering target tracking method based on adaptive weighted joint learning
CN109670410A (en) * 2018-11-29 2019-04-23 Kunming University of Science and Technology A long-term moving target tracking method based on multi-feature fusion
CN109859241A (en) * 2019-01-09 2019-06-07 Xiamen University Adaptive feature selection and time consistency robust correlation filtering visual tracking method
CN109858415A (en) * 2019-01-21 2019-06-07 Southeast University Kernelized correlation filter target tracking suitable for mobile robot pedestrian following

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FENG LI et al.: "Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
ZHU MINGMIN et al.: "Long-term visual object tracking method based on correlation filters", Journal of Computer Applications *
YANG YEMEI: "Moving object detection based on improved optical flow", Computer and Digital Engineering *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260686A (en) * 2020-01-09 2020-06-09 Binzhou University Anti-occlusion multi-feature fusion target tracking method and system with an adaptive cosine window
CN111260686B (en) * 2020-01-09 2023-11-10 Binzhou University Anti-occlusion multi-feature fusion target tracking method and system with an adaptive cosine window
CN111639815A (en) * 2020-06-02 2020-09-08 Guizhou Power Grid Co., Ltd. Method and system for predicting power grid defect materials through multi-model fusion
CN111639815B (en) * 2020-06-02 2023-09-05 Guizhou Power Grid Co., Ltd. Method and system for predicting power grid defect materials through multi-model fusion
CN113409357A (en) * 2021-04-27 2021-09-17 The 14th Research Institute of China Electronics Technology Group Corporation Correlation filtering target tracking method based on dual spatio-temporal constraints
CN113409357B (en) * 2021-04-27 2023-10-31 The 14th Research Institute of China Electronics Technology Group Corporation Correlation filtering target tracking method based on dual spatio-temporal constraints

Also Published As

Publication number Publication date
CN110378932B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
Shen et al. Visual object tracking by hierarchical attention siamese network
Huang et al. Learning policies for adaptive tracking with deep feature cascades
WO2021139484A1 (en) Target tracking method and apparatus, electronic device, and storage medium
Li et al. Robust visual tracking based on convolutional features with illumination and occlusion handing
CN110378932A (en) Correlation filtering visual tracking method based on spatial regularization correction
CN109859241B (en) Adaptive feature selection and time consistency robust correlation filtering visual tracking method
CN107958479A (en) A mobile terminal 3D face augmented reality implementation method
Li et al. Online multi-expert learning for visual tracking
CN101968846A (en) Face tracking method
CN104240217B (en) Binocular camera image depth information acquisition method and device
CN112561973A (en) Method and device for training image registration model and electronic equipment
CN105844665A (en) Method and device for tracking video object
CN105654518B (en) An adaptive tracking template method
CN112258557B (en) Visual tracking method based on space attention feature aggregation
Wang et al. Hierarchical spatiotemporal context-aware correlation filters for visual tracking
CN109726675A (en) A mobile robot SLAM loop closure detection method based on the K-center algorithm
CN111402303A (en) Target tracking architecture based on KFSTRCF
CN114036969A (en) A 3D human body action recognition algorithm under multi-view conditions
CN116580151A (en) Human body three-dimensional model construction method, electronic equipment and storage medium
Chen et al. Correlation filter tracking via distractor-aware learning and multi-anchor detection
CN108416800A (en) Method for tracking target and device, terminal, computer readable storage medium
Wang et al. Dynamic siamese network with adaptive Kalman filter for object tracking in complex scenes
Zhang et al. Learning adaptive target-and-surrounding soft mask for correlation filter based visual tracking
CN110211150A (en) A real-time visual target recognition method with a scale coordination mechanism
CN116385482A (en) Intelligent tracking method and device for a moving object using a pan-tilt camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant