CN106570893A - Rapid stable visual tracking method based on correlation filtering - Google Patents


Info

Publication number
CN106570893A
CN106570893A
Authority
CN
China
Prior art keywords
target
model
target area
carried out
method based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610943999.2A
Other languages
Chinese (zh)
Inventor
安玮
胡庆拥
郭裕兰
林再平
邓新蒲
程洪玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN201610943999.2A priority Critical patent/CN106570893A/en
Publication of CN106570893A publication Critical patent/CN106570893A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention discloses a fast and robust visual tracking method based on correlation filtering. Using the local temporal and spatial information of the image, the method predicts and evaluates the likely position of the target in the next frame by means of convolution and frequency-domain filtering, and is therefore fast, simple and robust. The sampling process is approximated by cyclic shifts of the target region (i.e. dense sampling): a block-circulant matrix is constructed and transformed to the frequency domain so that all sample information is computed simultaneously. This greatly reduces the complexity of the algorithm without discarding any of the original information, yielding higher accuracy.

Description

A fast and robust visual tracking method based on correlation filtering
Technical field
The present invention relates to a fast and robust visual tracking method based on correlation filtering.
Background technology
In recent years, visual tracking systems have been widely applied in fields such as video surveillance, security, traffic control and human-computer interaction, and have become an important tool of modern management and intelligent monitoring. A visual tracking system continuously performs motion detection, extraction, recognition and analysis on an image sequence, obtaining parameters of the target object such as position, velocity, scale and trajectory. Based on the tracking results, further processing and behavior analysis can be carried out to understand the behavior of the target or to accomplish higher-level tasks.
However, existing tracking systems generally rely on fixed-template matching or mean-shift vectors to track the target. When the appearance of the target changes drastically (e.g. occlusion, deformation or rotation), tracking is easily lost. Moreover, achieving real-time tracking with these methods comes at a high computational cost.
Most traditional detection-based trackers train a classifier on positive and negative samples obtained by "sparse" sampling: positive samples are taken around the target (by offsetting the initial target box by a few pixels) and negative samples are taken from a wider region. However, the sampled patches overlap heavily (some regions are recomputed many times), which greatly increases the amount of computation; at the same time, because the number of samples is limited, sparse sampling discards much prior information, so tracking accuracy also suffers.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art by providing a fast and robust visual tracking method based on correlation filtering.
To solve the above technical problem, the technical solution adopted by the present invention is a fast and robust visual tracking method based on correlation filtering, comprising the following steps:
1) extracting the target region from the current frame image (the target region of the first frame is set manually; the target region of each subsequent frame is estimated from the preceding historical data), applying cyclic shifts to it to produce training samples, and building a block-circulant matrix and a Gaussian regression label;
2) performing feature extraction on all training samples and fusing the features, and training a regression model based on structural risk minimization;
3) performing multi-scale sampling on the target region and training a scale filter model;
4) when a new frame image is input, correlating it with the regression model using the kernel trick to perform fast target detection and determine the target center position;
5) performing multi-scale sampling around the detected target position, extracting features from the resulting image blocks and convolving them with the scale filter, the location of the maximum response giving the optimal scale of the tracked target;
6) updating the regression model and the scale filter with a learning algorithm.
Step 1) is implemented as follows: for the input image of the current frame, features are first extracted from the target region. For a grayscale image, the histogram of oriented gradients and the local transform histogram feature are extracted and fused; for a color image, the histogram of oriented gradients and a color feature are extracted and fused.
Step 2) is implemented as follows: the target region is first extracted from the current frame image; cyclic shifts centered on the target region then produce the training samples, from which a block-circulant matrix is built; a Gaussian regression label centered on the target region is constructed; finally, the regression model is trained on the training samples based on structural risk minimization.
In step 6), the formula for updating the regression model and the scale filter model is:
current model = model of the previous frame + learning rate × model obtained from the current frame;
where "model" refers to either the regression model or the scale filter model.
The learning rate can be tuned; for example, it may be set to 0.2.
Compared with the prior art, the advantages of the present invention are as follows. The present invention neither imposes the strict system requirements of Kalman filtering nor suffers from the "sample impoverishment" caused by particle degeneracy in particle filters. It mainly relies on the local temporal and spatial information of the image, using convolution and frequency-domain filtering to predict and evaluate the likely position of the target in the next frame, so it is fast, simple and robust. By applying cyclic shifts to the target region, the present invention approximates the sampling process (i.e. dense sampling), builds a block-circulant matrix and transforms it to the frequency domain so that all sample information is computed simultaneously. This greatly reduces the complexity of the algorithm while retaining all of the original information, thereby achieving higher accuracy.
Description of the drawings
Fig. 1(a) shows random sampling; Fig. 1(b) shows dense sampling;
Fig. 2(a) illustrates extracting the target region from the current frame; Fig. 2(b) illustrates building the block-circulant matrix; Fig. 2(c) illustrates building the Gaussian regression label;
Fig. 3 illustrates multi-scale sampling of the target region and training of the scale filter;
Fig. 4 is the flowchart of the complete tracking process.
Specific embodiment
The technical scheme of the present invention is a fast and robust visual tracking method based on correlation filtering, which specifically comprises the following steps:
Step 1: extract features from the target region and fuse them.
For the input image of the current frame, features are first extracted from the target region. For a grayscale image, the histogram of oriented gradients (HOG) and the local transform histogram feature are extracted and fused; for a color image, the histogram of oriented gradients (HOG) and a color feature are extracted and fused.
Step 2: extract the target region from the current frame, apply cyclic shifts to produce training samples, and build the block-circulant matrix and the Gaussian regression label. As shown in Fig. 2(a)~Fig. 2(c), the target region is first extracted from the current frame; cyclic shifts centered on the target region then produce the training samples, from which the block-circulant matrix is built, as shown in Fig. 2(b); a Gaussian regression label centered on the target region is then constructed, as shown in Fig. 2(c). Finally, the regression model is trained on the samples based on structural risk minimization.
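As a concrete illustration of this step, the cyclic-shift sampling and the Gaussian regression label can be sketched in a few lines of numpy (an illustrative reconstruction, not the patent's implementation; the patch size and `sigma` are arbitrary choices):

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Gaussian regression label centred on the patch, with the peak then
    moved to the top-left corner as the description specifies."""
    ys = np.arange(h) - h // 2
    xs = np.arange(w) - w // 2
    Y, X = np.meshgrid(ys, xs, indexing="ij")
    g = np.exp(-(X ** 2 + Y ** 2) / (2.0 * sigma ** 2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def dense_samples(x):
    """All cyclic shifts of a 1-D patch x: the rows of a circulant matrix.
    In 2-D the shifts run over both axes, giving a block-circulant matrix."""
    n = len(x)
    return np.stack([np.roll(x, i) for i in range(n)])

g = gaussian_label(8, 8)            # label: peak 1.0 at the corner (0, 0)
X = dense_samples(np.arange(4.0))   # 4 x 4 circulant matrix of shifts
```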
Step 3: perform multi-scale sampling on the target region and train the scale filter. Taking the center point of the target region as the reference, multi-scale samples of the target serve as training samples, and the scale filter is trained so that its desired output reaches the maximum response at the optimal scale, as shown in Fig. 3.
Step 4: when a new frame is input, perform fast target detection and determine the target position. After a new frame is input, the previous target region is extracted and cyclically shifted to build a block-circulant matrix, which is correlated with the regression model; the location of the maximum response gives the displacement of the target center.
Step 5: once the target center has been determined in step 4, perform multi-scale sampling around that position, extract features from the resulting image blocks and convolve them with the scale filter; the location of the maximum response gives the optimal scale.
Step 6: update the regression model and the scale filter with a learning algorithm, namely:
current model = model of the previous frame + learning rate × model obtained from the current frame
The basic procedure of the present invention is to train the regression model and the scale filter using the current frame and its history, use them to predict the position and scale of the target in the next frame, then update the regression model and the scale filter, and repeat this process over the image sequence. Fig. 4 is the flowchart of the complete tracking process. A specific embodiment of the invention is as follows:
Step 1: extract features from the target region and fuse them.
For the input image of the current frame, features are first extracted from the target region. For a grayscale image, the histogram of oriented gradients (HOG) and the local transform histogram feature are extracted and fused; for a color image, the histogram of oriented gradients (HOG) and a color feature are extracted and fused.
a) The histogram of oriented gradients (HOG) is a feature descriptor used for object detection in computer vision and image processing. It forms the feature by computing and accumulating histograms of gradient orientations over local regions of the image. For the extraction procedure, see: Dalal N, Triggs B. Histograms of oriented gradients for human detection. CVPR 2005, 1:886-893.
b) The local transform histogram addresses the excessive sensitivity of HOG features to strong illumination changes. A local histogram of the 8-bit luminance channel is extracted over 6×6 regions and passed through a rank transform to obtain the local transform histogram. For the extraction procedure, see: Zabih R, Woodfill J. Non-parametric local transforms for computing visual correspondence. ECCV 1994: 151-158.
c) The color feature compensates for the color information lost during HOG extraction. Using the mapping of Van de Weijer J, Schmid C, Verbeek J, et al. Learning color names for real-world applications. IEEE Transactions on Image Processing, 2009, 18(7):1512-1523, an RGB value can be converted into an 11-dimensional probability distribution over semantic color names.
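The rank transform underlying the local transform histogram is simple to state in code. A minimal numpy sketch (the 3×3 window radius and edge handling here are illustrative assumptions, not the patent's exact settings): each pixel is replaced by the number of neighbours in its window that are strictly darker, a quantity that depends only on intensity ordering and is therefore unchanged by any monotone illumination change:

```python
import numpy as np

def rank_transform(img, radius=1):
    """Rank transform: each pixel becomes the count of neighbours in its
    (2r+1) x (2r+1) window that are strictly darker than the centre pixel.
    The result depends only on intensity ordering, hence it is robust to
    strong (monotone) illumination changes."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    padded = np.pad(img, radius, mode="edge")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            nb = padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            out += (nb < img).astype(np.int32)
    return out

img = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
r = rank_transform(img)   # centre pixel 5 has four darker neighbours
```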
Step 2: extract the target region from the current frame, apply cyclic shifts to produce training samples, and build the block-circulant matrix and the Gaussian regression label. The target region is first extracted from the current frame; cyclic shifts centered on it produce the training samples, from which the block-circulant matrix is built; a Gaussian regression label centered on the target region is then constructed; finally, the regression model is trained on the samples based on structural risk minimization.
First, with the target center as the origin, a Gaussian regression label that decays from the origin toward the periphery is built according to y(m, n) = exp(−(m² + n²)/(2σ²)), and its peak is moved to the top-left corner of the target region. The regression model is then trained based on structural risk minimization: the goal of training is to find the optimal decision function f(z) = wᵀz that minimizes the structured risk, i.e.
min_w Σᵢ (f(xᵢ) − yᵢ)² + λ‖w‖²,
where λ is the regularization term responsible for controlling over-fitting, xᵢ are the training samples and yᵢ the desired outputs. w has the closed-form solution
w = (XᴴX + λI)⁻¹Xᴴy,
where X is the data matrix in which each row represents one training sample xᵢ, and I is the identity matrix. Solving for w in this way requires a matrix inversion, in general a series of linear systems, with computational complexity O(n³). In applications that require real-time computation, direct solution therefore often fails because the amount of calculation is too large. But if the data matrix X is block-circulant, i.e. each row of X is a cyclic shift of the generating sample x in the first row, then X has many excellent properties, the most important being that it can be diagonalized as
X = F · diag(x̂) · Fᴴ,
where F is a constant (DFT) matrix that does not depend on X, and x̂ is the discrete Fourier transform of the generating vector x. Substituting this decomposition into the closed-form solution above, and using the facts that FᴴF = I and that a product of diagonal matrices reduces to the element-wise product (denoted ⊙) of their diagonals, gives
ŵ = (x̂* ⊙ ŷ) / (x̂* ⊙ x̂ + λ),
in which the term x̂* ⊙ x̂ in the denominator is the auto-correlation, i.e. the power spectrum, of the signal x. Only one inverse discrete Fourier transform is then needed to recover the value of w. By comparison, the computational complexity of this frequency-domain calculation using the circulant matrix is O(n log n), against O(n³) for the original formula, so the complexity drops sharply and the tracking speed rises accordingly.
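The complexity claim can be checked numerically. The sketch below (numpy, illustrative) builds a circulant data matrix from one generating sample, solves ridge regression once directly and once entirely in the frequency domain, and confirms the two solutions agree. Whether the numerator carries x̂ or its conjugate depends on the direction chosen for the cyclic shift; with `np.roll`'s convention the unconjugated form is the one that matches:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 8, 0.1
x = rng.standard_normal(n)                       # generating sample (first row)
y = rng.standard_normal(n)                       # regression targets
X = np.stack([np.roll(x, i) for i in range(n)])  # circulant data matrix

# direct closed form w = (X^T X + lam*I)^{-1} X^T y: O(n^3)
w_naive = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# frequency-domain closed form: O(n log n); with np.roll's shift direction
# the numerator is x-hat (its conjugate appears for the opposite direction)
xf, yf = np.fft.fft(x), np.fft.fft(y)
w_fft = np.fft.ifft(xf * yf / (np.conj(xf) * xf + lam)).real

assert np.allclose(w_naive, w_fft)
```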
● Nonlinear regression
For nonlinear regression, a common approach is the kernel method: a kernel function maps the samples, which are linearly inseparable in the low-dimensional space, into a high-dimensional space where they become linearly separable. The kernel method consists of the following parts.
w is expressed as a linear combination of the (mapped) samples,
w = Σᵢ αᵢ φ(xᵢ),
which converts the solution for w in the original space into the solution for α in the dual space. The decision function can then be expressed as
f(z) = Σᵢ αᵢ κ(z, xᵢ).
At the same time, the kernel evaluations can be written as a kernel matrix K with entries Kᵢⱼ = κ(xᵢ, xⱼ).
The advantage of the kernel function is that it makes the originally inseparable data linearly separable, but that is also exactly its weakness: the computational complexity after projection into the high-dimensional space rises exponentially, easily causing the "curse of dimensionality", which is why kernel methods have long been confined to limited settings.
For ridge regression, the closed-form solution of the kernelized version is
α = (K + λI)⁻¹y,
where K denotes the kernel matrix and α the vector of all αᵢ, i.e. the target solution in the dual space. We find that when K satisfies the circulant property, the formula above can likewise be diagonalized to obtain a fast solution, exactly as in the linear case. As stated in Theorem 1, K is guaranteed to be circulant whenever the following condition holds.
Theorem 1: For any circulant data matrix C(x) and any permutation matrix M, if the kernel function satisfies κ(x, x′) = κ(Mx, Mx′), then the corresponding kernel matrix also has circulant structure.
This implies that, for the data matrix to keep its circulant structure after the kernel transformation, the kernel function must treat the different data dimensions equivalently. The following kernel functions meet this condition:
● Radial basis kernel function (such as gaussian kernel)
● dot product kernel function (such as linear kernel, polynomial kernel)
● intersection kernel functions
We now simplify. First, K is replaced by C(kˣˣ), where kˣˣ denotes the first row of K, namely the kernel auto-correlation of the generating vector x with elements kˣˣᵢ = κ(x, Pⁱx) (P being the cyclic-shift operator). The solution above then becomes
α̂ = ŷ / (k̂ˣˣ + λ),
where the hat denotes the discrete Fourier transform. From this formula we find that only the DFTs of the generating vectors of K need be computed, so the amount of calculation grows linearly with the number of samples, rather than requiring the inversion of an n × n kernel matrix as in traditional methods, which greatly reduces the computational load.
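A numerical check of this fast dual solution, using a linear kernel so that the kernel auto-correlation is easy to write down (numpy, illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam = 16, 0.1
x = rng.standard_normal(n)      # generating sample
y = rng.standard_normal(n)      # regression targets

# kernel auto-correlation of x under a linear kernel: kxx[m] = <x, x shifted by m>;
# the full kernel matrix K is circulant, its rows being shifts of kxx
kxx = np.array([x @ np.roll(x, m) for m in range(n)])
K = np.stack([np.roll(kxx, i) for i in range(n)])

# naive dual solution: alpha = (K + lam*I)^{-1} y, an n x n solve
alpha_naive = np.linalg.solve(K + lam * np.eye(n), y)

# fast solution: alpha-hat = y-hat / (kxx-hat + lam), element-wise in Fourier space
alpha_fast = np.fft.ifft(np.fft.fft(y) / (np.fft.fft(kxx) + lam)).real

assert np.allclose(alpha_naive, alpha_fast)
```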
Step 3: perform multi-scale sampling on the target region and train the scale filter. Taking the center point of the target region as the reference, multi-scale samples of the target serve as training samples, and the scale filter is trained so that its desired output reaches the maximum response at the optimal scale.
The core idea of the scale-estimation algorithm is to train a scale filter from the training samples such that, after the input image is convolved with it, the output has a sharply peaked Gaussian shape whose peak lies at the expected scale of the target; in subsequent frames the target scale can then be estimated by locating that peak in the output.
First, the scale filter is built by minimizing the loss function
ε = ‖Σₗ hˡ ⋆ fˡ − g‖² + λ Σₗ ‖hˡ‖²,
where hˡ is the scale filter, fˡ is the l-th feature dimension extracted from the image block, d is the number of feature dimensions (l = 1, …, d) and g is the desired output. Minimizing ε and transforming into the frequency domain yields the optimal solution, i.e. the scale filter,
Hˡ = (G* ⊙ Fˡ) / (Σₖ Fᵏ* ⊙ Fᵏ + λ),
where G and F denote the Fourier transforms of the desired output and of the features respectively, and * denotes the complex conjugate.
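A one-channel sketch of the scale filter (numpy, illustrative; d = 1, so the sum over feature dimensions disappears, and the feature vector is a random stand-in). Applied back to its own training sample, the filter reproduces approximately the desired Gaussian output, so the response peaks at the correct scale:

```python
import numpy as np

def train_scale_filter(f, g, lam=1e-2):
    """Single-channel correlation filter: H = conj(G) * F / (conj(F)*F + lam),
    computed entirely in the Fourier domain."""
    F, G = np.fft.fft(f), np.fft.fft(g)
    return np.conj(G) * F / (np.conj(F) * F + lam)

def scale_response(H, z):
    """Correlation response of a test signal z under the filter H."""
    return np.fft.ifft(np.conj(H) * np.fft.fft(z)).real

n = 33                                                   # 33 scale levels, as in the text
g = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)  # desired Gaussian output
rng = np.random.default_rng(1)
f = rng.standard_normal(n)             # stand-in for the feature vector over scales
H = train_scale_filter(f, g)
resp = scale_response(H, f)            # on the training sample: approximately g

assert np.argmax(resp) == np.argmax(g)
```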
Step 4: when a new frame is input, perform fast target detection and determine the target position. After a new frame is input, the region at the previous target position is extracted and cyclically shifted to build a block-circulant matrix, which is correlated with the regression model; the location of the maximum response gives the displacement of the target center.
The previous target region is first extracted and cyclically shifted to build the block-circulant matrix. The kernel matrix obtained by correlating it with the regression model can easily be shown to satisfy the requirement of Theorem 1 as well, so the kernel matrix can now be written as
Kᶻ = C(kˣᶻ),
where kˣᶻ is the kernel correlation between the learned sample x and the new patch z. Then the decision function f(z) = (Kᶻ)ᵀα, which contains the kernel correlation values of the regression model with all cyclic shifts, can be computed quickly as
f̂(z) = k̂ˣᶻ ⊙ α̂.
Thus every subsequently input frame can be detected quickly, with the location of the maximum of f(z) taken as the new target position.
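The fast detection step can be checked numerically with a linear kernel (numpy, illustrative). A test patch that is a cyclic shift of the learned sample produces a response map whose single-FFT evaluation matches the naive per-shift evaluation, and whose peak encodes the shift; as before, the conjugation below reflects `np.roll`'s shift convention:

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 16, 1e-2
x = rng.standard_normal(n)          # learned appearance (generating sample)
z = np.roll(x, 3)                   # new frame: the same target shifted by 3

# circular Gaussian label peaked at shift 0
d = np.minimum(np.arange(n), n - np.arange(n))
y = np.exp(-0.5 * (d / 1.5) ** 2)

# training: fast dual solution for a linear kernel (see the training step)
kxx = np.array([x @ np.roll(x, m) for m in range(n)])
alpha = np.fft.ifft(np.fft.fft(y) / (np.fft.fft(kxx) + lam)).real

# detection: kernel correlation of the new patch with the learned sample,
# then the response over all cyclic shifts in a single FFT
kxz = np.array([z @ np.roll(x, m) for m in range(n)])
resp = np.fft.ifft(np.conj(np.fft.fft(kxz)) * np.fft.fft(alpha)).real

# naive per-shift evaluation of the decision function, for comparison
resp_naive = np.array([np.roll(kxz, i) @ alpha for i in range(n)])

assert np.allclose(resp, resp_naive)
assert np.argmax(resp) == n - 3     # peak encodes the 3-pixel displacement
```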
Step 5: once the target center has been determined in step 4, perform multi-scale sampling around that position, extract features from the resulting image blocks and convolve them with the scale filter; the location of the maximum response gives the optimal scale.
Taking the newly computed position as the center, multi-scale sampling is carried out in that region: if the target region size is m × n, the scale factor is a = 1.02 and the step factor is p ∈ {−16, −15, …, 15, 16}, giving 33 levels in total, then the size of level p is aᵖm × aᵖn. Features are extracted from the image block at each scale and convolved with the scale filter,
response = F⁻¹{ Σₗ Γˡ* ⊙ Zˡ / (Ω + λ) },
and the location of the maximum response is taken as the optimal scale. Here Γ and Ω denote the numerator and denominator of the scale filter respectively, and Z denotes the feature vector of the image block.
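The scale pyramid described here is straightforward to enumerate (plain Python; the target size 40 × 60 is a hypothetical example):

```python
a = 1.02                                 # scale factor from the description
steps = range(-16, 17)                   # step factor p: 33 levels in total
m, n = 40, 60                            # hypothetical target size in pixels
sizes = [(a ** p * m, a ** p * n) for p in steps]
# the middle level (p = 0) is the original size; the extremes span roughly
# 0.73x to 1.37x of it
```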
Step 6: update the regression model and the scale filter with a learning algorithm.
With learning rate η, the regression model is updated as
modelₜ = (1 − η) · modelₜ₋₁ + η · model of the current frame.
The scale filter is updated with the same learning algorithm applied separately to its numerator and denominator, whose update expressions are respectively
Γₜˡ = (1 − η) Γₜ₋₁ˡ + η G* ⊙ Fₜˡ,
Ωₜ = (1 − η) Ωₜ₋₁ + η Σₖ Fₜᵏ* ⊙ Fₜᵏ.
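The update itself is a running linear interpolation. A minimal sketch (numpy; the convex-combination form, with factor (1 − learning rate) on the previous model, is the conventional reading of the update formula):

```python
import numpy as np

def update(prev, curr, lr=0.2):
    """Running update with learning rate lr: a convex combination of the
    previous model and the model estimated from the current frame."""
    return (1.0 - lr) * prev + lr * curr

model = np.zeros(4)                  # stand-in for model coefficients (or Γ, Ω)
frame_model = np.ones(4)             # estimate from the current frame
model = update(model, frame_model)   # each entry moves 20% of the way to 1.0
```

A model that keeps seeing the same frame estimate is a fixed point of the update, which is why the learning rate trades adaptation speed against stability.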
To verify the effectiveness of this method, 10 video sequences were first chosen from a standard tracking dataset and compared against 14 current mainstream algorithms. The chosen indices are the mean center location error (CLE) and the mean overlap ratio (OR). The center location error is the Euclidean distance between the computed target center position and the ground-truth center position, averaged over all frames; the overlap ratio compares the computed tracking box of every frame with the ground-truth box, using the formula
OR = area(Rt ∩ Rg) / area(Rt ∪ Rg),
where Rt denotes the computed tracking box and Rg the ground-truth tracking box.
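The overlap ratio is the standard intersection-over-union of the two boxes. A minimal sketch (plain Python, boxes given as (x, y, w, h)):

```python
def overlap_ratio(rt, rg):
    """OR = area(Rt ∩ Rg) / area(Rt ∪ Rg) for axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = rt
    bx, by, bw, bh = rg
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))   # intersection width
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))   # intersection height
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```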
Table 1: mean center location error (CLE) of each mainstream algorithm on the ten video sequences
Table 2: overlap ratio (OR) of each mainstream algorithm on the ten video sequences
Compared with methods such as the traditional CT and KCF trackers, which track the target with a fixed-size tracking box, this method adds a scale-estimation module that adaptively estimates changes in the target's scale, so it shows a clear improvement on the overlap-ratio index. Moreover, when tracking with a fixed-size box, a large change in target scale progressively degrades the model and eventually causes tracking failure (e.g. when the target grows while the box stays constant, the box loses a large amount of target information; when the target shrinks while the box stays constant, the box includes a large amount of background information).
Because the tracking problem is often very sensitive to initialization, small changes in the initially specified tracking box may strongly affect the tracking result, so the robustness of the algorithm must be tested. Here we continue to use the evaluation criteria of the standard tracking dataset, namely the one-pass, temporal-robustness and spatial-robustness tests.
■ One-pass evaluation (OPE): the algorithm is run sequentially through the 51 video sequences in the dataset.
■ Temporal robustness evaluation (TRE): 20 different starting moments are chosen in each video sequence as initial positions for the algorithm.
■ Spatial robustness evaluation (SRE): the initial tracking box of each video sequence is perturbed slightly (4 scale variations and 8 displacements) before the algorithm is run.
The evaluation indices are the precision curve and the success-rate curve. Precision curve: given a threshold, compute the percentage of frames whose center location error is below that threshold out of the total number of frames; for convenience of ranking, the percentage at threshold = 20 is used as the ranking criterion. The success-rate curve is analogous: it computes the percentage of frames whose overlap ratio exceeds a given threshold, and the area under the curve serves as the ranking criterion.
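Both evaluation curves reduce to simple counting. A minimal numpy sketch (the threshold grid for the success curve is an illustrative choice):

```python
import numpy as np

def precision_at(center_errors, threshold=20.0):
    """Fraction of frames whose centre location error is below the threshold
    (threshold = 20 px is the conventional ranking point)."""
    e = np.asarray(center_errors, dtype=float)
    return float(np.mean(e < threshold))

def success_auc(overlaps, thresholds=np.linspace(0.0, 1.0, 101)):
    """Area under the success-rate curve: the mean, over overlap thresholds,
    of the fraction of frames whose overlap ratio exceeds the threshold."""
    o = np.asarray(overlaps, dtype=float)
    return float(np.mean([(o > t).mean() for t in thresholds]))
```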
This method achieves good results in the temporal and spatial robustness tests, showing that it is robust to initialization. Traditional trackers, constrained by the complexity of their computational frameworks, can only use features that are cheap to compute, such as Haar or color features, whose representational power is weak; when the target resembles the background, tracking easily fails. By exploiting the properties of circulant matrices, this method keeps the computational complexity low and can therefore adopt multi-feature fusion, greatly improving tracking accuracy and robustness.

Claims (5)

1. A fast and robust visual tracking method based on correlation filtering, characterized by comprising the following steps:
1) extracting a target region from the current frame image, applying cyclic shifts to the target region to produce training samples, and building a block-circulant matrix and a Gaussian regression label;
2) performing feature extraction on all training samples and fusing the features;
3) training a regression model on the basis of the Gaussian regression label using structural risk minimization;
4) performing multi-scale sampling on the target region and training a scale filter model;
5) when a new frame image is input, correlating it with the regression model using the kernel trick to perform fast target detection and determine the target center position;
6) performing multi-scale sampling around the target position, extracting features from the resulting image blocks, and convolving the extracted features with the scale filter model, the location of the maximum response giving the optimal scale of the tracked target;
7) updating the regression model and the scale filter model with a learning algorithm.
2. The fast and robust visual tracking method based on correlation filtering according to claim 1, characterized in that step 2) is implemented as follows: for the input image of the current frame, features are first extracted from the target region; for a grayscale image, the histogram of oriented gradients and the local transform histogram feature are extracted and fused; for a color image, the histogram of oriented gradients and a color feature are extracted and fused.
3. The fast and robust visual tracking method based on correlation filtering according to claim 1, characterized in that step 3) is implemented as follows: the target region is first extracted from the current frame image; cyclic shifts centered on the target region then produce the training samples, from which a block-circulant matrix is built; a Gaussian regression label centered on the target region is constructed; the regression model is then trained on the training samples based on structural risk minimization.
4. The fast and robust visual tracking method based on correlation filtering according to claim 1, characterized in that in step 6), the formula for updating the regression model and the scale filter model is:
current model = model of the previous frame + learning rate × model obtained from the current frame;
where "model" refers to either the regression model or the scale filter model.
5. The fast and robust visual tracking method based on correlation filtering according to claim 4, characterized in that the learning rate is tunable, for example set to 0.2.
CN201610943999.2A 2016-11-02 2016-11-02 Rapid stable visual tracking method based on correlation filtering Pending CN106570893A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610943999.2A CN106570893A (en) 2016-11-02 2016-11-02 Rapid stable visual tracking method based on correlation filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610943999.2A CN106570893A (en) 2016-11-02 2016-11-02 Rapid stable visual tracking method based on correlation filtering

Publications (1)

Publication Number Publication Date
CN106570893A true CN106570893A (en) 2017-04-19

Family

ID=58536498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610943999.2A Pending CN106570893A (en) 2016-11-02 2016-11-02 Rapid stable visual tracking method based on correlation filtering

Country Status (1)

Country Link
CN (1) CN106570893A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100322534A1 (en) * 2009-06-09 2010-12-23 Colorado State University Research Foundation Optimized correlation filters for signal processing
CN104200237A (en) * 2014-08-22 2014-12-10 浙江生辉照明有限公司 High-speed automatic multi-target tracking method based on kernelized correlation filtering
CN104574445A (en) * 2015-01-23 2015-04-29 北京航空航天大学 Target tracking method and device
CN105741316A (en) * 2016-01-20 2016-07-06 西北工业大学 Robust target tracking method based on deep learning and multi-scale correlation filtering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO, LULU: "Research on Target Tracking Algorithms Based on Correlation Filtering", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107240122A (en) * 2017-06-15 2017-10-10 国家新闻出版广电总局广播科学研究院 Video target tracking method based on space and time continuous correlation filtering
CN107358623A (en) * 2017-07-12 2017-11-17 武汉大学 Correlation filtering tracking algorithm based on saliency detection and robust scale estimation
CN107358623B (en) * 2017-07-12 2020-01-07 武汉大学 Relevant filtering tracking method based on significance detection and robustness scale estimation
CN107564034A (en) * 2017-07-27 2018-01-09 华南理工大学 Multi-target pedestrian detection and tracking method in surveillance video
CN107424177A (en) * 2017-08-11 2017-12-01 哈尔滨工业大学(威海) Positioning-correction long-range tracking algorithm based on continuous correlation filters
CN107424177B (en) * 2017-08-11 2021-10-26 哈尔滨工业大学(威海) Positioning correction long-range tracking method based on continuous correlation filter
CN107590820B (en) * 2017-08-25 2020-06-02 兰州飞搜信息科技有限公司 Video object tracking method based on correlation filtering and intelligent device thereof
CN107590820A (en) * 2017-08-25 2018-01-16 北京飞搜科技有限公司 Video object tracking method based on correlation filtering and intelligent device thereof
CN107748873A (en) * 2017-10-31 2018-03-02 河北工业大学 Multimodal target tracking method fusing background information
CN107748873B (en) * 2017-10-31 2019-11-26 河北工业大学 Multimodal target tracking method fusing background information
US10810746B2 (en) 2017-11-03 2020-10-20 Xilinx Technology Beijing Limited Target tracking hardware implementation system and method
CN109753846A (en) * 2017-11-03 2019-05-14 北京深鉴智能科技有限公司 Target tracking hardware implementation system and method
CN109934042A (en) * 2017-12-15 2019-06-25 吉林大学 Adaptive video object behavior trajectory analysis method based on convolutional neural networks
CN108257153B (en) * 2017-12-29 2021-09-07 中国电子科技集团公司第二十七研究所 Target tracking method based on direction gradient statistical characteristics
CN108257153A (en) * 2017-12-29 2018-07-06 中国电子科技集团公司第二十七研究所 Target tracking method based on oriented gradient statistical features
CN108805909B (en) * 2018-04-28 2022-02-11 哈尔滨工业大学深圳研究生院 Target tracking method based on particle filter redetection under related filter framework
CN108805909A (en) * 2018-04-28 2018-11-13 哈尔滨工业大学深圳研究生院 Target tracking method based on particle filter re-detection under the correlation filtering framework
CN108830219A (en) * 2018-06-15 2018-11-16 北京小米移动软件有限公司 Target tracking method, device and storage medium based on human-computer interaction
CN110276782A (en) * 2018-07-09 2019-09-24 西北工业大学 Hyperspectral target tracking method combining spatial-spectral features and correlation filtering
CN110276782B (en) * 2018-07-09 2022-03-11 西北工业大学 Hyperspectral target tracking method combining spatial spectral features and related filtering
CN110827319A (en) * 2018-08-13 2020-02-21 中国科学院长春光学精密机械与物理研究所 Improved Staple target tracking method based on local sensitive histogram
CN110827319B (en) * 2018-08-13 2022-10-28 中国科学院长春光学精密机械与物理研究所 Improved Staple target tracking method based on local sensitive histogram
CN109410246B (en) * 2018-09-25 2021-06-11 杭州视语智能视觉系统技术有限公司 Visual tracking method and device based on correlation filtering
CN109410246A (en) * 2018-09-25 2019-03-01 深圳市中科视讯智能系统技术有限公司 Visual tracking method and device based on correlation filtering
CN109544604A (en) * 2018-11-28 2019-03-29 天津工业大学 Target tracking method based on cognitive network
CN109544604B (en) * 2018-11-28 2023-12-01 深圳拓扑视通科技有限公司 Target tracking method based on cognitive network
CN109978923A (en) * 2019-04-04 2019-07-05 杭州电子科技大学 Dual-template scale-adaptive correlation filtering target tracking method and system
CN111862145A (en) * 2019-04-24 2020-10-30 四川大学 Target tracking method based on multi-scale pedestrian detection

Similar Documents

Publication Publication Date Title
CN106570893A (en) Rapid stable visual tracking method based on correlation filtering
CN106778595B (en) Method for detecting abnormal behaviors in crowd based on Gaussian mixture model
Liu et al. Detection of multiclass objects in optical remote sensing images
CN106952288B (en) Long-term occlusion-robust tracking method based on convolutional features and global search detection
CN112184752A (en) Video target tracking method based on pyramid convolution
CN107633226B (en) Human body motion tracking feature processing method
CN109977971A (en) Scale-adaptive target tracking system based on mean shift and kernelized correlation filtering
CN109743642B (en) Video abstract generation method based on hierarchical recurrent neural network
CN107452022A (en) Video target tracking method
CN109523013A (en) Air particulate pollution level estimation method based on shallow convolutional neural networks
CN108399430B (en) SAR image ship target detection method based on superpixels and random forest
CN104657717A (en) Pedestrian detection method based on layered kernel sparse representation
CN110245587B (en) Optical remote sensing image target detection method based on Bayesian transfer learning
CN113988147A (en) Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device
CN113963333B (en) Traffic sign board detection method based on improved YOLOF model
CN110472607A (en) Ship tracking method and system
Lin et al. Optimal CNN-based semantic segmentation model of cutting slope images
CN112991394B (en) KCF target tracking method based on cubic spline interpolation and Markov chain
CN111242003B (en) Video salient object detection method based on multi-scale constrained self-attention mechanism
Heryadi et al. The effect of resnet model as feature extractor network to performance of DeepLabV3 model for semantic satellite image segmentation
Baoyuan et al. Research on object detection method based on FF-YOLO for complex scenes
Cai et al. A target tracking method based on KCF for omnidirectional vision
Huang et al. Drone-based car counting via density map learning
CN116188943A (en) Solar radio spectrum burst information detection method and device
CN113537240B (en) Deformation zone intelligent extraction method and system based on radar sequence image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170419