CN107341479A - Target tracking method based on weighted sparse cooperation model - Google Patents
- Publication number
- CN107341479A (application CN201710562703.7A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 20
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 81
- 238000004519 manufacturing process Methods 0.000 claims abstract description 30
- 239000002245 particle Substances 0.000 claims abstract description 9
- 238000012545 processing Methods 0.000 claims abstract description 6
- 239000013598 vector Substances 0.000 claims description 26
- 239000011159 matrix material Substances 0.000 claims description 18
- 238000012549 training Methods 0.000 claims description 14
- 230000017105 transposition Effects 0.000 claims description 9
- 230000006978 adaptation Effects 0.000 claims description 6
- 238000003064 k means clustering Methods 0.000 claims description 3
- 238000012544 monitoring process Methods 0.000 description 3
- 238000001514 detection method Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000003542 behavioural effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2136—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/285—Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a target tracking method based on a weighted sparse cooperation model, which comprises the following steps: 1) selecting the tracked target from the first frame of image data of a video sequence; 2) initializing the frame index f = 1; 3) processing the f-th frame of image data in the video sequence with a particle filter algorithm to obtain a number of target candidate boxes; 4) processing a target candidate box in the f-th frame with a discriminative algorithm based on weighted sparse representation to obtain the candidate box's discriminative score; 5) processing the same candidate box with a generative algorithm based on weighted sparse representation to obtain its generative score; 6) combining the two into the final score of the candidate box; and 7) comparing the scores of all candidate boxes and taking the candidate box with the maximal score as the tracking result. The invention can perform real-time motion estimation and localization of a moving target in a video sequence, thereby achieving stable tracking of the moving target.
Description
Technical field
The invention belongs to the field of video surveillance and proposes a target tracking method based on a weighted sparse cooperation model, which achieves stable tracking of moving targets in a video sequence.
Background
In recent years, target tracking has played a very important role in computer vision and remains one of its research hotspots. With the continuous development of target tracking technology, it plays a vital role in many practical applications, such as video surveillance, human-computer interaction, behavior analysis, virtual reality, and automatic control systems. Accordingly, a variety of target tracking methods have emerged, chiefly those based on generative models and those based on discriminative models.
Generative methods build an appearance model of the foreground target on the basis of target detection and then estimate the optimal location of the tracked target according to some tracking strategy; discriminative methods obtain the state of the tracked target by performing target detection on each frame. However, neither sparse generative models nor sparse discriminative models consider the correlation between the test sample and the dictionary atoms (i.e., the training samples), so the resulting sparse coding coefficients are not accurate enough, which impairs tracking precision.
To improve the tracking precision for moving targets, researchers have used tracking methods based on a mixed sparse-representation model, which combines global templates with local representations and can handle appearance changes of the target efficiently. When solving for the coding coefficients, however, the mixed model's sparse coefficients are not sparse enough, so the localization of the moving target is insufficiently accurate and long-term stable tracking cannot be achieved.
Summary of the invention
To solve the above problems, the present invention proposes a target tracking method based on a weighted sparse cooperation model. It aims to make full use of the distribution characteristics and local information of the samples and to incorporate the correlation between samples into the sparse cooperation model, so as to perform real-time motion estimation and localization of the moving target in a monitored scene and guarantee long-term stable tracking.
The present invention adopts the following technical scheme to solve the technical problem:
The target tracking method based on a weighted sparse cooperation model of the present invention is characterized by being carried out as follows:
Step 1: select a video sequence of f_max frames, and choose the tracked target from the first frame of image data;
Step 2: define the frame index f, and initialize f = 1;
Step 3: process the f-th frame of image data in the video sequence with a particle filter algorithm to obtain k_max target candidate boxes, and normalize each target candidate box;
Step 4: process the k_f-th normalized target candidate box in the f-th frame of image data with the discriminative algorithm based on weighted sparse representation, obtaining the discriminative score H_{k_f} of the k_f-th target candidate box:
Step 4.1: judge whether f = 1 holds; if so, assign f + 1 to f and return to Step 3; otherwise, go to Step 4.2;
Step 4.2: around the tracking result of the (f-1)-th frame of image data, select n_1 positive templates A^+ = [a_1^+, ..., a_{n_1}^+] and n_2 negative templates A^- = [a_1^-, ..., a_{n_2}^-], where a_i^+ denotes the i-th training sample among the positive templates, a_i^- denotes the i-th training sample among the negative templates, a_i^+, a_i^- ∈ R^q, and q is the vector dimension of the target candidate box; after normalization, merge the n_1 positive templates A^+ and the n_2 negative templates A^- into the dictionary of the discriminative model A = [A^+, A^-] ∈ R^{q×n}, with n = n_1 + n_2;
Step 4.3: with the dictionary A of the discriminative model as input to the discriminative algorithm, process the k_f-th target candidate box to obtain its globally optimal sparse coefficient α;
Step 4.4: split the globally optimal sparse coefficient α into a positive vector α^+ and a negative vector α^-, and compute the positive reconstruction error ε_+ = ||x - A^+α^+||² and the negative reconstruction error ε_- = ||x - A^-α^-||², where x is the vector corresponding to the k_f-th target candidate box;
Step 4.5: obtain the discriminative score of the k_f-th target candidate box with formula (1):
H_{k_f} = exp(-(ε_+ - ε_-)/σ)   (1)
In formula (1), σ is an adjustment parameter;
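The discriminative score of formula (1) can be sketched as follows; the template counts, vector dimension, and σ value here are illustrative assumptions, not values from the patent:

```python
import numpy as np

def discriminative_score(x, A_pos, A_neg, alpha_pos, alpha_neg, sigma=1.0):
    """Discriminative score of formula (1): a candidate reconstructed well by
    the positive (foreground) templates and poorly by the negative (background)
    templates receives a high score H = exp(-(eps_pos - eps_neg) / sigma)."""
    eps_pos = np.sum((x - A_pos @ alpha_pos) ** 2)  # positive reconstruction error
    eps_neg = np.sum((x - A_neg @ alpha_neg) ** 2)  # negative reconstruction error
    return np.exp(-(eps_pos - eps_neg) / sigma)

# Toy usage: a candidate perfectly explained by the positive templates.
rng = np.random.default_rng(0)
A_pos = rng.standard_normal((8, 3))
A_neg = rng.standard_normal((8, 3))
alpha_pos = 0.3 * rng.standard_normal(3)
x = A_pos @ alpha_pos                  # eps_pos is exactly zero for this x
h = discriminative_score(x, A_pos, A_neg, alpha_pos, np.zeros(3))
# h > 1 here, since eps_pos = 0 and eps_neg > 0
```

A candidate dominated by background templates would instead drive ε_+ up and H toward zero.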
Step 5: process the k_f-th target candidate box in the f-th frame of image data with the generative algorithm based on weighted sparse representation, obtaining the generative score L_{k_f} of the k_f-th target candidate box:
Step 5.1: input the f-th frame of image data;
Step 5.2: divide the input image data into M small patches, and convert the gray values of each patch into a gray-value vector of dimension p;
Step 5.3: process the gray-value vectors of the patches located on the tracked target in the first frame of image data with the K-means clustering algorithm, obtaining the dictionary of the generative model D ∈ R^{p×c}, where c is the number of cluster centers;
Step 5.4: divide the k_f-th target candidate box into m image patches with a sliding window;
Step 5.5: with the dictionary D of the generative model as input to the generative algorithm, process the i-th image patch y_i in the k_f-th target candidate box to obtain its globally optimal sparse coding coefficient β_i, and thereby the globally optimal sparse coding coefficients of all m image patches, 1 ≤ i ≤ m;
Step 5.6: splice the globally optimal sparse coding coefficients of the m image patches together with formula (2), obtaining the sparse coding coefficient B of the k_f-th target candidate box:
B = [β_1^T, β_2^T, ..., β_m^T]^T   (2)
In formula (2), T denotes transposition;
Step 5.7: from the sparse coding coefficient B of the k_f-th target candidate box, compute the generative score L_{k_f} of the k_f-th target candidate box with formula (3):
L_{k_f} = Σ_j min(B_j, ψ_j)   (3)
In formula (3), B_j denotes the j-th element of the sparse coding coefficient B and ψ_j the j-th element of the feature template ψ; the feature template ψ is the sparse coding coefficient obtained by processing the tracked target box chosen in the first frame of image data through steps 5.4 to 5.6;
Step 6: from the discriminative score H_{k_f} and the generative score L_{k_f} of the k_f-th target candidate box, obtain the final score p_{k_f} of the k_f-th target candidate box with formula (4):
p_{k_f} = H_{k_f} × L_{k_f}   (4)
Step 7: judge whether k_f < k_max holds; if so, assign k_f + 1 to k_f and return to Step 4, so as to obtain the final scores of all k_max target candidate boxes, and choose the maximum among them as the tracking result of the f-th frame of image data; otherwise, assign f + 1 to f and return to Step 3, until f > f_max, thereby achieving tracking of the moving target.
The target tracking method based on a weighted sparse cooperation model of the present invention is further characterized in that Step 4.3 is carried out as follows:
Step 4.3.1: define the objective of the discriminative algorithm with formula (5):
min_α ||x - Aα||₂² + λ₁||W₁α||₁   (5)
In formula (5), W₁ is the weight matrix of the discriminative algorithm and λ₁ its adjustment parameter;
Step 4.3.2: initialize the iteration counter of the discriminative algorithm t = 0, randomly initialize its initial sparse coefficient α^(0), and obtain the diagonalized weight matrix diag(W₁) with formula (6):
diag(W₁) = [dist(x, A₁), dist(x, A₂), ..., dist(x, A_j), ..., dist(x, A_n)]   (6)
In formula (6), A_j denotes the j-th column of the dictionary A of the discriminative model, 1 ≤ j ≤ n, n is the number of columns of the dictionary, and dist(x, A_j) is the distance between the vector x corresponding to the k_f-th target candidate box and the j-th column of A: dist(x, A_j) = ||x - A_j||^s, where s is an adaptation parameter that adjusts the magnitude of the weights;
Step 4.3.3: obtain the diagonal matrix Λ^(t) of the discriminative algorithm at the t-th iteration with formula (7):
Λ^(t) = diag(1/|α_1^(t)|, ..., 1/|α_n^(t)|)   (7)
In formula (7), α_n^(t) is the n-th sparse coefficient value of the sparse coefficient α^(t) at the t-th iteration;
Step 4.3.4: obtain the sparse coefficient α^(t+1) of the discriminative algorithm at the (t+1)-th iteration with formula (8):
α^(t+1) = (A^T A + λ₁ diag(W₁) Λ^(t))^(-1) A^T x   (8)
In formula (8), T denotes transposition;
Step 4.3.5: judge whether ||α^(t+1) - α^(t)|| < ε holds, where ε is a set coefficient-residual threshold, or whether t > t_max holds, where t_max is the maximum number of iterations of the discriminative algorithm; if either holds, the sparse coefficient α^(t+1) of this iteration is the globally optimal sparse coefficient α of the k_f-th target candidate box; otherwise, assign t + 1 to t and return to Step 4.3.3.
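The iterative procedure of steps 4.3.1 to 4.3.5 can be sketched as below. Since the equation images are not reproduced here, the update rule is an assumed standard iteratively reweighted least-squares form for the weighted-l1 objective; the dimensions, λ, and tolerance are illustrative:

```python
import numpy as np

def weighted_sparse_code(x, A, lam=0.01, s=2.0, tol=1e-6, t_max=100):
    """Sketch of steps 4.3.1-4.3.5: solve min_a ||x - A a||^2 + lam * ||W a||_1
    by iterative reweighting. W (formula (6)) weights each atom by its distance
    to the candidate x, so dissimilar atoms are driven toward zero coefficients."""
    n = A.shape[1]
    w = np.array([np.linalg.norm(x - A[:, j]) ** s for j in range(n)])  # formula (6)
    alpha = 0.01 * np.random.default_rng(1).standard_normal(n)          # random init
    for _ in range(t_max):
        Lam = np.diag(1.0 / (np.abs(alpha) + 1e-8))   # diagonal matrix, formula (7)
        # Ridge-like closed-form update (assumed form of formula (8))
        alpha_new = np.linalg.solve(A.T @ A + lam * np.diag(w) @ Lam, A.T @ x)
        if np.linalg.norm(alpha_new - alpha) < tol:   # residual test of step 4.3.5
            return alpha_new
        alpha = alpha_new
    return alpha

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 10))
A /= np.linalg.norm(A, axis=0)     # dictionary with normalized columns
x = 2.0 * A[:, 3]                  # candidate explained by a single atom
alpha = weighted_sparse_code(x, A)
# The recovered coefficient vector concentrates on atom 3
```

The distance-based weights are what distinguish this from plain reweighted l1: atoms far from x are penalized more heavily and so contribute less to the reconstruction.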
Step 5.5 is carried out as follows:
Step 5.5.1: define the objective of the generative algorithm with formula (9):
min_{β_i} ||y_i - Dβ_i||₂² + λ₂||W₂β_i||₁   (9)
In formula (9), W₂ is the weight matrix of the generative algorithm and λ₂ its adjustment parameter;
Step 5.5.2: initialize the iteration counter of the generative algorithm t' = 0, randomly initialize the initial sparse coefficient β_i^(0) of the i-th image patch y_i, and obtain the diagonalized weight matrix diag(W₂) with formula (10):
diag(W₂) = [dist(y_i, D₁), dist(y_i, D₂), ..., dist(y_i, D_j), ..., dist(y_i, D_c)]   (10)
In formula (10), D_j denotes the j-th column of the dictionary D of the generative model, 1 ≤ j ≤ c, c is the number of cluster centers, and dist(y_i, D_j) is the distance between the i-th image patch y_i and the j-th column of D: dist(y_i, D_j) = ||y_i - D_j||^s, where s is an adaptation parameter that adjusts the magnitude of the weights;
Step 5.5.3: obtain the diagonal matrix Λ'^(t') of the generative algorithm at the t'-th iteration with formula (11):
Λ'^(t') = diag(1/|β_{i,1}^(t')|, ..., 1/|β_{i,c}^(t')|)   (11)
In formula (11), β_{i,c}^(t') is the c-th sparse coefficient value of the sparse coefficient β_i^(t') of the i-th image patch y_i at the t'-th iteration;
Step 5.5.4: obtain the sparse coefficient β_i^(t'+1) of the i-th image patch y_i at the (t'+1)-th iteration with formula (12):
β_i^(t'+1) = (D^T D + λ₂ diag(W₂) Λ'^(t'))^(-1) D^T y_i   (12)
In formula (12), T denotes transposition;
Step 5.5.5: judge whether ||β_i^(t'+1) - β_i^(t')|| < ε' holds, where ε' is another set coefficient-residual threshold, or whether t' > t'_max holds, where t'_max is the maximum number of iterations of the generative algorithm; if either holds, the sparse coefficient β_i^(t'+1) of this iteration is the globally optimal sparse coding coefficient β_i of the i-th image patch y_i; otherwise, assign t' + 1 to t' and return to Step 5.5.3.
Compared with the prior art, the beneficial effects of the present invention are:
1. The present invention uses algorithms from computer vision and pattern recognition, including the sparse representation algorithm and the particle filter algorithm, and performs a sparse representation of the target candidate boxes with an over-complete dictionary (the training set). It can therefore solve the target tracking problem well and improves moving-target tracking in complex scenes.
2. The target tracking algorithm of the present invention, based on a sparse cooperation model from the field of pattern recognition, combines a weighted sparse discriminative classification algorithm with a generative algorithm based on weighted sparse representation, obtaining a robust target tracking model that handles blur, scale changes, pose changes, and similar problems arising while the target moves.
3. In the weighted sparse discriminative algorithm and the weighted sparse generative algorithm, the present invention attaches a corresponding weight to each sparse coefficient. This makes full use of the distribution characteristics and local information of the samples and incorporates the correlation between samples into the sparse cooperation model: training samples weakly correlated with the target candidate box contribute less to the reconstruction, while strongly correlated training samples contribute more, so the resulting candidate-box scores are more trustworthy.
4. When solving for the sparse coding coefficients, the present invention uses a new iterative algorithm to find the globally optimal sparse coefficient, so the solved coefficients are sparser and more reasonable, and the over-complete dictionary (training set) represents the target candidate boxes better; the candidate-box scores are thus computed more accurately and the target is localized more precisely.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the algorithm of the present invention.
Embodiment
In the present embodiment, a video moving-target tracking algorithm based on weighted sparse cooperation first selects a video sequence, obtains the first frame of image data, and initializes the tracked target. It then applies a particle filter algorithm to the currently input video frame to obtain a number of target candidate boxes; applies the discriminative algorithm based on weighted sparse representation to the candidate boxes to obtain their discriminative scores; obtains several image patches from each candidate box with a sliding window and applies the weighted sparse generative algorithm to these patches to obtain the candidate box's generative score; multiplies the discriminative score by the generative score to get the final candidate-box score; and finally compares the scores of all candidate boxes of the current input frame, the candidate box with the maximal score being the tracking result. Specifically, as shown in Fig. 1, the method is carried out as follows:
Step 1: select a video sequence of f_max frames, and choose the tracked target from the first frame of image data;
Step 2: define the frame index f, and initialize f = 1;
Step 3: process the f-th frame of image data in the video sequence with a particle filter algorithm to obtain k_max target candidate boxes, and normalize each target candidate box. A particle filter represents a probability distribution by a set of particles and can be used with a state-space model of any form; its core idea is to represent the posterior distribution by stochastic state particles drawn from it. It is a sequential importance sampling method that occupies an important position in target tracking.
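The patent does not specify the particle filter's transition model; a common choice, sketched below, is a Gaussian random walk around the previous tracking result (the state parameterization and spreads are illustrative assumptions):

```python
import numpy as np

def propagate_particles(prev_state, k_max=600, spread=(4.0, 4.0, 0.01)):
    """Draw k_max candidate states around the previous tracking result with a
    Gaussian random walk over (center_x, center_y, scale). The transition model
    and spreads are illustrative; the patent only states that a particle filter
    proposes the k_max candidate boxes."""
    rng = np.random.default_rng(0)
    particles = prev_state + rng.standard_normal((k_max, 3)) * np.asarray(spread)
    particles[:, 2] = np.clip(particles[:, 2], 0.5, 2.0)  # keep the scale sane
    return particles

candidates = propagate_particles(np.array([120.0, 80.0, 1.0]))
# candidates.shape == (600, 3): one (cx, cy, scale) hypothesis per particle
```

Each particle is then cropped from the frame, resized, and normalized to form the candidate-box vectors scored in steps 4 and 5.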
Step 4: process the k_f-th normalized target candidate box in the f-th frame of image data with the discriminative algorithm based on weighted sparse representation, obtaining the discriminative score H_{k_f} of the k_f-th target candidate box:
Step 4.1: judge whether f = 1 holds; if so, assign f + 1 to f and return to Step 3; otherwise, go to Step 4.2;
Step 4.2: around the tracking result of the (f-1)-th frame of image data, select n_1 positive templates A^+ = [a_1^+, ..., a_{n_1}^+] and n_2 negative templates A^- = [a_1^-, ..., a_{n_2}^-]. In the present embodiment the number of positive templates is 50, i.e. n_1 = 50, and the number of negative templates is 200, i.e. n_2 = 200; a_i^+ denotes the i-th training sample among the positive templates, a_i^- the i-th among the negative templates, a_i^+, a_i^- ∈ R^q, and q is the vector dimension of the target candidate box. After normalization, merge the n_1 positive templates A^+ and the n_2 negative templates A^- into the dictionary of the discriminative model A = [A^+, A^-] ∈ R^{q×n}, with n = n_1 + n_2. The coefficient vectors α^+ and α^- corresponding to the positive and negative templates form the final sparse coefficient vector α = [α^+; α^-], and the number of training samples is n. Let x be a candidate box around the tracked target; the linear combination of the tracked-target subspace and the background subspace that expresses x can then be written as formula (1):
x ≈ Σ_{i=1}^{n_1} a_i^+ α_i^+ + Σ_{i=1}^{n_2} a_i^- α_i^-   (1)
In formula (1), α_i^+ denotes the i-th element of the positive-template coefficient vector α^+ and α_i^- the i-th element of the negative-template coefficient vector α^-; a_i^+ denotes the i-th training sample among the positive templates and a_i^- the i-th among the negative templates;
Step 4.3: with the dictionary A of the discriminative model as input to the discriminative algorithm, process the k_f-th target candidate box to obtain its globally optimal sparse coefficient α:
Step 4.3.1: define the objective of the discriminative algorithm with formula (2):
min_α ||x - Aα||₂² + λ₁||W₁α||₁   (2)
In formula (2), W₁ is the weight matrix of the discriminative algorithm and λ₁ > 0 its adjustment parameter; minimizing the reconstruction error together with the weighted l1 norm yields the sparse solution for x;
Step 4.3.2: initialize the iteration counter of the discriminative algorithm t = 0, randomly initialize its initial sparse coefficient α^(0), and obtain the diagonalized weight matrix diag(W₁) with formula (3):
diag(W₁) = [dist(x, A₁), dist(x, A₂), ..., dist(x, A_j), ..., dist(x, A_n)]   (3)
In formula (3), A_j denotes the j-th column of the dictionary A of the discriminative model, 1 ≤ j ≤ n, n is the number of columns of the dictionary, and dist(x, A_j) is the distance between the vector x corresponding to the k_f-th target candidate box and the j-th column of A: dist(x, A_j) = ||x - A_j||^s, where s is an adaptation parameter that adjusts the magnitude of the weights;
Step 4.3.3: obtain the diagonal matrix Λ^(t) of the discriminative algorithm at the t-th iteration with formula (4):
Λ^(t) = diag(1/|α_1^(t)|, ..., 1/|α_n^(t)|)   (4)
In formula (4), α_n^(t) is the n-th sparse coefficient value of the sparse coefficient α^(t) at the t-th iteration;
Step 4.3.4: obtain the sparse coefficient α^(t+1) of the discriminative algorithm at the (t+1)-th iteration with formula (5):
α^(t+1) = (A^T A + λ₁ diag(W₁) Λ^(t))^(-1) A^T x   (5)
In formula (5), T denotes transposition;
Step 4.3.5: judge whether ||α^(t+1) - α^(t)|| < ε holds, where ε is a set coefficient-residual threshold, or whether t > t_max holds, where t_max is the maximum number of iterations of the discriminative algorithm; if either holds, the sparse coefficient α^(t+1) of this iteration is the globally optimal sparse coefficient α of the k_f-th target candidate box; otherwise, assign t + 1 to t and return to Step 4.3.3.
Step 4.4: split the globally optimal sparse coefficient α into a positive vector α^+ and a negative vector α^-, and compute the positive reconstruction error ε_+ = ||x - A^+α^+||² and the negative reconstruction error ε_- = ||x - A^-α^-||², where x is the vector corresponding to the k_f-th target candidate box. If a candidate box has a small reconstruction error under the positive template set and a large one under the negative template set, it is more likely to be the tracked target; conversely, if its reconstruction error under the positive template set is large and that under the negative template set is small, it is less likely to be the tracked target;
Step 4.5: obtain the discriminative score of the k_f-th target candidate box with formula (6):
H_{k_f} = exp(-(ε_+ - ε_-)/σ)   (6)
In formula (6), σ is an adjustment parameter; ε_+ = ||x - A^+α^+||² is the reconstruction error of the candidate box x under the positive template set A^+ with its positive sparse coefficient vector α^+; similarly, ε_- = ||x - A^-α^-||² is the reconstruction error of x represented by the negative template set A^- with the negative sparse coefficient vector α^-;
Step 5: process the k_f-th target candidate box in the f-th frame of image data with the generative algorithm based on weighted sparse representation, obtaining the generative score L_{k_f} of the k_f-th target candidate box:
Step 5.1: input the f-th frame of image data;
Step 5.2: divide the input image data into M small patches, each of size 6 × 6, and convert the gray values of each patch into a gray-value vector of dimension p, i.e. p = 36;
Step 5.3: process the gray-value vectors of the patches located on the tracked target in the first frame of image data with the K-means clustering algorithm, obtaining the dictionary of the generative model D ∈ R^{p×c}, where c is the number of cluster centers and is set to 50;
Step 5.4: divide the k_f-th target candidate box into m image patches with a sliding window;
Step 5.5: with the dictionary D of the generative model as input to the generative algorithm, process the i-th image patch y_i in the k_f-th target candidate box to obtain its globally optimal sparse coding coefficient β_i, and thereby the globally optimal sparse coding coefficients of all m image patches, 1 ≤ i ≤ m:
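Step 5.3 above builds the generative dictionary by clustering gray-value patch vectors. A minimal numpy sketch, where random vectors stand in for real 6 × 6 patches (the patch count is arbitrary; p = 36 and c = 50 follow the embodiment):

```python
import numpy as np

def kmeans_dictionary(patches, c=50, iters=20, seed=0):
    """Step 5.3 sketch: cluster p-dimensional gray-value patch vectors into c
    centers with plain K-means; the centers become the columns of the
    generative dictionary D."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), size=c, replace=False)].copy()
    for _ in range(iters):
        # assign every patch to its nearest center
        d2 = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(c):                    # recompute each center
            members = patches[labels == j]
            if len(members):                  # keep the old center if empty
                centers[j] = members.mean(axis=0)
    return centers.T                          # D has shape (p, c)

# Random vectors stand in for the 6 x 6 = 36-dimensional gray-value patches.
patches = np.random.default_rng(1).random((500, 36))
D = kmeans_dictionary(patches, c=50)
# D.shape == (36, 50): one column per cluster center
```

Because D is learned only from first-frame target patches, patches from later candidate boxes that resemble the target are reconstructed with small, concentrated coefficients.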
Step 5.5.1: define the objective of the generative algorithm with formula (7):
min_{β_i} ||y_i - Dβ_i||₂² + λ₂||W₂β_i||₁   (7)
In formula (7), W₂ is the weight matrix of the generative algorithm and λ₂ > 0 its adjustment parameter;
Step 5.5.2: initialize the iteration counter of the generative algorithm t' = 0, randomly initialize the initial sparse coefficient β_i^(0) of the i-th image patch y_i, and obtain the diagonalized weight matrix diag(W₂) with formula (8):
diag(W₂) = [dist(y_i, D₁), dist(y_i, D₂), ..., dist(y_i, D_j), ..., dist(y_i, D_c)]   (8)
In formula (8), D_j denotes the j-th column of the dictionary D of the generative model, 1 ≤ j ≤ c, and dist(y_i, D_j) is the distance between the i-th image patch y_i and the j-th column of D: dist(y_i, D_j) = ||y_i - D_j||^s, where s is an adaptation parameter that adjusts the magnitude of the weights;
Step 5.5.3: obtain the diagonal matrix Λ'^(t') of the generative algorithm at the t'-th iteration with formula (9):
Λ'^(t') = diag(1/|β_{i,1}^(t')|, ..., 1/|β_{i,c}^(t')|)   (9)
In formula (9), β_{i,c}^(t') is the c-th sparse coefficient value of the sparse coefficient β_i^(t') of the i-th image patch y_i at the t'-th iteration;
Step 5.5.4: obtain the sparse coefficient β_i^(t'+1) of the i-th image patch y_i at the (t'+1)-th iteration with formula (10):
β_i^(t'+1) = (D^T D + λ₂ diag(W₂) Λ'^(t'))^(-1) D^T y_i   (10)
In formula (10), T denotes transposition;
Step 5.5.5: judge whether ||β_i^(t'+1) - β_i^(t')|| < ε' holds, where ε' is another set coefficient-residual threshold, or whether t' > t'_max holds, where t'_max is the maximum number of iterations of the generative algorithm; if either holds, the sparse coefficient β_i^(t'+1) of this iteration is the globally optimal sparse coding coefficient β_i of the i-th image patch y_i; otherwise, assign t' + 1 to t' and return to Step 5.5.3.
Step 5.6: splice the globally optimal sparse coding coefficients of the m image patches together with formula (11), obtaining the sparse coding coefficient B of the k_f-th target candidate box:
B = [β_1^T, β_2^T, ..., β_m^T]^T   (11)
In formula (11), T denotes transposition;
Step 5.7: from the sparse coding coefficient B of the k_f-th target candidate box, compute the generative score L_{k_f} of the k_f-th target candidate box with formula (12):
L_{k_f} = Σ_j min(B_j, ψ_j)   (12)
In formula (12), B_j denotes the j-th element of the sparse coding coefficient B and ψ_j the j-th element of the feature template ψ; the feature template ψ is the sparse coding coefficient obtained by processing the tracked target box chosen in the first frame of image data through steps 5.4 to 5.6. Applying the intersection function to the candidate box's sparse coding coefficient B and the template feature ψ yields the generative score L_{k_f};
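Assuming the "intersection function" is the usual histogram intersection Σ_j min(B_j, ψ_j), the generative score can be sketched as follows (the coefficient values are made up):

```python
import numpy as np

def generative_score(B, psi):
    """Histogram-intersection similarity between a candidate's spliced sparse
    coefficients B and the first-frame feature template psi:
    L = sum_j min(B_j, psi_j)."""
    return float(np.minimum(B, psi).sum())

psi = np.array([0.5, 0.3, 0.2, 0.0])          # template coefficients (made up)
exact = generative_score(psi, psi)            # identical candidate: maximal score
drifted = generative_score(np.array([0.1, 0.1, 0.1, 0.7]), psi)
# exact is about 1.0 and drifted about 0.3: the drifted candidate scores lower
```

A candidate whose coefficient pattern matches the template can at best reproduce the template's own mass, so the score is bounded by Σ_j ψ_j and falls as the patterns diverge.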
Step 6: from the discriminative score H_{k_f} and the generative score L_{k_f} of the k_f-th target candidate box, obtain the final score p_{k_f} of the k_f-th target candidate box with formula (13):
p_{k_f} = H_{k_f} × L_{k_f}   (13)
In formula (13), the discriminative score H_{k_f} and the generative score L_{k_f} of the k_f-th target candidate box are combined multiplicatively, and the adjustment parameter σ balances the proportion between the global template matching and the local histogram model, yielding the score p_{k_f} of the k_f-th target candidate box;
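The multiplicative combination of formula (13) and the subsequent selection of the best candidate amount to an elementwise product and an argmax; a compact sketch with made-up scores:

```python
import numpy as np

def select_best(H, L):
    """Formula (13): final score p = H * L per candidate box; the candidate
    with the maximal product becomes the frame's tracking result."""
    p = np.asarray(H) * np.asarray(L)
    return int(np.argmax(p)), p

H = [0.9, 0.4, 0.8]    # discriminative scores of three candidates (made up)
L = [0.5, 0.9, 0.7]    # generative scores of the same candidates
best, p = select_best(H, L)
# best == 2, since 0.8 * 0.7 = 0.56 is the largest product
```

Multiplying rather than adding means a candidate must score well under both models: a high discriminative score cannot compensate for a near-zero generative score, and vice versa.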
Step 7: judge whether k_f < k_max holds; if so, assign k_f + 1 to k_f and return to Step 4, so as to obtain the final scores of all k_max target candidate boxes, and choose the maximum among them as the tracking result of the f-th frame of image data, the moving target being marked in the video sequence with a rectangular box; otherwise, assign f + 1 to f and return to Step 3, until f > f_max, thereby achieving tracking of the moving target.
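The per-frame loop of steps 3 to 7 can be summarized as scoring every candidate with both models and keeping the best product. The scorers below are dummy stand-ins for the two pipelines, purely to show the control flow:

```python
import numpy as np

def track_frame(candidates, score_disc, score_gen):
    """One iteration of steps 3-7: score every candidate box with both models
    and return the index of the best one (the frame's tracking result).
    score_disc / score_gen are stand-ins for the two scoring pipelines."""
    p = np.array([score_disc(c) * score_gen(c) for c in candidates])
    return int(np.argmax(p))

# Dummy scorers: candidates closer to a "true" state score higher.
true_state = np.array([100.0, 50.0])
candidates = [np.array([130.0, 20.0]),
              np.array([101.0, 49.0]),
              np.array([90.0, 70.0])]
disc = lambda c: np.exp(-np.sum((c - true_state) ** 2) / 1e3)
gen = lambda c: 1.0 / (1.0 + np.linalg.norm(c - true_state))
best = track_frame(candidates, disc, gen)
# best == 1: the candidate nearest the true state wins
```

In the full method the winning candidate also seeds the next frame's particle propagation and template update.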
Claims (3)
1. A target tracking method based on a weighted sparse cooperation model, characterized by being carried out as follows:
Step 1: select a video sequence of f_max frames, and choose the tracked target from the first frame of image data;
Step 2: define the frame index f, and initialize f = 1;
Step 3: process the f-th frame of image data in the video sequence with a particle filter algorithm to obtain k_max target candidate boxes, and normalize each target candidate box;
Step 4: process the k_f-th normalized target candidate box in the f-th frame of image data with the discriminative algorithm based on weighted sparse representation, obtaining the discriminative score H_{k_f} of the k_f-th target candidate box:
Step 4.1: judge whether f = 1 holds; if so, assign f + 1 to f and return to Step 3; otherwise, go to Step 4.2;
Step 4.2: around the tracking result of the (f-1)-th frame of image data, select n_1 positive templates A^+ = [a_1^+, ..., a_{n_1}^+] and n_2 negative templates A^- = [a_1^-, ..., a_{n_2}^-], where a_i^+ denotes the i-th training sample among the positive templates, a_i^- the i-th among the negative templates, a_i^+, a_i^- ∈ R^q, and q is the vector dimension of the target candidate box; after normalization, merge the n_1 positive templates A^+ and the n_2 negative templates A^- into the dictionary of the discriminative model A = [A^+, A^-] ∈ R^{q×n}, with n = n_1 + n_2;
Step 4.3: with the dictionary A of the discriminative model as input to the discriminative algorithm, process the k_f-th target candidate box to obtain its globally optimal sparse coefficient α;
Step 4.4: split the globally optimal sparse coefficient α into a positive vector α^+ and a negative vector α^-, and compute the positive reconstruction error ε_+ = ||x - A^+α^+||² and the negative reconstruction error ε_- = ||x - A^-α^-||², where x is the vector corresponding to the k_f-th target candidate box;
Step 4.5: obtain the discriminative score of the k_f-th target candidate box with formula (1):
H_kf = exp(−(ε_+ − ε_−)/σ)   (1)

In formula (1), σ is an adjustment parameter;
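A minimal numpy sketch of steps 4.4-4.5, assuming the standard sparsity-based collaborative-model form of the reconstruction errors, ε_+ = ||x − A^+α^+||² and ε_− = ||x − A^−α^−||² (the claim leaves these formulas to the description):

```python
import numpy as np

def discriminative_score(x, A_pos, A_neg, alpha_pos, alpha_neg, sigma=1.0):
    """Formula (1): H = exp(-(eps_plus - eps_minus) / sigma).
    A candidate reconstructed well by the positive (foreground) templates
    and poorly by the negative (background) templates scores high."""
    eps_plus = float(np.sum((x - A_pos @ alpha_pos) ** 2))   # positive error
    eps_minus = float(np.sum((x - A_neg @ alpha_neg) ** 2))  # negative error
    return float(np.exp(-(eps_plus - eps_minus) / sigma))
```

A score above 1 indicates the positive templates explain the candidate better than the negative ones.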
Step 5: process the k_f-th target candidate frame in the f-th frame of image data with the generative algorithm of weighted sparse representation to obtain the generative score L_kf of the k_f-th candidate frame;
Step 5.1: input the f-th frame of image data;
Step 5.2: divide the input image data into M small blocks, and convert the gray values of each block into a gray-value vector of size p;
Step 5.3: process the gray-value vectors of the blocks where the tracking target lies in the first frame of image data with the K-means clustering algorithm to obtain the dictionary of the generative model D, where c is the number of cluster centres;
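Step 5.3 can be sketched with plain-numpy Lloyd iterations (scikit-learn's `KMeans` would serve equally); treating each patch as a row vector is an implementation choice:

```python
import numpy as np

def kmeans_dictionary(patches, c, iters=20, seed=0):
    """Cluster gray-value patch vectors with K-means; the c cluster
    centres form the generative dictionary D of step 5.3."""
    rng = np.random.default_rng(seed)
    D = patches[rng.choice(len(patches), c, replace=False)].astype(float)
    for _ in range(iters):
        # assign each patch to its nearest centre
        labels = np.argmin(((patches[:, None, :] - D[None]) ** 2).sum(-1), axis=1)
        for j in range(c):  # move each centre to the mean of its members
            if np.any(labels == j):
                D[j] = patches[labels == j].mean(axis=0)
    return D  # rows are cluster centres; transpose for a column dictionary
```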
Step 5.4: divide the k_f-th target candidate frame into m small image blocks with a sliding window;
Step 5.5: taking the dictionary D of the generative model as the input of the generative algorithm, process the i-th image block y_i in the k_f-th target candidate frame to obtain its globally optimal sparse coding coefficient β_i, and thus the globally optimal sparse coding coefficients of all m image blocks, 1 ≤ i ≤ m;
Step 5.6: splice the globally optimal sparse coding coefficients of the m image blocks together with formula (2) to obtain the sparse coding coefficient ρ_kf of the k_f-th target candidate frame:
ρ_kf = [β_1^T, β_2^T, …, β_i^T, …, β_m^T]   (2)

In formula (2), T denotes transposition;
Step 5.7: according to the sparse coding coefficient ρ_kf of the k_f-th target candidate frame, compute the generative score L_kf of the k_f-th candidate frame with formula (3).
In formula (3), the j-th element value of the sparse coding coefficient ρ_kf is compared with the j-th element value of the feature template; the feature template is the sparse coding coefficient obtained by processing the tracking target frame chosen in the first frame of image data through steps 5.4-5.6;
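Formula (3) itself is not reproduced in this text; the sketch below uses the histogram-intersection form common in sparsity-based collaborative trackers, which matches the element-wise comparison the claim describes — an assumption, not necessarily the patent's exact formula:

```python
import numpy as np

def generative_score(rho, psi, eps=1e-12):
    """Compare a candidate's spliced sparse code rho element-wise with
    the target feature template psi via histogram intersection
    (assumed form of formula (3))."""
    rho = np.asarray(rho, dtype=float)
    psi = np.asarray(psi, dtype=float)
    return float(np.minimum(rho, psi).sum() / (psi.sum() + eps))
```

The score is 1 when the candidate's code matches the template exactly and falls toward 0 as the codes diverge.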
Step 6: according to the discriminative score H_kf and the generative score L_kf of the k_f-th target candidate frame, obtain the final score HL_kf of the k_f-th candidate frame with formula (4):
HL_kf = H_kf · L_kf   (4)
Step 7: judge whether k_f < k_max holds; if so, assign k_f + 1 to k_f and return to step 4, so as to obtain the final scores of the k_max target candidate frames, the maximum of which is chosen as the tracking result of the f-th frame of image data; otherwise, assign f + 1 to f and return to step 3 until f > f_max, thereby realizing the tracking of the moving target.
2. The target tracking method based on a weighted sparse collaboration model according to claim 1, characterized in that step 4.3 is carried out according to the following procedure:
Step 4.3.1: define the objective function of the discriminative algorithm with formula (5):
min_α ||x − Aα||_2^2 + λ_1 ||W_1 α||_1   (5)

In formula (5), W_1 is the weight matrix of the discriminative algorithm and λ_1 is the adjustment parameter of the discriminative algorithm;
Step 4.3.2: initialize the iteration count t = 0 of the discriminative algorithm, randomly initialize the initial sparse coefficient α^(0) of the discriminative algorithm, and obtain the diagonalized weight matrix diag(W_1) with formula (6):

diag(W_1) = [dist(x, A_1), dist(x, A_2), …, dist(x, A_j), …, dist(x, A_n)]   (6)

In formula (6), A_j denotes the j-th column of the dictionary A of the discriminative model, 1 ≤ j ≤ n, n being the number of columns of the dictionary; dist(x, A_j) denotes the Euclidean distance between the vector x corresponding to the k_f-th target candidate frame and the j-th column of the dictionary A, with dist(x, A_j) = ||x − A_j||_s, where s is an adaptation parameter used to adjust the size of the weights;
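Formula (6) can be sketched as follows; interpreting ||·||_s as the Euclidean distance raised to the power s is an assumption reconciling the claim's two statements:

```python
import numpy as np

def distance_weights(x, A, s=1.0):
    """Formula (6): the weight for dictionary column j is its distance to
    the candidate vector x, so atoms far from the candidate are penalised
    more heavily in the weighted l1 term of formula (5)."""
    d = np.linalg.norm(x[:, None] - A, axis=0)  # Euclidean distance per column
    return d ** s
```

`np.diag(distance_weights(x, A))` then gives the weight matrix W_1.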
Step 4.3.3: obtain the diagonal matrix E_1^(t) of the discriminative algorithm at the t-th iteration with formula (7):

E_1^(t) = diag(√α_1^(t), √α_2^(t), …, √α_n^(t))   (7)

In formula (7), α_n^(t) is the n-th sparse coefficient value in the sparse coefficient α^(t) of the t-th iteration;
Step 4.3.4: obtain the sparse coefficient α^(t+1) of the discriminative algorithm at the (t+1)-th iteration with formula (8):

α^(t+1) = E_1^(t) [E_1^(t) A^T A E_1^(t) + 0.5 λ_1 W_1]^(−1) E_1^(t) A^T x   (8)

In formula (8), T denotes transposition;
Step 4.3.5: judge whether ||α^(t+1) − α^(t)|| < ε holds, where ε is a set coefficient residual threshold, or whether t > t_max holds, where t_max is the maximum number of iterations of the discriminative algorithm; if either holds, the sparse coefficient α^(t+1) of the (t+1)-th iteration is taken as the globally optimal sparse coefficient α of the k_f-th target candidate frame; otherwise, assign t + 1 to t and return to step 4.3.3.
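The claim-2 iteration, formulas (5)-(8), can be sketched in numpy as below; the uniform initialization and the absolute value inside the square root (guarding against small negative intermediates) are implementation choices not fixed by the claim:

```python
import numpy as np

def weighted_sparse_code(x, A, lam=0.001, s=1.0, t_max=50, eps=1e-6):
    """Minimise ||x - A a||_2^2 + lam ||W a||_1 by the claim-2 scheme:
    W from formula (6), E = diag(sqrt(a)) from formula (7), and the
    closed-form reweighted update of formula (8)."""
    n = A.shape[1]
    W = np.diag(np.linalg.norm(x[:, None] - A, axis=0) ** s)  # formula (6)
    alpha = np.full(n, 1.0 / n)                               # initial code
    for _ in range(t_max):
        E = np.diag(np.sqrt(np.abs(alpha)))                   # formula (7)
        M = E @ A.T @ A @ E + 0.5 * lam * W
        alpha_new = E @ np.linalg.solve(M, E @ A.T @ x)       # formula (8)
        if np.linalg.norm(alpha_new - alpha) < eps:           # step 4.3.5
            return alpha_new
        alpha = alpha_new
    return alpha
```

Because each update is a small linear solve, the per-candidate cost is dominated by forming A^T A.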
3. The target tracking method based on a weighted sparse collaboration model according to claim 1, characterized in that step 5.4 is carried out according to the following procedure:
Step 5.4.1: define the objective function of the generative algorithm with formula (9):
min_{β_i} ||y_i − Dβ_i||_2^2 + λ_2 ||W_2 β_i||_1   (9)

In formula (9), W_2 is the weight matrix of the generative algorithm and λ_2 is the adjustment parameter of the generative algorithm;
Step 5.4.2: initialize the iteration count t′ = 0 of the generative algorithm, randomly initialize the initial sparse coefficient β_i^(0) of the i-th image block y_i, and obtain the diagonalized weight matrix diag(W_2) with formula (10):

diag(W_2) = [dist(y_i, D_1), dist(y_i, D_2), …, dist(y_i, D_j), …, dist(y_i, D_c)]   (10)

In formula (10), D_j denotes the j-th column of the dictionary D of the generative model, 1 ≤ j ≤ c, c being the number of cluster centres; dist(y_i, D_j) denotes the Euclidean distance between the i-th image block y_i and the j-th column of the dictionary D, with dist(y_i, D_j) = ||y_i − D_j||_s, where s is an adaptation parameter used to adjust the size of the weights;
Step 5.4.3: obtain the diagonal matrix E_2^(t′) of the generative algorithm at the t′-th iteration with formula (11):

E_2^(t′) = diag(√β_i1^(t′), √β_i2^(t′), …, √β_ic^(t′))   (11)

In formula (11), β_ic^(t′) is the c-th sparse coefficient value in the sparse coefficient β_i^(t′) of the i-th image block y_i at the t′-th iteration;
Step 5.4.4: obtain the sparse coefficient β_i^(t′+1) of the i-th image block y_i at the (t′+1)-th iteration with formula (12):

β_i^(t′+1) = E_2^(t′) [E_2^(t′) D^T D E_2^(t′) + 0.5 λ_2 W_2]^(−1) E_2^(t′) D^T y_i   (12)

In formula (12), T denotes transposition;
Step 5.4.5: judge whether ||β_i^(t′+1) − β_i^(t′)|| < ε′ holds, where ε′ is another set coefficient residual threshold, or whether t′ > t′_max holds, where t′_max is the maximum number of iterations of the generative algorithm; if either holds, the sparse coefficient β_i^(t′+1) of the (t′+1)-th iteration is taken as the globally optimal sparse coding coefficient β_i of the i-th image block y_i; otherwise, assign t′ + 1 to t′ and return to step 5.4.3.
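The generative side mirrors the claim-2 solver, with dictionary D and weights from formula (10); the sketch below reuses that scheme per patch and splices the per-patch codes as in formula (2). The uniform initialization and the absolute value in the square root are implementation choices:

```python
import numpy as np

def weighted_patch_code(y, D, lam=0.001, s=1.0, t_max=50, eps=1e-6):
    """Claim-3 iteration for one patch y: W_2 from formula (10),
    E_2 = diag(sqrt(beta)) from formula (11), closed-form update (12)."""
    c = D.shape[1]
    W = np.diag(np.linalg.norm(y[:, None] - D, axis=0) ** s)   # formula (10)
    beta = np.full(c, 1.0 / c)
    for _ in range(t_max):
        E = np.diag(np.sqrt(np.abs(beta)))                     # formula (11)
        M = E @ D.T @ D @ E + 0.5 * lam * W
        beta_new = E @ np.linalg.solve(M, E @ D.T @ y)         # formula (12)
        if np.linalg.norm(beta_new - beta) < eps:              # step 5.4.5
            return beta_new
        beta = beta_new
    return beta

def candidate_code(patches, D, **kw):
    """Formula (2): concatenate the m per-patch codes into one vector rho."""
    return np.concatenate([weighted_patch_code(y, D, **kw) for y in patches])
```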
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710562703.7A CN107341479A (en) | 2017-07-11 | 2017-07-11 | Target tracking method based on weighted sparse cooperation model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107341479A true CN107341479A (en) | 2017-11-10 |
Family
ID=60218699
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710562703.7A Pending CN107341479A (en) | 2017-07-11 | 2017-07-11 | Target tracking method based on weighted sparse cooperation model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107341479A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102682452A (en) * | 2012-04-12 | 2012-09-19 | 西安电子科技大学 | Human movement tracking method based on combination of production and discriminant |
Non-Patent Citations (5)
Title |
---|
JUNCHI YAN ET AL.: "Weighted sparse coding residual minimization for visual tracking", 《2011 VISUAL COMMUNICATIONS AND IMAGE PROCESSING》 * |
LAI WEI ET AL.: "Weighted discriminative sparsity preserving embedding for face recognition", 《KNOWLEDGE-BASED SYSTEMS》 * |
NIANYI LI ET AL.: "A Weighted Sparse Coding Framework for Saliency Detection", 《2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
WEI ZHONG ET AL.: "Robust Object Tracking via Sparsity-based Collaborative Model", 《2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
KONG CHENCHEN: "Research on face recognition method based on weighted group sparse representation", 《China Masters' Theses Full-text Database, Information Science and Technology》 *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108520497A (en) * | 2018-03-15 | 2018-09-11 | 华中科技大学 | Image restoration based on distance weighted sparse expression priori with match integral method |
CN108520497B (en) * | 2018-03-15 | 2020-08-04 | 华中科技大学 | Image restoration and matching integrated method based on distance weighted sparse expression prior |
CN110910424A (en) * | 2019-11-18 | 2020-03-24 | 北京拙河科技有限公司 | Target tracking method and device |
CN111640138A (en) * | 2020-05-28 | 2020-09-08 | 济南博观智能科技有限公司 | Target tracking method, device, equipment and storage medium |
CN111640138B (en) * | 2020-05-28 | 2023-10-27 | 济南博观智能科技有限公司 | Target tracking method, device, equipment and storage medium |
CN112150513A (en) * | 2020-09-27 | 2020-12-29 | 中国人民解放军海军工程大学 | Target tracking algorithm based on sparse identification minimum spanning tree |
CN112837342A (en) * | 2021-02-04 | 2021-05-25 | 闽南师范大学 | Target tracking method, terminal equipment and storage medium |
CN112837342B (en) * | 2021-02-04 | 2023-04-25 | 闽南师范大学 | Target tracking method, terminal equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107341479A (en) | Target tracking method based on weighted sparse cooperation model | |
Liu et al. | Study on SVM compared with the other text classification methods | |
CN103605972B (en) | Non-restricted environment face verification method based on block depth neural network | |
Cai et al. | Facial expression recognition method based on sparse batch normalization CNN | |
CN108304826A (en) | Facial expression recognizing method based on convolutional neural networks | |
CN107122712B (en) | Palm print image identification method based on CNN and bidirectional VLAD | |
CN106651915B (en) | The method for tracking target of multi-scale expression based on convolutional neural networks | |
CN109241995B (en) | Image identification method based on improved ArcFace loss function | |
CN106845499A (en) | A kind of image object detection method semantic based on natural language | |
CN107122809A (en) | Neural network characteristics learning method based on image own coding | |
CN103514443B (en) | A kind of single sample recognition of face transfer learning method based on LPP feature extraction | |
CN105760821A (en) | Classification and aggregation sparse representation face identification method based on nuclear space | |
CN103778414A (en) | Real-time face recognition method based on deep neural network | |
CN113032613B (en) | Three-dimensional model retrieval method based on interactive attention convolution neural network | |
CN106503672A (en) | A kind of recognition methods of the elderly's abnormal behaviour | |
CN109934158A (en) | Video feeling recognition methods based on local strengthening motion history figure and recursive convolution neural network | |
CN110516098A (en) | Image labeling method based on convolutional neural networks and binary coding feature | |
CN112183435A (en) | Two-stage hand target detection method | |
CN107169117A (en) | A kind of manual draw human motion search method based on autocoder and DTW | |
CN107578056A (en) | A kind of manifold learning system integrated classical model and be used for sample dimensionality reduction | |
CN108520213A (en) | A kind of face beauty prediction technique based on multiple dimensioned depth | |
CN106778501A (en) | Video human face ONLINE RECOGNITION method based on compression tracking with IHDR incremental learnings | |
CN110110800A (en) | Automatic image marking method, device, equipment and computer readable storage medium | |
CN104994366A (en) | FCM video key frame extracting method based on feature weighing | |
CN103294832A (en) | Motion capture data retrieval method based on feedback study |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20171110 |