CN103839273A - Real-time detection tracking frame and tracking method based on compressed sensing feature selection - Google Patents


Publication number
CN103839273A
Application CN201410113641.8A; granted publication CN103839273B
Authority
CN
China
Prior art keywords: sample, represent, tracking, lambda, frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410113641.8A
Other languages
Chinese (zh)
Other versions
CN103839273B (en)
Inventor
何发智
李康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201410113641.8A
Publication of CN103839273A
Application granted; publication of CN103839273B
Legal status: Expired - Fee Related

Abstract

The invention provides a real-time detection and tracking framework and tracking method based on compressed-sensing feature selection. The framework and method select among compressed features and use only the sample features with high discriminative power for classification. They achieve real-time tracking, avoid the tracking failures caused by selecting wrong features, effectively suppress the influence of bad features on the tracking result, and markedly improve both tracking speed and tracking accuracy.

Description

Real-time detection and tracking framework and tracking method based on compressed-sensing feature selection
Technical field
The present invention relates to the field of target tracking, and in particular to a real-time detection and tracking framework and tracking method based on compressed-sensing feature selection.
Background technology
Target tracking is a key area of computer vision, with important applications in military, medical, surveillance, and human-computer interaction settings. Many methods have been proposed in recent years to address target tracking, but because of target deformation, illumination changes, occlusion, and similar causes, target tracking remains difficult.
Most mainstream real-time tracking methods are adaptive. Tracking methods generally fall into two classes: generative methods and discriminative methods. A generative method learns an appearance model of the target, searches the region where the target may be located, and takes the region that the learned model reconstructs with least error as the target position. To handle target deformation, the WSL and IVT methods were proposed in turn. More recently, sparse-representation methods have been used to handle partial occlusion of the target. However, these generative models make no use of the background around the target, even though that background information helps separate the target from the background at detection time.
A discriminative model treats target tracking as a detection problem of separating the target from the background. In a discriminative model, features with strong separating power can effectively improve tracking accuracy. Boosting methods, which combine multiple weak classifiers into a strong classifier, are currently in wide use. Many boosting methods, however, use only information about the target itself and not about its background, so once the target is detected inaccurately, subsequent tracking accuracy degrades and tracking eventually fails. More recently, the MIL method showed that, using background information, high-dimensional features can be randomly projected to low-dimensional features for classification, effectively separating the target region from the background. But randomly projected features are not necessarily suited to classification, and classifying with poorly discriminative features harms the tracking result.
Summary of the invention
To solve the above technical problems, the present invention proposes a real-time detection and tracking framework and tracking method based on compressed-sensing feature selection.
The technical solution of the present invention is a real-time detection and tracking framework and tracking method based on compressed-sensing feature selection, comprising the following steps:
Step 1: Initialize parameters and determine the target of frame t, denoted l_t*. The target location is a rectangular box framing the target to be tracked; l_t* comprises four parameters: the row coordinate and the column coordinate of the target in the frame, its width, and its height.
Step 2: Let l(x) = {row(x), col(x)} denote the position of a sample x, comprising the row and column coordinates of the sample centre. Within radius s pixels around l_t*, collect the positive sample set X_t^s = { x_t : ||l(x_t) − l(x_t*)|| ≤ s }; in the annulus between radii r and β pixels around l_t*, collect the negative sample set X_t^{r,β} = { x_t : r ≤ ||l(x_t*) − l(x_t)|| ≤ β }, where s, r, and β are empirical parameters in pixels.
Step 3: For each sample x_t in the positive and negative sample sets, extract n Haar-like features f(x_t) = {f_1(x_t), f_2(x_t), ..., f_n(x_t)}.
Step 4: Train weak classifiers with the features of the positive and negative samples.
Step 5: Define a classification margin to evaluate each weak classifier, and select weak classifiers for classification.
Step 6: Compose the strong classifier H_t(x) of frame t from the selected weak classifiers.
Step 7: In frame t+1, collect test samples X^λ within radius λ around l_t*, and classify them with H_t(x).
Step 8: Take the sample with the best classification result as the target position l_{t+1}* in frame t+1.
Step 9: If frame t+1 is not the last frame, set t = t+1 and return to Step 1.
Moreover, in said step 2, s = 4, r ranges over 6–10, and β ranges over 15–30.
Moreover, step 3 further comprises the following steps:
3.1 Set up a template set T = {h_ij}, i = 1..w, j = 1..h, whose elements h_ij are rectangle convolution templates. For a sample of width w and height h, convolving with the template set T generates m = (wh)^2 features in total.
3.2 Compress the m features of a sample with a random matrix R ∈ R^{n×m}, where n is the dimension of the compressed vector. Each element r_ij of R satisfies
r_ij = √ζ × { 1 with probability 1/(2ζ); 0 with probability 1 − 1/ζ; −1 with probability 1/(2ζ) }.
Use this matrix to compress the high-dimensional features of sample x. The compressed features are f(x_t) = {f_1(x_t), f_2(x_t), ..., f_n(x_t)} = R·T(x_t), where T(x_t) denotes the vector of m features extracted by convolving sample x_t with the template set T, and the empirical parameter ζ is the degree of sparsity.
Moreover, in said step 3.2, the sparsity parameter is ζ = 4.
Moreover, step 4 further comprises the following steps:
4.1 Assume the features of the positive and negative samples are normally distributed: positive-sample features follow the normal distribution N(μ^1, σ^1) with feature mean μ^1 and feature standard deviation σ^1, written p(f(x) | y=1) ~ N(μ^1, σ^1); negative-sample features follow N(μ^0, σ^0), written p(f(x) | y=0) ~ N(μ^0, σ^0). The parameters μ^1, σ^1, μ^0, and σ^0 are obtained from the positive and negative samples.
The positive-sample parameters are updated as
μ_t^1 = λ μ_{t−1}^1 + (1 − λ) μ̃_t^1
σ_t^1 = sqrt( λ(σ_{t−1}^1)^2 + (1 − λ)(σ̃_t^1)^2 + λ(1 − λ)(μ_{t−1}^1 − μ̃_t^1)^2 )
where λ is the learning rate, μ̃_t^1 and σ̃_t^1 are the positive-sample feature mean and standard deviation measured directly at time t, and μ_{t−1}^1 and σ_{t−1}^1 are the learned positive-sample feature mean and standard deviation at time t−1.
The negative-sample parameters are updated as
μ_t^0 = λ μ_{t−1}^0 + (1 − λ) μ̃_t^0
σ_t^0 = sqrt( λ(σ_{t−1}^0)^2 + (1 − λ)(σ̃_t^0)^2 + λ(1 − λ)(μ_{t−1}^0 − μ̃_t^0)^2 )
where μ̃_t^0 and σ̃_t^0 are the negative-sample feature mean and standard deviation measured directly at time t, and μ_{t−1}^0 and σ_{t−1}^0 are the learned values at time t−1.
4.2 The weak classifier is the log-likelihood ratio h_k(x) = log( p(f_k(x) | y=1) / p(f_k(x) | y=0) ), where h_k(x) denotes classifying sample x with its k-th feature f_k(x).
Moreover, the learning rate λ in said steps 4.1 and 4.2 is 0.85.
Moreover, step 5 comprises the following steps:
5.1 Define margin(h_k) = (1/n_p) Σ_{i=1}^{n_p} h_k(x_i) − (1/n_n) Σ_{j=1}^{n_n} h_k(x_j), where n_p is the number of positive samples, n_n is the number of negative samples, x_i ∈ X_t^s, and x_j ∈ X_t^{r,β}.
5.2 Select the K features with the largest margin values, {i_1, ..., i_K}, to compose the strong classifier, where i_k indexes a feature selected for classification.
5.3 The strong classifier of frame t is defined as H_t(x) = Σ_{k=1}^{K} h_{i_k}(x).
Moreover, step 7 comprises the following steps:
7.1 In frame t+1, collect test samples X^λ = { x : ||l(x) − l(x_t*)|| ≤ λ } within radius λ pixels around l_t*.
7.2 Obtain the classification result H_t(x) of each sample.
7.3 The target is x* = argmax_{x ∈ X^λ} H_t(x), and the position of the target in frame t+1 is l_{t+1}* = l(x*).
Preferably, in said step 2 the number of negative samples collected is 1.5 times the number of positive samples.
The beneficial effects of the invention are as follows. Compared with the results of other currently popular tracking methods, the present method (CFS) outperforms them in accuracy. It achieves real-time tracking, with a speed clearly better than most trackers, and its tracking accuracy is also clearly better than most trackers. Finally, it avoids the tracking failures caused by selecting wrong features and effectively suppresses the influence of bad features on the tracking result.
Brief description of the drawings
Fig. 1 is process flow diagram of the present invention;
Fig. 2 is Margin schematic diagram of the present invention;
Fig. 3 compares results on the test video sequence david.
Embodiment
To overcome the above deficiencies of the prior art, the invention provides a real-time detection and tracking framework and tracking method based on compressed-sensing feature selection. The method selects among the compressed features and classifies using only the sample features with high discriminative power, which markedly improves both tracking speed and tracking accuracy.
The invention is described in further detail below with reference to the embodiment and the accompanying drawings, but embodiments of the invention are not limited thereto.
A real-time detection and tracking framework and tracking method based on compressed-sensing feature selection comprises the following steps:
Step 1: Initialize parameters and determine the target of frame t, denoted l_t*. The target location is a rectangular box framing the target to be tracked; l_t* comprises four parameters: the row coordinate and the column coordinate of the target in the frame, its width, and its height.
Step 2: Let l(x) = {row(x), col(x)} denote the position of a sample x, comprising the row and column coordinates of the sample centre. Within radius s pixels around l_t*, collect the positive sample set X_t^s = { x_t : ||l(x_t) − l(x_t*)|| ≤ s }; in the annulus between radii r and β pixels around l_t*, collect the negative sample set X_t^{r,β} = { x_t : r ≤ ||l(x_t*) − l(x_t)|| ≤ β }. Here s, r, and β are empirical parameters in pixels; in the present invention s = 4, r ranges over 6–10, and β over 15–30. The number of negative samples collected is generally 1.5 times the number of positive samples; this test collects 70 negative samples.
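The sampling rule of Step 2 can be sketched as follows. This is a minimal illustration under assumed details (Euclidean distance on integer pixel offsets, rejection sampling for the annulus); the function name is hypothetical, and the defaults r = 8, β = 20 are simply values inside the ranges stated above.

```python
import numpy as np

# Hypothetical sketch of step 2: positive samples within radius s of the
# target centre, negative samples in the annulus [r, beta], in pixels.
def sample_positions(center, s=4, r=8, beta=20, n_neg=70, rng=None):
    rng = np.random.default_rng(rng)
    cy, cx = center
    # positives: every integer pixel offset with Euclidean norm <= s
    pos = [(cy + dy, cx + dx)
           for dy in range(-s, s + 1)
           for dx in range(-s, s + 1)
           if dy * dy + dx * dx <= s * s]
    # negatives: rejection-sample offsets with norm in [r, beta]
    neg = []
    while len(neg) < n_neg:
        dy, dx = rng.integers(-beta, beta + 1, size=2)
        if r * r <= dy * dy + dx * dx <= beta * beta:
            neg.append((cy + dy, cx + dx))
    return pos, neg

pos, neg = sample_positions((100, 100))
```

With the defaults, the positive set is the full disc of radius 4 around the centre and 70 negatives are drawn uniformly from the annulus.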
Step 3: For each sample x_t in the positive and negative sample sets, extract n Haar-like features f(x_t) = {f_1(x_t), f_2(x_t), ..., f_n(x_t)}. Step 3 further comprises the following steps.
A random matrix R ∈ R^{n×m} can map a high-dimensional vector x ∈ R^m to a low-dimensional vector space, v = Rx with v ∈ R^n, where n << m. If the random matrix R satisfies the restricted isometry property (RIP), the mapped vectors preserve the positional relationships of the original vectors in their vector space. A Gaussian matrix satisfies the RIP condition, but a Gaussian matrix is not sparse (the number of zero elements in the matrix is proportional to the matrix's sparsity), which increases the amount of computation if used for compression.
3.1 Set up a template set T = {h_ij}, i = 1..w, j = 1..h, whose elements h_ij are rectangle convolution templates. For a sample of width w and height h, convolving with the template set T generates m = (wh)^2 features in total.
3.2 Compress the m features of a sample with the random matrix R ∈ R^{n×m}, where n is the dimension of the compressed vector. Each element r_ij of R satisfies
r_ij = √ζ × { 1 with probability 1/(2ζ); 0 with probability 1 − 1/ζ; −1 with probability 1/(2ζ) }.
Use this matrix to compress the high-dimensional features of sample x. The compressed features are f(x_t) = {f_1(x_t), f_2(x_t), ..., f_n(x_t)} = R·T(x_t), where T(x_t) denotes the vector of m features extracted by convolving sample x_t with the template set T. The empirical parameter ζ is the degree of sparsity; in the present invention ζ = 4.
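Step 3.2's sparse random projection can be sketched as below. The helper names are hypothetical; the matrix entries follow the sparse {+√ζ, 0, −√ζ} distribution given above, and the raw feature vector T(x) is represented here simply as a length-m array.

```python
import numpy as np

# Hypothetical sketch of step 3.2: build the sparse random matrix R (n x m)
# with entries sqrt(zeta) * {+1, 0, -1} taken with probabilities
# {1/(2*zeta), 1 - 1/zeta, 1/(2*zeta)}, then compress m raw features to n.
def sparse_projection(n, m, zeta=4, rng=None):
    rng = np.random.default_rng(rng)
    vals = rng.choice([1.0, 0.0, -1.0], size=(n, m),
                      p=[1 / (2 * zeta), 1 - 1 / zeta, 1 / (2 * zeta)])
    return np.sqrt(zeta) * vals

def compress(R, raw_features):
    # f(x) = R @ T(x): n compressed features from m convolution responses
    return R @ raw_features

R = sparse_projection(n=50, m=400, zeta=4, rng=0)
f = compress(R, np.ones(400))
```

With ζ = 4, roughly three quarters of the entries of R are zero, so the projection needs only about m/ζ multiplications per compressed feature.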
Step 4: Train weak classifiers with the features of the positive and negative samples. Step 4 further comprises the following steps.
4.1 Assume the features of the positive and negative samples are normally distributed: positive-sample features follow the normal distribution N(μ^1, σ^1) with feature mean μ^1 and feature standard deviation σ^1, written p(f(x) | y=1) ~ N(μ^1, σ^1); negative-sample features follow N(μ^0, σ^0), written p(f(x) | y=0) ~ N(μ^0, σ^0). The parameters μ^1, σ^1, μ^0, and σ^0 are obtained from the positive and negative samples.
The positive-sample parameters are updated as
μ_t^1 = λ μ_{t−1}^1 + (1 − λ) μ̃_t^1
σ_t^1 = sqrt( λ(σ_{t−1}^1)^2 + (1 − λ)(σ̃_t^1)^2 + λ(1 − λ)(μ_{t−1}^1 − μ̃_t^1)^2 )
where λ is the learning rate, μ̃_t^1 and σ̃_t^1 are the positive-sample feature mean and standard deviation measured directly at time t, and μ_{t−1}^1 and σ_{t−1}^1 are the learned positive-sample feature mean and standard deviation at time t−1.
The negative-sample parameters are updated as
μ_t^0 = λ μ_{t−1}^0 + (1 − λ) μ̃_t^0
σ_t^0 = sqrt( λ(σ_{t−1}^0)^2 + (1 − λ)(σ̃_t^0)^2 + λ(1 − λ)(μ_{t−1}^0 − μ̃_t^0)^2 )
where μ̃_t^0 and σ̃_t^0 are the negative-sample feature mean and standard deviation measured directly at time t, and μ_{t−1}^0 and σ_{t−1}^0 are the learned values at time t−1.
4.2 The weak classifier is the log-likelihood ratio h_k(x) = log( p(f_k(x) | y=1) / p(f_k(x) | y=0) ), where h_k(x) denotes classifying sample x with its k-th feature f_k(x).
The learning rate λ is taken as 0.85.
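Steps 4.1–4.2 can be sketched as a per-feature Gaussian model with the running update above. The class and method names are illustrative, equal class priors are assumed in the log-likelihood ratio, and the initial parameters (mean 0, standard deviation 1) are an assumption, not from the patent.

```python
import numpy as np

# Hypothetical sketch of step 4: one Gaussian model per class for a single
# feature, updated with learning rate lam, plus the log-likelihood-ratio
# weak classifier h_k(x).
class GaussianWeak:
    def __init__(self, lam=0.85):
        self.lam = lam
        self.mu1 = self.mu0 = 0.0     # assumed initial means
        self.sig1 = self.sig0 = 1.0   # assumed initial standard deviations

    def update(self, pos_feats, neg_feats):
        lam = self.lam
        for mu_attr, sig_attr, feats in (("mu1", "sig1", pos_feats),
                                         ("mu0", "sig0", neg_feats)):
            mu_new, sig_new = feats.mean(), feats.std()   # direct estimates
            mu_old, sig_old = getattr(self, mu_attr), getattr(self, sig_attr)
            # mu_t = lam*mu_{t-1} + (1-lam)*mu_tilde
            mu = lam * mu_old + (1 - lam) * mu_new
            # sig_t = sqrt(lam*sig_{t-1}^2 + (1-lam)*sig_tilde^2
            #              + lam*(1-lam)*(mu_{t-1} - mu_tilde)^2)
            sig = np.sqrt(lam * sig_old ** 2 + (1 - lam) * sig_new ** 2
                          + lam * (1 - lam) * (mu_old - mu_new) ** 2)
            setattr(self, mu_attr, mu)
            setattr(self, sig_attr, sig)

    def h(self, f):
        # log p(f | y=1) - log p(f | y=0) under the two Gaussians
        def logpdf(v, mu, sig):
            return -0.5 * ((v - mu) / sig) ** 2 - np.log(sig)
        return logpdf(f, self.mu1, self.sig1) - logpdf(f, self.mu0, self.sig0)
```

After a few updates on well-separated classes, h returns a positive score for target-like feature values and a negative score for background-like values.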
Step 5: Define a classification margin to evaluate each weak classifier, and select weak classifiers for classification. Step 5 comprises the following steps:
5.1 Define margin(h_k) = (1/n_p) Σ_{i=1}^{n_p} h_k(x_i) − (1/n_n) Σ_{j=1}^{n_n} h_k(x_j), where n_p is the number of positive samples, n_n is the number of negative samples, x_i ∈ X_t^s, and x_j ∈ X_t^{r,β}.
As shown in Fig. 2, circles represent positive samples and triangles represent negative samples; the abscissa is the first feature of a sample and the ordinate the second. Classifying with the second feature separates the positive and negative sample regions better. A weak classifier with a larger margin has stronger separating power.
5.2 Select the K features with the largest margin values, {i_1, ..., i_K}, to compose the strong classifier, where i_k indexes a feature selected for classification.
5.3 The strong classifier of frame t is defined as H_t(x) = Σ_{k=1}^{K} h_{i_k}(x).
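Steps 5.1–5.3 can be sketched as below, assuming the weak-classifier responses have already been evaluated into matrices (rows = samples, columns = features); the function names are illustrative.

```python
import numpy as np

# Hypothetical sketch of steps 5.1-5.3: margin(h_k) is the mean weak
# response on positives minus the mean on negatives; the K largest-margin
# features form the strong classifier H_t(x) = sum of the selected h_k(x).
def select_features(H_pos, H_neg, K):
    # H_pos: (n_p, n) responses h_k(x_i) on positives; H_neg: (n_n, n)
    margin = H_pos.mean(axis=0) - H_neg.mean(axis=0)
    return np.argsort(margin)[::-1][:K]          # indices i_1 .. i_K

def strong_classify(H_test, selected):
    # H_t(x) = sum_{k=1..K} h_{i_k}(x) for each test sample
    return H_test[:, selected].sum(axis=1)
```

With K much smaller than n, only the most discriminative compressed features contribute to the strong classifier, which is the feature-selection idea of the method.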
Step 6: Compose the strong classifier H_t(x) of frame t from the selected weak classifiers.
Step 7: In frame t+1, collect test samples X^λ within radius λ around l_t*, and classify them with H_t(x). Step 7 comprises the following steps:
7.1 In frame t+1, collect test samples X^λ = { x : ||l(x) − l(x_t*)|| ≤ λ } within radius λ pixels around l_t*.
7.2 Obtain the classification result H_t(x) of each sample.
7.3 The target is x* = argmax_{x ∈ X^λ} H_t(x), and the position of the target in frame t+1 is l_{t+1}* = l(x*).
Step 8: Take the sample with the best classification result as the target position l_{t+1}* in frame t+1.
Step 9: If frame t+1 is not the last frame, set t = t+1 and return to Step 1.
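Steps 7.1–7.3 reduce to scoring every candidate position with the strong classifier and taking the argmax. A minimal sketch with hypothetical names, where the candidate features are assumed to have been extracted already:

```python
import numpy as np

# Hypothetical sketch of steps 7-8: score each candidate within radius
# lambda of the previous target and take the argmax as the new position.
def detect(candidates, features, strong):
    # candidates: list of (row, col); features: per-candidate feature
    # vectors; strong: callable mapping a feature vector to H_t(x)
    scores = np.array([strong(f) for f in features])
    best = int(np.argmax(scores))            # x* = argmax_x H_t(x)
    return candidates[best], scores[best]    # l_{t+1}* = l(x*)

cands = [(10, 10), (12, 11), (9, 13)]
feats = np.array([[0.2], [0.9], [0.1]])
pos_best, score_best = detect(cands, feats, lambda f: float(f[0]))
```

The winning candidate's position becomes the target location for frame t+1, after which the loop returns to the sampling and training steps.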
As shown in Table 1, the real-time detection and tracking framework and tracking method based on compressed-sensing feature selection (CFS) performs well on each tracking test sequence, with a higher average accuracy than the other methods, where CT is the compressive tracking method and FRAG the fragment-based tracking method. This shows that the present method (CFS) outperforms the other methods in tracking accuracy.
Table 1: Tracking accuracy
Test sequence CT FRAG CFS
bolt 0.0067 0.1233 0.8523
david 0.3843 0.0955 0.7622
Tiger 0.3894 0.3037 0.5473
FaceOcc1 0.8789 0.9899 0.9092
FaceOcc2 0.6995 0.7217 0.6897
Fish 0.8739 0.5294 0.6933
Jumping 0.0064 0.8371 0.8037
Tigger2 0.4188 0.1406 0.6423
Shaking 0.0411 0.0877 0.5863
average 0.411 0.425423 0.7207

Claims (9)

1. A real-time detection and tracking framework and tracking method based on compressed-sensing feature selection, characterized by comprising the following steps:
Step 1: initialize parameters and determine the target of frame t, denoted l_t*; the target location is a rectangular box framing the target to be tracked; l_t* comprises four parameters: the row coordinate and the column coordinate of the target in the frame, its width, and its height;
Step 2: let l(x) = {row(x), col(x)} denote the position of a sample x, comprising the row and column coordinates of the sample centre; within radius s pixels around l_t*, collect the positive sample set X_t^s = { x_t : ||l(x_t) − l(x_t*)|| ≤ s }; in the annulus between radii r and β pixels around l_t*, collect the negative sample set X_t^{r,β} = { x_t : r ≤ ||l(x_t*) − l(x_t)|| ≤ β }, where s, r, and β are empirical parameters in pixels;
Step 3: for each sample x_t in the positive and negative sample sets, extract n Haar-like features f(x_t) = {f_1(x_t), f_2(x_t), ..., f_n(x_t)};
Step 4: train weak classifiers with the features of the positive and negative samples;
Step 5: define a classification margin to evaluate each weak classifier, and select weak classifiers for classification;
Step 6: compose the strong classifier H_t(x) of frame t from the selected weak classifiers;
Step 7: in frame t+1, collect test samples X^λ within radius λ around l_t*, and classify them with H_t(x);
Step 8: take the sample with the best classification result as the target position l_{t+1}* in frame t+1;
Step 9: if frame t+1 is not the last frame, set t = t+1 and return to step 1.
2. The real-time detection and tracking framework and tracking method based on compressed-sensing feature selection according to claim 1, characterized in that: in said step 2, s = 4, r ranges over 6–10, and β ranges over 15–30.
3. The real-time detection and tracking framework and tracking method based on compressed-sensing feature selection according to claim 1, characterized in that said step 3 further comprises:
3.1 setting up a template set T = {h_ij}, i = 1..w, j = 1..h, whose elements h_ij are rectangle convolution templates; for a sample of width w and height h, convolving with the template set T generates m = (wh)^2 features in total;
3.2 compressing the m features of a sample with a random matrix R ∈ R^{n×m}, where n is the dimension of the compressed vector and each element r_ij of R satisfies
r_ij = √ζ × { 1 with probability 1/(2ζ); 0 with probability 1 − 1/ζ; −1 with probability 1/(2ζ) };
using this matrix to compress the high-dimensional features of sample x, the compressed features being f(x_t) = {f_1(x_t), f_2(x_t), ..., f_n(x_t)} = R·T(x_t), where T(x_t) denotes the vector of m features extracted by convolving sample x_t with the template set T, and the empirical parameter ζ is the degree of sparsity.
4. The real-time detection and tracking framework and tracking method based on compressed-sensing feature selection according to claim 3, characterized in that: in said step 3.2, the sparsity parameter ζ = 4.
5. The real-time detection and tracking framework and tracking method based on compressed-sensing feature selection according to claim 1, characterized in that said step 4 further comprises:
4.1 assuming the features of the positive and negative samples are normally distributed: positive-sample features following the normal distribution N(μ^1, σ^1) with feature mean μ^1 and feature standard deviation σ^1, written p(f(x) | y=1) ~ N(μ^1, σ^1); negative-sample features following N(μ^0, σ^0), written p(f(x) | y=0) ~ N(μ^0, σ^0); the parameters μ^1, σ^1, μ^0, and σ^0 being obtained from the positive and negative samples;
the positive-sample parameters being updated as
μ_t^1 = λ μ_{t−1}^1 + (1 − λ) μ̃_t^1
σ_t^1 = sqrt( λ(σ_{t−1}^1)^2 + (1 − λ)(σ̃_t^1)^2 + λ(1 − λ)(μ_{t−1}^1 − μ̃_t^1)^2 )
where λ is the learning rate, μ̃_t^1 and σ̃_t^1 are the positive-sample feature mean and standard deviation measured directly at time t, and μ_{t−1}^1 and σ_{t−1}^1 are the learned positive-sample feature mean and standard deviation at time t−1;
the negative-sample parameters being updated as
μ_t^0 = λ μ_{t−1}^0 + (1 − λ) μ̃_t^0
σ_t^0 = sqrt( λ(σ_{t−1}^0)^2 + (1 − λ)(σ̃_t^0)^2 + λ(1 − λ)(μ_{t−1}^0 − μ̃_t^0)^2 )
where μ̃_t^0 and σ̃_t^0 are the negative-sample feature mean and standard deviation measured directly at time t, and μ_{t−1}^0 and σ_{t−1}^0 are the learned values at time t−1;
4.2 the weak classifier being the log-likelihood ratio h_k(x) = log( p(f_k(x) | y=1) / p(f_k(x) | y=0) ), where h_k(x) denotes classifying sample x with its k-th feature f_k(x).
6. The real-time detection and tracking framework and tracking method based on compressed-sensing feature selection according to claim 5, characterized in that: the learning rate λ in said steps 4.1 and 4.2 is 0.85.
7. The real-time detection and tracking framework and tracking method based on compressed-sensing feature selection according to claim 1, characterized in that said step 5 comprises:
5.1 defining margin(h_k) = (1/n_p) Σ_{i=1}^{n_p} h_k(x_i) − (1/n_n) Σ_{j=1}^{n_n} h_k(x_j), where n_p is the number of positive samples, n_n is the number of negative samples, x_i ∈ X_t^s, and x_j ∈ X_t^{r,β};
5.2 selecting the K features with the largest margin values, {i_1, ..., i_K}, to compose the strong classifier, where i_k indexes a feature selected for classification;
5.3 the strong classifier of frame t being defined as H_t(x) = Σ_{k=1}^{K} h_{i_k}(x).
8. The real-time detection and tracking framework and tracking method based on compressed-sensing feature selection according to claim 1, characterized in that said step 7 comprises:
7.1 in frame t+1, collecting test samples X^λ = { x : ||l(x) − l(x_t*)|| ≤ λ } within radius λ pixels around l_t*;
7.2 obtaining the classification result H_t(x) of each sample;
7.3 the target being x* = argmax_{x ∈ X^λ} H_t(x), and the position of the target in frame t+1 being l_{t+1}* = l(x*).
9. The real-time detection and tracking framework and tracking method based on compressed-sensing feature selection according to claim 1, characterized in that: in said step 2 the number of negative samples collected is 1.5 times the number of positive samples.
CN201410113641.8A 2014-03-25 2014-03-25 Real-time detection tracking frame and tracking method based on compressed sensing feature selection Expired - Fee Related CN103839273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410113641.8A CN103839273B (en) 2014-03-25 2014-03-25 Real-time detection tracking frame and tracking method based on compressed sensing feature selection


Publications (2)

Publication Number Publication Date
CN103839273A true CN103839273A (en) 2014-06-04
CN103839273B CN103839273B (en) 2017-02-22

Family

ID=50802739

Country Status (1)

Country Link
CN (1) CN103839273B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020133499A1 (en) * 2001-03-13 2002-09-19 Sean Ward System and method for acoustic fingerprinting
CN103325125A (en) * 2013-07-03 2013-09-25 北京工业大学 Moving target tracking method based on improved multi-example learning algorithm
CN103345735A (en) * 2013-07-16 2013-10-09 上海交通大学 Compressed space-time multi-sensor fusion tracking method based on Kalman filter

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200216A (en) * 2014-09-02 2014-12-10 武汉大学 High-speed moving target tracking algorithm for multi-feature extraction and step-wise refinement
CN104200493A (en) * 2014-09-05 2014-12-10 武汉大学 Similarity measurement based real-time target tracking algorithm
CN104200493B (en) * 2014-09-05 2017-02-01 武汉大学 Similarity measurement based real-time target tracking algorithm
CN104809466A (en) * 2014-11-28 2015-07-29 安科智慧城市技术(中国)有限公司 Method and device for detecting specific target rapidly
CN104680554A (en) * 2015-01-08 2015-06-03 深圳大学 SURF-based compression tracing method and system
CN104680554B (en) * 2015-01-08 2017-10-31 深圳大学 Compression tracking and system based on SURF
CN105427337A (en) * 2015-10-30 2016-03-23 西北工业大学 Time-delay video sequence motor cell tracking method based on compression perception
CN106023257A (en) * 2016-05-26 2016-10-12 南京航空航天大学 Target tracking method based on rotor UAV platform
CN106023257B (en) * 2016-05-26 2018-10-12 南京航空航天大学 A kind of method for tracking target based on rotor wing unmanned aerial vehicle platform
CN106651912A (en) * 2016-11-21 2017-05-10 广东工业大学 Compressed sensing-based robust target tracking method
CN106815862A (en) * 2017-01-24 2017-06-09 武汉大学 A kind of target tracking algorism based on convolution contour feature
CN106815862B (en) * 2017-01-24 2020-03-10 武汉大学 Target tracking method based on convolution contour features

Similar Documents

Publication Publication Date Title
CN103839273A (en) Real-time detection tracking frame and tracking method based on compressed sensing feature selection
CN104537647B (en) A kind of object detection method and device
CN102903122B (en) Video object tracking method based on feature optical flow and online ensemble learning
CN109800628A (en) A kind of network structure and detection method for reinforcing SSD Small object pedestrian detection performance
CN105069472A (en) Vehicle detection method based on convolutional neural network self-adaption
CN105488456A (en) Adaptive rejection threshold adjustment subspace learning based human face detection method
CN103258213A (en) Vehicle model dynamic identification method used in intelligent transportation system
CN104361351B (en) A kind of diameter radar image sorting technique based on range statistics similarity
CN107025420A (en) The method and apparatus of Human bodys&#39; response in video
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN106295532B (en) A kind of human motion recognition method in video image
CN105426895A (en) Prominence detection method based on Markov model
CN106127161A (en) Fast target detection method based on cascade multilayer detector
CN101655914A (en) Training device, training method and detection method
CN104182985A (en) Remote sensing image change detection method
CN107808376A (en) A kind of detection method of raising one&#39;s hand based on deep learning
CN103971106A (en) Multi-view human facial image gender identification method and device
CN103593672A (en) Adaboost classifier on-line learning method and Adaboost classifier on-line learning system
CN106780552A (en) Anti-shelter target tracking based on regional area joint tracing detection study
CN105654516A (en) Method for detecting small moving object on ground on basis of satellite image with target significance
CN105046259A (en) Coronal mass ejection (CME) detection method based on multi-feature fusion
CN108734200A (en) Human body target visible detection method and device based on BING features
CN107092878A (en) It is a kind of based on hybrid classifer can autonomous learning multi-target detection method
Ibrahem et al. Real-time weakly supervised object detection using center-of-features localization
CN106250913A (en) A kind of combining classifiers licence plate recognition method based on local canonical correlation analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170222

Termination date: 20180325
