CN102750550A - Multi-target tracking method and device based on video - Google Patents

Multi-target tracking method and device based on video Download PDF

Info

Publication number
CN102750550A
CN102750550A (application CN2012101989322A / CN201210198932A)
Authority
CN
China
Prior art keywords
target
video
adaboost
steps
tracking method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012101989322A
Other languages
Chinese (zh)
Inventor
初红霞
王希凤
张鹏
韩晶
周强
聂相举
Original Assignee
初红霞
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 初红霞 filed Critical 初红霞
Priority to CN2012101989322A priority Critical patent/CN102750550A/en
Publication of CN102750550A publication Critical patent/CN102750550A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a video-based multi-target tracking method and device. The method comprises the following steps: extracting a target template and initializing target parameters; Adaboost detection; dynamically predicting the particle set according to a motion model; updating the weight of each mixture component; updating the motion state of each target; and updating the template. The method effectively handles the multi-mode problem in multi-target tracking while preserving computational efficiency, so that multi-target tracking can be realized under real-time constraints.

Description

Multi-object tracking method and device based on video
Technical field
The present invention relates to the field of computer vision and pattern analysis, and in particular to a video-based multi-object tracking method and device.
Background technology
With the development of video technology, video object tracking has become a popular research topic. It is a major direction in computer vision research and is the basis of advanced video applications such as behavior recognition, intelligent video surveillance, and human motion analysis.
Target tracking is not a simple problem, because both the visual target itself and its surroundings are complex and changeable. A target may be affected by illumination, occlusion, interference from a similar background, and crossings between targets, as well as by irregular changes in its own appearance, pose, shape and motion; these disturbances ultimately cause tracking to fail or to drift significantly. Building a stable tracker is therefore a challenging research topic in computer vision, and building a multi-target tracking system is harder still.
Multi-target tracking faces two main problems. First, the observation model and target distribution are highly non-linear and non-Gaussian. Second, a large and varying number of tracked targets produce complicated interactions due to overlap and uncertainty. In complex multi-target environments, the visual features used for tracking face many uncertainties.
To capture these uncertainties comprehensively, the algorithm is usually required to have a multi-mode search capability. The particle filter, a commonly applied approximation method, can itself realize multi-mode search and can find the optimal result in the global space, so it is more robust than single-mode search strategies.
However, the particle filter still has shortcomings in multi-mode search. First, it is weak at continuously maintaining the various modes of the target distribution. Second, when measurements are noisy, insufficient, or come from multiple targets, the modes of the target state multiply. In practical particle filter implementations, all particles often quickly collapse onto a single mode while all other modes of the targets being tracked are abandoned; particle sampling then becomes impoverished too early and multi-target tracking cannot be achieved.
To handle these deficiencies, the essential approach is to propose a novel particle filter that can handle the multi-mode problem well, with a small amount of computation and strong computational effectiveness. No plain particle filter alone, however, is sufficient for tracking a varying number of targets.
Summary of the invention
In view of the problems of the prior art, the present invention aims to provide a video-based multi-object tracking method and device that, on the basis of a novel particle filter, combine the advantages of Adaboost detection to build a tracker able to learn, detect and track targets of interest.
A first aspect of the present invention provides a video-based multi-object tracking method, in which the filtering distribution is recast as a mixture over multiple targets, the novel particle filter is realized by two-step Monte Carlo recursion of prediction and update, and the novel particle filter is then fused with Adaboost detection to construct a multi-target tracker. The method comprises the following steps:
A. Extract the target template and initialize the target parameters;
B. Adaboost detection;
C. Dynamically predict the particle set according to the motion model;
D. Update the weight of each mixture component;
E. Update the motion state of each target;
F. Update the template;
G. End.
Preferably, step A specifically comprises the steps of:
A1. Obtain the tracking area;
A2. Initialize the motion state of the target and the particle set.
Preferably, step B specifically comprises the steps of:
B1. Detect, by a cascade Adaboost detector, whether a new target appears, and generate particles from a Gaussian distribution centered on the Adaboost detection.
B2. Extract the image block from the Adaboost detection, and initialize the switching probabilistic principal component analysis (SPPCA) template updater.
Preferably, the dynamic model in step C uses a mixture of constant-velocity and random-walk dynamic models to adapt to motion, rotation, target size changes and mutual occlusion.
Preferably, step D specifically comprises the steps of:
D1. The mixture Bayesian sequential filtering process;
D2. The weight update of the novel particle filter tracker;
D3. The choice of the proposal density;
D4. The calculation of the observation likelihood.
Preferably, step E specifically comprises the steps of:
E1. Resample according to the importance weights to produce unweighted samples;
E2. Average the unweighted samples;
E3. Resample.
Preferably, step F specifically comprises three main steps: learning, updating and predicting.
A second aspect of the present invention provides a video-based multi-target tracking device, comprising a target acquisition device, a target initialization device, an Adaboost detection device, a particle filter tracking device, and a template update device.
The video-based multi-object tracking method and device provided by the invention recast the filtering distribution as a mixture over multiple targets, realize the novel particle filter by two-step Monte Carlo recursion of prediction and update, and fuse the novel particle filter with Adaboost detection to construct a multi-target tracker. They have the following beneficial effects:
1. The novel particle filter can effectively handle the multi-mode problem in multi-target tracking. The interaction between particles enters only the calculation of the mixture weights, which keeps the amount of computation small and preserves computational effectiveness, so the real-time requirement can be guaranteed while multi-target tracking is realized.
2. The multi-target tracker constructed by fusing the novel particle filter with Adaboost detection has two important features. First, it uses Adaboost to rebuild the proposal density function, which improves the robustness of the algorithm; a proposal density that incorporates the latest observation performs clearly better than one that uses only the transition prior. Second, Adaboost provides a framework for obtaining and maintaining the mixture description; in particular, it can effectively detect targets leaving and entering the scene, finally realizing the tracking of a varying number of targets.
Description of drawings
Fig. 1 is a flowchart of the video-based multi-object tracking method of the present invention;
Fig. 2 is a flowchart of extracting the target template and initializing the target parameters in Fig. 1;
Fig. 3 is a flowchart of updating the weight of each mixture component in Fig. 1;
Fig. 4 compares the likelihood distribution curves of the proposal density functions in Fig. 3;
Fig. 5 is a flowchart of updating the motion state of each target in Fig. 1;
Fig. 6 is a structural diagram of the video-based multi-target tracking device of the present invention.
Embodiment
The technical scheme of the present invention is further explained below with reference to the accompanying drawings and through embodiments.
Referring to Fig. 1, the present invention provides a video-based multi-object tracking method, in which the filtering distribution is recast as a mixture over multiple targets, the novel particle filter is realized by two-step Monte Carlo recursion of prediction and update, and the novel particle filter is then fused with Adaboost detection to construct a multi-target tracker. The method comprises the following steps:
A. Extract the target template and initialize the target parameters;
B. Adaboost detection;
C. Dynamically predict the particle set according to the motion model;
D. Update the weight of each mixture component;
E. Update the motion state of each target;
F. Update the template;
G. End.
Step A mainly comprises the following steps:
A1. Obtain the tracking area;
A2. Initialize the motion state of the target and the particle set.
Step B mainly comprises the following steps:
B1. Detect, by the cascade Adaboost detector, whether a new target appears, and generate particles from a Gaussian distribution centered on the Adaboost detection.
B2. Extract the image block from the Adaboost detection and initialize the switching probabilistic principal component analysis (SPPCA) template updater.
The dynamic model in step C uses a mixture of constant-velocity and random-walk dynamic models to adapt to motion, rotation, target size changes and mutual occlusion.
Step D specifically comprises the following steps:
D1. The mixture Bayesian sequential filtering process;
D2. The weight update of the novel particle filter tracker;
D3. The choice of the proposal density;
D4. The calculation of the observation likelihood.
Step E specifically comprises the following steps:
E1. Resample according to the importance weights to produce unweighted samples;
E2. Average the unweighted samples;
E3. Resample.
Step F specifically comprises three main steps: learning, updating and predicting.
The concrete implementation is as follows.
Referring to Fig. 2, step A extracts the target template and initializes the target parameters.
Step A1, obtaining the tracking area, is specifically:
The tracking target can be specified interactively as a region in the video, or the tracking area can be obtained automatically by moving-object detection in the video. The target is assumed to be a rectangular area with center (x, y) and side lengths (sx, sy), but the shape of the area is not limited to a rectangle. The feature template of the target is established from these priors.
The state parameters of the target are initialized as follows. The state vector of a target is a six-dimensional vector

\[ X = (x,\, y,\, \dot{x},\, \dot{y},\, s_x,\, s_y) \]

where x and y are the centroid coordinates of the rectangle, \(\dot{x}\) and \(\dot{y}\) are the velocities of the target along x and y, and \(s_x\) and \(s_y\) are the target width and height.
Step A2, initializing the motion state of the target and the particle set, is specifically:

The prior \(p(X_0)\) is represented by the set of N samples

\[ \{ X_{m,0}^{(i)},\, w_{m,0}^{(i)} \}_{i=1}^{N}, \qquad X_{m,0}^{(i)} \sim q(X_0), \quad w_{m,0}^{(i)} = 1/N, \quad M = 0. \]
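Purely as an illustrative sketch (not part of the claimed invention), the initialization of step A2 might look as follows in Python with numpy; the six-dimensional state is (x, y, ẋ, ẏ, sx, sy) as above, and the sampling spread is an assumed parameter:

```python
import numpy as np

def init_particles(x0, n_particles=100, spread=5.0, rng=None):
    """Initialize a particle set around an initial 6-D target state
    X0 = (x, y, vx, vy, sx, sy) with uniform weights 1/N.
    `spread` (in pixels) is an assumed sampling std-dev, not from the patent."""
    rng = np.random.default_rng(rng)
    particles = x0 + spread * rng.standard_normal((n_particles, 6))
    weights = np.full(n_particles, 1.0 / n_particles)
    return particles, weights

# example: a target centered at (120, 80), at rest, 40x60 pixels
x0 = np.array([120.0, 80.0, 0.0, 0.0, 40.0, 60.0])
particles, weights = init_particles(x0, n_particles=200, rng=0)
```

In a multi-target setting one such set would be kept per mixture component m.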
How to obtain, propagate and maintain the new mixture representation is particularly important in the novel particle filter. Ideally there would be one mixture component for each mode of the target distribution. In practice, however, the number of modes is rarely known in advance, and it does not remain fixed: it fluctuates as modes grow or decompose under uncertainty, and as targets appear and disappear. The mixture representation must therefore be recomputed continually to account for these fluctuations. The spatial reconstruction process is denoted \((C_t, M) = F(X_t, C_t, M)\), taking the particles and the current mixture representation as input. The reconstruction function F obtains the initial mixture representation by k-means clustering of the initial sample set \(p(X_0)\); at each iteration the mixture representation is recomputed by merging components that clearly overlap and splitting particle sets that have become dispersed.
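As a minimal sketch of the k-means step used to (re)build the mixture representation, the following assumes particles are clustered by state and returns component labels; the merge/split heuristics described above are omitted, and all parameter choices are illustrative:

```python
import numpy as np

def recluster(particles, k, iters=10, rng=None):
    """Cluster particles into k mixture components with plain k-means.
    Returns per-particle component labels and component centers."""
    rng = np.random.default_rng(rng)
    # initialize centers from k distinct particles
    centers = particles[rng.choice(len(particles), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each particle to its nearest center
        d = np.linalg.norm(particles[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned particles
        for j in range(k):
            pts = particles[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels, centers

pts = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.1, 10.0]])
labels, centers = recluster(pts, 2, rng=0)
```

Two well-separated particle clouds end up in two different components, which is the behavior the reconstruction function F relies on.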
Step B, Adaboost detection:
Step B1 is specifically: detect targets by the cascade Adaboost detector. If \(M_{new}\) new targets appear, then for m from 1 to \(M_{new}\), i.e. \(M = M + M_{new}\), generate N particles \(\{X_{m,t}^{(i)}\}\) from a Gaussian distribution centered on the Adaboost detection.
Step B2: extract the image block \(y_{m,t}\) from the Adaboost detection and initialize the SPPCA template updater.
Step C, dynamically predicting the particle set according to the motion model:
The mixed dynamic model can be written as

\[ x_t = A x_{t-1} + u_t \]

where A encodes the constant-velocity/random-walk transition and \(u_t\) is zero-mean Gaussian white noise.
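A minimal sketch of this prediction step, assuming the 6-D state layout from step A (constant velocity on position, random walk on velocity and size); the noise levels are illustrative, not from the patent:

```python
import numpy as np

def predict(particles, dt=1.0, q_pos=1.0, q_vel=0.5, q_size=0.2, rng=None):
    """Propagate an (N, 6) particle array one frame: constant-velocity
    update on (x, y), additive zero-mean Gaussian noise on all parts."""
    rng = np.random.default_rng(rng)
    out = particles.copy()
    out[:, 0] += dt * particles[:, 2]   # x  += vx * dt
    out[:, 1] += dt * particles[:, 3]   # y  += vy * dt
    n = particles.shape[0]
    out[:, 0:2] += q_pos * rng.standard_normal((n, 2))   # position jitter
    out[:, 2:4] += q_vel * rng.standard_normal((n, 2))   # velocity random walk
    out[:, 4:6] += q_size * rng.standard_normal((n, 2))  # size random walk
    return out

# noise-free sanity run: a particle at the origin moving with (vx, vy) = (2, 3)
p = np.array([[0.0, 0.0, 2.0, 3.0, 10.0, 10.0]])
out = predict(p, q_pos=0.0, q_vel=0.0, q_size=0.0, rng=0)
```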
Referring to Fig. 3, step D updates the weight of each mixture component.
Referring to Figs. 3 and 4, the proposal density fuses the Adaboost detection with the particle filter.
The weight-update step of the novel particle filter comprises several key steps: the mixture Bayesian sequential filtering process D1, the weight update of the novel particle filter tracker D2, the choice of the proposal density D3, and the calculation of the observation likelihood D4.
The mixture Bayesian sequential filtering D1 is specifically:
Let \(x_t\) denote the state vector of a target and \(y_{1:t} = (y_1 \ldots y_t)\) the observations up to time t. For the tracking problem, the target distribution to be realized is the filtering distribution \(p(x_t \mid y_{1:t})\). In Bayesian sequential estimation this distribution is computed by a two-step recursion:

Prediction: \( p(x_t \mid y_{1:t-1}) = \int D(x_t \mid x_{t-1})\, p(dx_{t-1} \mid y_{1:t-1}) \)  (1)

Update: \( p(x_t \mid y_{1:t}) = \dfrac{L(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})}{\int L(y_t \mid s_t)\, p(ds_t \mid y_{1:t-1})} \)  (2)

The prediction distribution follows from marginalization, and the new filtering distribution is a direct result of Bayes' rule.
The recursion requires a dynamic model \(D(x_t \mid x_{t-1})\) describing the state transition, an observation model \(L(y_t \mid x_t)\) giving the likelihood of any state under the current observation, and an initial distribution \(p(x_0)\) for initialization.
To track multiple targets, the filtering distribution is decomposed into a non-parametric mixture model of M components, whose posterior distribution \(p(x_t \mid y_{1:t})\) is:

\( p(x_t \mid y_{1:t}) = \sum_{m=1}^{M} \pi_{m,t}\, p_m(x_t \mid y_{1:t}) \)  (3)

where \(\pi_{m,t}\) are the mixture weights and each individual mixture component \(p_m\) is assumed to be a non-parametric model. The recursion for the non-parametric mixture model is derived in the same form, split into the two steps of prediction and update.
Assume the mixture filtering distribution \(p(x_{t-1} \mid y_{1:t-1})\) is known. Substituting into formula (1), the new prediction distribution is derived:

\( p(x_t \mid y_{1:t-1}) = \sum_{m=1}^{M} \pi_{m,t-1} \int D(x_t \mid x_{t-1})\, p_m(dx_{t-1} \mid y_{1:t-1}) = \sum_{m=1}^{M} \pi_{m,t-1}\, p_m(x_t \mid y_{1:t-1}) \)  (4)

where \(p_m(x_t \mid y_{1:t-1}) = \int D(x_t \mid x_{t-1})\, p_m(dx_{t-1} \mid y_{1:t-1})\) is the prediction distribution of the m-th component. The new prediction distribution is thus obtained directly from the prediction distributions of the individual components, which are then merged while keeping the original component weights.
To obtain the new filtering distribution, substitute the new prediction distribution into formula (2):

\[
p(x_t \mid y_{1:t})
= \frac{\sum_{m=1}^{M} \pi_{m,t-1}\, L(y_t \mid x_t)\, p_m(x_t \mid y_{1:t-1})}{\sum_{n=1}^{M} \pi_{n,t-1} \int L(y_t \mid s_t)\, p_n(ds_t \mid y_{1:t-1})}
\]
\[
= \sum_{m=1}^{M} \left[ \frac{\pi_{m,t-1} \int L(y_t \mid s_t)\, p_m(ds_t \mid y_{1:t-1})}{\sum_{n=1}^{M} \pi_{n,t-1} \int L(y_t \mid s_t)\, p_n(ds_t \mid y_{1:t-1})} \right] \times \left[ \frac{L(y_t \mid x_t)\, p_m(x_t \mid y_{1:t-1})}{\int L(y_t \mid s_t)\, p_m(ds_t \mid y_{1:t-1})} \right]
= \sum_{m=1}^{M} \pi_{m,t}\, p_m(x_t \mid y_{1:t}) \tag{5}
\]

The second bracket in the second row can be regarded as the new filtering distribution of the m-th component, that is:

\( p_m(x_t \mid y_{1:t}) = \dfrac{L(y_t \mid x_t)\, p_m(x_t \mid y_{1:t-1})}{\int L(y_t \mid s_t)\, p_m(ds_t \mid y_{1:t-1})} \)
The first bracket is independent of the state vector \(x_t\); it gives the new weights:

\( \pi_{m,t} = \dfrac{\pi_{m,t-1} \int L(y_t \mid s_t)\, p_m(ds_t \mid y_{1:t-1})}{\sum_{n=1}^{M} \pi_{n,t-1} \int L(y_t \mid s_t)\, p_n(ds_t \mid y_{1:t-1})} = \dfrac{\pi_{m,t-1}\, p_m(y_t \mid y_{1:t-1})}{\sum_{n=1}^{M} \pi_{n,t-1}\, p_n(y_t \mid y_{1:t-1})} \)  (6)

Formula (3) shows that the new filtering distribution is again a mixture of the individual components' filtering distributions; the correct target distribution is obtained as long as the mixture weights are updated according to formula (6). The new weights are the old component weights scaled by the normalized component likelihoods, so there are M different likelihood distributions \(\{p_m(y_t \mid y_{1:t-1})\}_{m=1 \ldots M}\). When one or more new targets appear in the scene, these likelihood distributions can be initialized automatically by the detection or observation model.
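The mixture-weight update of formula (6) reduces to a one-line computation once each component's predictive likelihood \(p_m(y_t \mid y_{1:t-1})\) has been approximated (e.g. by the sum of that component's unnormalized particle weights). A sketch, with illustrative numbers:

```python
import numpy as np

def update_mixture_weights(pi_prev, evidence):
    """Eq. (6): new weight of component m is its old weight times its
    predictive likelihood p_m(y_t | y_{1:t-1}), renormalized over all
    components. `evidence[m]` is that likelihood (approximated, e.g.,
    by the sum of unnormalized particle weights of component m)."""
    pi_prev = np.asarray(pi_prev, float)
    evidence = np.asarray(evidence, float)
    w = pi_prev * evidence
    return w / w.sum()

# two equally weighted targets; the second fits the observation 4x better
pi_new = update_mixture_weights([0.5, 0.5], [0.2, 0.8])  # → array([0.2, 0.8])
```

This is the only place where the components interact, which is why the overall computation stays small.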
Step D2, the weight update of the novel particle filter tracker, is specifically:
The following is the detailed derivation of the weight-update recursion of the novel particle filter tracker.
Let \(P_t = \{N, M, \Pi_t, X_t, W_t, C_t\}\) denote the particle description of the mixture filtering distribution in formula (3): N is the number of particles; M is the number of mixture components, i.e. the number of targets; \(\Pi_t = \{\pi_{m,t}\}\) are the mixture weights; \(X_t = \{x_t^{(i)}\}\) are the particle states; \(W_t = \{w_t^{(i)}\}\) are the particle weights; and \(C_t = \{c_t^{(i)}\}\) are the component labels, with \(c_t^{(i)} = m\) if particle i belongs to the m-th mixture component. The Monte Carlo approximation of the mixture particle filtering distribution is:

\( \bar{p}(x_t \mid y_t) = \sum_{m=1}^{M} \pi_{m,t} \sum_{i \in I_m} w_t^{(i)}\, \delta_{x_t^{(i)}}(x_t) \)  (7)

where \(\delta_a(\cdot)\) is the Dirac measure and \(I_m\) is the index set of particles belonging to the m-th mixture component. The mixture weights sum to 1, and so do the particle weights of each component, i.e. \(\sum_{m=1}^{M} \pi_{m,t} = 1\) and \(\sum_{i \in I_m} w_t^{(i)} = 1,\ m = 1 \ldots M\).
Given a particle set \(P_{t-1}\) distributed according to \(p(x_{t-1} \mid y_{1:t-1})\), the goal is to compute a new particle set \(P_t\) whose samples come from \(p(x_t \mid y_{1:t})\). From the general mixture tracking recursion of the previous section, each mixture component evolves independently, and the components interact only through the computation of the mixture weights; each component can therefore be represented by particles in the same way as an ordinary particle filter. For the m-th component, the samples \(\{x_{t-1}^{(i)}, w_{t-1}^{(i)}\}_{i \in I_m}\) form a weighted sample set from \(p_m(x_{t-1} \mid y_{1:t-1})\). New samples are produced by sampling from a suitably chosen proposal distribution, whose choice depends on the old state and the new measurement, i.e. \(q(x_t \mid x_{t-1}, y_t)\). To maintain a properly weighted sample set, the new particle weights are set to:

\( w_t^{(i)} = \dfrac{\tilde{w}_t^{(i)}}{\sum_{j \in I_m} \tilde{w}_t^{(j)}}, \qquad \tilde{w}_t^{(i)} = w_{t-1}^{(i)}\, \dfrac{L(y_t \mid x_t^{(i)})\, D(x_t^{(i)} \mid x_{t-1}^{(i)})}{q(x_t^{(i)} \mid x_{t-1}^{(i)}, y_t)} \)  (8)

Mixture particle filtering starts by sampling from the prior density, and the updated importance weights of the particles follow from formula (8).
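Formula (8) can be sketched directly, with the likelihood, dynamics and proposal densities passed in as per-particle values; in the bootstrap special case (proposal equals dynamics) the dynamics terms cancel:

```python
import numpy as np

def importance_weights(w_prev, lik, dyn, prop):
    """Eq. (8): w̃ = w_{t-1} * L(y|x) * D(x_t|x_{t-1}) / q(x_t|x_{t-1}, y),
    then normalize within the component so the weights sum to 1."""
    w = (np.asarray(w_prev, float) * np.asarray(lik, float)
         * np.asarray(dyn, float) / np.asarray(prop, float))
    return w / w.sum()

# bootstrap case: proposal == dynamics, so weights ∝ w_prev * likelihood
w = importance_weights([0.25] * 4, lik=[1.0, 2.0, 1.0, 0.0],
                       dyn=[1.0] * 4, prop=[1.0] * 4)
# → array([0.25, 0.5, 0.25, 0.])
```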
Step D3, the choice of the proposal density, is specifically:
The proposal density \(q(x_t^{(i)} \mid x_{t-1}^{(i)}, y_t)\) is chosen as follows: in the weight update, mixture particle filtering fuses the Adaboost detection into the proposal-density framework. The expression of the proposal density function is:

\( q(x_t^{(i)} \mid x_{t-1}^{(i)}, y_t) = \alpha_{ada}\, q_{ada}(x_{m,t}^{(i)} \mid y_t) + (1 - \alpha_{ada})\, D(x_t^{(i)} \mid x_{t-1}^{(i)}) \)  (9)

where \(q_{ada}\) is a Gaussian distribution centered on the Adaboost detection with fixed variance. The parameter \(\alpha_{ada}\) can be set dynamically without affecting the convergence of the particle filter.
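Sampling from the two-component mixture of Eq. (9) amounts to a biased coin flip per particle. A minimal sketch, assuming Gaussian forms for both \(q_{ada}\) and the dynamic component (the std-devs are illustrative):

```python
import numpy as np

def sample_proposal(x_prev, ada_center, alpha_ada=0.3, ada_std=3.0,
                    dyn_std=2.0, rng=None):
    """Draw one particle from the Eq. (9) mixture proposal: with
    probability alpha_ada, a Gaussian around the Adaboost detection;
    otherwise a Gaussian random-walk stand-in for the dynamic model."""
    rng = np.random.default_rng(rng)
    if rng.random() < alpha_ada:
        return ada_center + ada_std * rng.standard_normal(x_prev.shape)
    return x_prev + dyn_std * rng.standard_normal(x_prev.shape)

# degenerate check: alpha_ada=1 and zero detection variance pins the
# sample exactly on the detection center
x = sample_proposal(np.zeros(6), np.ones(6), alpha_ada=1.0, ada_std=0.0, rng=0)
```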
Step D4, the calculation of the observation likelihood, is specifically:
The likelihood distribution is calculated as follows.
(1) Color model
Because HSV separates brightness (Value) from color (Hue and Saturation), an HSV color histogram is less sensitive to illumination than an RGB histogram, so a color observation model based on the HSV histogram is adopted (see Fig. 4). Each color component is assigned an equal number of sub-bins, \(N_h = N_s = N_v\), and the HSV histogram consists of \(N = N_h N_s + N_v\) bins, where \(N_h\), \(N_s\), \(N_v\) are the numbers of hue, saturation and value bins respectively. Let \(b_k(u) \in \{1, \ldots, N\}\) denote the histogram bin index associated with the color vector \(y_k(u)\) at position u of frame k.
Let \(R(x_t)\) be a two-dimensional rectangular region centered at position \(x_t\). The color histogram of the l-th model of the candidate region \(R(x_t)\) is denoted \(Q(x_t) = \{q_l(n; x_t)\}_{n=1 \ldots N}\), where:

\( q_l(n; x_t) = C_l \sum_{u \in R(x_t)} \delta[b_t(u) - n], \qquad \sum_{n=1}^{N} q_l(n; x_t) = 1 \)  (10)

Here δ is the Kronecker delta function, \(C_l\) is the normalization constant that makes \(\sum_{n} q_l(n; x_t) = 1\), and u is any pixel in the region \(R(x_t)\). The normalized color histogram \(Q(x_t)\) is a discrete probability distribution. Color-based tracking searches for the candidate region whose histogram is most similar to the reference target model. The target model of the l-th model is:

\( Q^* = \{p_l(n; x_0)\}_{n=1 \ldots N} \)  (11)
The similarity between two individual histograms is measured by the Bhattacharyya coefficient:

\( \rho[Q(x_t), Q^*] = \sum_{n=1}^{N} \sqrt{p(n; x_0)\, q(n; x_t)} \)  (12)

The distance between the two histograms is expressed as:

\( d(x_t, x_0) = \sqrt{1 - \rho[Q(x_t), Q^*]} \)  (13)

Having obtained the color-histogram distance d, the similarity distribution is defined as:

\( p_{hsv}(y_t \mid x_t) \propto \exp\!\left(-\lambda\, d^2(x_t, x_0)\right) \)  (14)

The empirical value λ = 20 is used in the experiments.
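Equations (12)-(14) chain together into a few lines of numpy; this sketch takes pre-computed candidate and reference histograms as inputs (building the HSV histograms themselves is omitted):

```python
import numpy as np

def hsv_likelihood(hist_cand, hist_ref, lam=20.0):
    """Color observation model, Eqs. (12)-(14): Bhattacharyya
    coefficient rho between normalized histograms, squared distance
    d^2 = 1 - rho, likelihood ∝ exp(-lam * d^2) with lam = 20
    as in the text."""
    q = np.asarray(hist_cand, float)
    p = np.asarray(hist_ref, float)
    q = q / q.sum()
    p = p / p.sum()
    rho = np.sum(np.sqrt(p * q))
    return np.exp(-lam * (1.0 - rho))

same = hsv_likelihood([1, 2, 3], [1, 2, 3])      # identical → 1.0
disjoint = hsv_likelihood([1, 0, 0], [0, 0, 1])  # no overlap → exp(-20)
```

Identical histograms give ρ = 1 and likelihood 1; disjoint histograms give ρ = 0 and the minimal likelihood e^(−λ).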
(2) Edge orientation histogram
The edge orientation histogram is a statistical feature used to describe the shape, edges and texture of an image; it concretely reflects the edge and texture information of the target (see Fig. 4). It describes the shape, edge and texture features of the target through the statistics of its edges and texture. For tracking, the edge orientation histogram has the following characteristics: (a) it contains the contour information of the target and thus embodies its structural information, and edge statistics naturally cope well with changes in scene color and illumination; (b) it is fairly robust under partial occlusion, background interference and illumination changes; (c) it is invariant to two-dimensional translation and scaling. These characteristics can be used to remedy the deficiencies of color information alone.
The edge orientation histogram observation likelihood is computed as follows. First the binary edge point map of the image is computed with the Canny edge operator; then the edge direction gradient of each edge point in the binary map is computed; finally the proportion of each edge direction is counted to obtain the histogram H, with the quantized direction θ(i, j) as the histogram abscissa u. The edge orientation histogram is expressed as:

\( H_u = \sum_{i,j} A(i, j)\, \delta[\theta(i, j) - u] \)  (15)

where \( \delta[x - u] = \begin{cases} 1 & x = u \\ 0 & x \neq u \end{cases} \), \(A(i, j)\) is the gradient magnitude, and \(\theta(i, j)\) is the gradient direction, quantized by dividing its value range into u equal parts. The larger u is, the larger the amount of computation, but also the higher the computational accuracy.
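Eq. (15) is a magnitude-weighted histogram over quantized gradient directions; assuming angles in radians over [0, 2π), a sketch is:

```python
import numpy as np

def edge_orientation_histogram(magnitude, angle, n_bins=8):
    """Eq. (15): H_u accumulates the gradient magnitudes A(i, j) whose
    quantized direction θ(i, j) falls in bin u, then normalizes the
    histogram to a discrete probability distribution."""
    # quantize angles in [0, 2π) into n_bins equal parts
    bins = np.floor(angle / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=magnitude.ravel(),
                       minlength=n_bins)
    s = hist.sum()
    return hist / s if s > 0 else hist

# two edge pixels: magnitude 1 pointing right (θ=0), magnitude 2 pointing
# left (θ=π); with 4 bins they land in bins 0 and 2
mag = np.array([[1.0, 2.0]])
ang = np.array([[0.0, np.pi]])
h = edge_orientation_histogram(mag, ang, n_bins=4)
```

In the full pipeline, `magnitude` and `angle` would come from the gradient of the Canny edge map restricted to edge pixels.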
After normalizing the histogram H into a probability distribution with m gradient-direction bins, the edge orientation histogram target model used for tracking is defined as:

\( \hat{H} = \{\hat{H}_u\}_{u=1 \ldots m}, \qquad \hat{H}_u = C \sum_{i=1}^{n} k(\|x_i^*\|^2)\, \delta[b(x_i^*) - u] \)  (16)

The candidate target model is:

\( \hat{H}^* = \{\hat{H}_u^*\}_{u=1 \ldots m}, \qquad \hat{H}_u^* = C_h \sum_{i=1}^{n_h} k(\|y - x_i^h\|^2)\, \delta[b(x_i^*) - u] \)  (17)

where C and \(C_h\) are defined like \(C_l\).
Let \(\rho[\hat{H}, \hat{H}^*]\) denote the similarity between \(\hat{H}\) and \(\hat{H}^*\); the distance between the two edge orientation histograms is then:

\( D(y) = \sqrt{1 - \rho[\hat{H}(y), \hat{H}^*]} \)  (18)

Having obtained the edge-orientation-histogram distance D, the similarity distribution is defined as:

\( P_{EOH}(y_t \mid x_t) \propto \exp\!\left(-\lambda\, D^2[\hat{H}(y), \hat{H}^*]\right) \)  (19)

Finally, the observation likelihood function fusing color and shape is obtained by the weighting rule:

\( L(y_t \mid x_t^{(i)}) \propto P_{hsv}(y_t \mid x_t^{(i)})\, P_{EOH}(y_t \mid x_t^{(i)}) \)  (20)
Referring to Fig. 5, step E updates the motion state of each target:
Step E1: according to the importance weights \(w_t^{(i)}\), resample \(\{x_t^{(i)}\}\) to generate unweighted samples \(\{\tilde{x}_t^{(i)}\}\).
Step E2: average the unweighted samples: \( \hat{x}_{m,t} = \frac{1}{N} \sum_{i} \tilde{x}_{m,t}^{(i)} \).
E3, resampling:
To avoid weight degeneracy, the particles are resampled from time to time. Standard re-weighting sampling is insensitive to particle position, which causes targets to be lost from the distribution. Because the mixture-model method of the present invention allows each mixture component to resample independently according to its own component particle weights, the posterior distribution is preserved naturally. Following this process, the new particle weights become uniform within each component, \(w_t^{(i)} = 1/N\).
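A per-component resampling step can be sketched with systematic resampling, which is a common low-variance scheme (the patent does not name a specific resampling algorithm, so this choice is an assumption):

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Systematic resampling within one mixture component: draw N
    indices proportional to the weights with a single jittered grid
    of positions, then reset the weights to the uniform value 1/N."""
    rng = np.random.default_rng(rng)
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n  # one draw, n strata
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                               # guard against round-off
    idx = np.searchsorted(cumsum, positions)
    return particles[idx], np.full(n, 1.0 / n)

# degenerate weights: all mass on particle 2, so every survivor is p[2]
p = np.arange(8.0).reshape(4, 2)
w = np.array([0.0, 0.0, 1.0, 0.0])
new_p, new_w = systematic_resample(p, w, rng=0)
```

Running this independently for each component's particle set is what keeps each target's mode alive, as described above.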
Step F, template update:
(1) Extract the image block \(y_{m,t}\) centered on the updated target state \(\hat{x}_{m,t}\);
(2) The SPPCA template updater \(U_m\) uses a Rao-Blackwellized particle filter (RBPF) to update \(\{z_{m,t}, s_{m,t}\}\) from \(y_{m,t}\).
Step G: end.
When the AdaBoost confidence falls below a given threshold, \(M_1\) targets are removed; when two targets overlap each other, they are merged, removing \(M_2\) targets; then \(M = M - M_1 - M_2\).
Referring to Fig. 6, the video-based target tracking device provided by the invention mainly comprises the following parts:
The target initialization device 10 establishes the color histogram template, edge orientation histogram template and feature point template of the target, and initializes the motion state of the target and the particle set.
1) The Adaboost detection device 20, i.e. the target acquisition device, mainly consists of three parts: first, the target is represented by the color model and edge orientation histogram features, and the integral image is used for fast computation of the feature values; second, the AdaBoost algorithm selects rectangular features that can represent the target, i.e. weak classifiers, and then constructs a strong classifier from the weak classifiers by weighted voting; third, several trained strong classifiers are connected in series to form a cascaded classifier.
2) The novel particle filter device 30, comprising: target state prediction, weight update and resampling.
3) The target state update device 40 determines the target position by resampling the particle set according to the weighting criterion, and judges whether tracking has finished. If tracking has finished, it exits; if not, the next frame is processed.
4) The template update device 50 realizes template updating through learning, prediction and updating with the switching probabilistic PCA (SPPCA) template updater.
The video-based multi-object tracking method and device provided by the invention recast the filtering distribution as a mixture over multiple targets, realize the novel particle filter by two-step Monte Carlo recursion of prediction and update, and fuse the novel particle filter with Adaboost detection to construct a multi-target tracker. They have the following beneficial effects:
1. The novel particle filter can effectively handle the multi-mode problem in multi-target tracking. The interaction between particles enters only the calculation of the mixture weights, which keeps the amount of computation small and preserves computational effectiveness, so the real-time requirement can be guaranteed while multi-target tracking is realized.
2. The multi-target tracker constructed by fusing the novel particle filter with Adaboost detection has two important features. First, it uses Adaboost to rebuild the proposal density function, which improves the robustness of the algorithm; a proposal density that incorporates the latest observation performs clearly better than one that uses only the transition prior. Second, Adaboost provides a framework for obtaining and maintaining the mixture description; in particular, it can effectively detect targets leaving and entering the scene, finally realizing the tracking of a varying number of targets.
The present invention has been described above by way of example with reference to the accompanying drawings. Obviously, the realization of the present invention is not limited to the manners described above: any improvement that adopts the technical scheme of the present invention, or any direct application of the concept and technical scheme of the present invention to other occasions without improvement, falls within the protection scope of the present invention.

Claims (8)

1. A video-based multi-target tracking method, which converts the single-target particle filter distribution into a mixture distribution over multiple targets, implements this novel particle filter by a two-step Monte Carlo recursion of prediction and update, and fuses the novel particle filter with Adaboost detection to construct a multi-target tracker, characterized in that it comprises the steps of:
A. extracting the target template and initializing the target parameters;
B. Adaboost detection;
C. performing dynamic prediction on the particle set according to the motion model;
D. updating the weight of each mixture component;
E. updating the motion state of each target;
F. updating the template;
G. ending.
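Steps A–G above can be arranged into a per-frame loop. The following Python skeleton is purely illustrative: the Adaboost detector, observation likelihood and template updater are replaced by stubs, and every name and parameter value is an assumption rather than part of the claimed method.

```python
import numpy as np

rng = np.random.default_rng(1)

def track_frames(frames, n_particles=50):
    """Illustrative skeleton of claim 1's steps A-G with stub models."""
    # Step A: extract the target template and initialise the target
    # parameters (here: one target with a particle set near the origin).
    targets = [{"state": np.zeros(2),
                "particles": rng.normal(0.0, 1.0, size=(n_particles, 2)),
                "weight": 1.0}]
    for frame in frames:
        # Step B: Adaboost detection (stub: no new target appears).
        detections = []
        targets += [{"state": d,
                     "particles": rng.normal(d, 1.0, (n_particles, 2)),
                     "weight": 1.0} for d in detections]
        # Step C: dynamic prediction of each particle set (random-walk stub).
        for t in targets:
            t["particles"] = t["particles"] + rng.normal(0.0, 0.5, t["particles"].shape)
        # Step D: update the weight of each mixture component (renormalise).
        total = sum(t["weight"] for t in targets)
        for t in targets:
            t["weight"] /= total
        # Step E: update each target's motion state (particle mean).
        for t in targets:
            t["state"] = t["particles"].mean(axis=0)
        # Step F: template update (omitted in this stub).
    return targets  # Step G: end.

result = track_frames([None] * 3)
```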
2. The video-based multi-target tracking method according to claim 1, characterized in that said step A specifically comprises the steps of:
A1. obtaining the tracking area;
A2. initializing the motion state of the target and the particle set.
3. The video-based multi-target tracking method according to claim 1, characterized in that said step B specifically comprises the steps of:
B1. detecting whether a new target appears by means of a cascade Adaboost detector, and generating particles from a Gaussian distribution centered on each Adaboost detection;
B2. extracting image patches from the Adaboost detections, and initializing the switching-probability principal component analysis (PCA) template updater.
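Step B1's particle generation can be illustrated as below; the spread `sigma` and the function name are assumptions, since the patent does not quantify the Gaussian centred on the detection.

```python
import numpy as np

rng = np.random.default_rng(2)

def particles_from_detection(center, n=100, sigma=5.0):
    """Draw n particles from a Gaussian centred on an Adaboost
    detection (claim 3, step B1).  sigma (in pixels) is an assumed
    spread; the patent does not specify one."""
    center = np.asarray(center, dtype=float)
    return rng.normal(loc=center, scale=sigma, size=(n, center.size))

# A detection at image position (10, 20) seeds a cloud of particles.
cloud = particles_from_detection([10.0, 20.0], n=500)
```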
4. The video-based multi-target tracking method according to claim 1, characterized in that the dynamic model in said step C uses a mixed dynamic model of constant velocity and random walk to adapt to motion, rotation, changes in target size and mutual occlusion.
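The constant-velocity plus random-walk mixture of claim 4 can be sketched as follows; the mixing probability and noise levels are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(3)

def propagate(pos, vel, p_cv=0.8, q_cv=1.0, q_rw=4.0):
    """One prediction step of a mixed dynamic model (claim 4): each
    particle either moves with constant velocity (probability p_cv)
    or takes a pure random walk, whose larger noise q_rw helps the
    filter survive rotation, scale change and occlusion."""
    use_cv = rng.random(pos.shape[0]) < p_cv
    step_cv = pos + vel + rng.normal(0.0, q_cv, pos.shape)
    step_rw = pos + rng.normal(0.0, q_rw, pos.shape)
    return np.where(use_cv[:, None], step_cv, step_rw)

# 1000 particles at the origin, estimated velocity (1, 0) per frame.
new_pos = propagate(np.zeros((1000, 2)), np.array([1.0, 0.0]))
```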
5. The video-based multi-target tracking method according to claim 1, characterized in that said step D specifically comprises the steps of:
D1. the mixture Bayesian sequential filtering process;
D2. the weight update of the novel particle filter tracker;
D3. the selection of the proposal density;
D4. the calculation of the observation likelihood.
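Step D3's proposal density can be illustrated in the spirit of the boosted particle filter, mixing the latest Adaboost detection with the transition prior; `alpha` and both standard deviations are assumed values.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_proposal(prev, detection, alpha=0.5, sigma_det=3.0, sigma_dyn=6.0):
    """Draw one particle from a mixture proposal (claim 5, step D3):
    with probability alpha sample near the newest Adaboost detection,
    otherwise sample from the transition prior around the previous
    state.  Incorporating the detection is what 'rebuilding the
    proposal density' with up-to-date observations amounts to."""
    if detection is not None and rng.random() < alpha:
        return rng.normal(detection, sigma_det)
    return rng.normal(prev, sigma_dyn)

# With alpha = 1 every sample follows the detection at (20, 0).
samples = np.array([sample_proposal(np.zeros(2), np.array([20.0, 0.0]), alpha=1.0)
                    for _ in range(300)])
```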
6. The video-based multi-target tracking method according to claim 1, characterized in that said step E specifically comprises the steps of:
E1. resampling, producing unweighted samples according to the importance weights;
E2. averaging the unweighted samples;
E3. resampling.
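Step E1's resampling can be sketched with the systematic scheme, one common choice that the patent does not mandate:

```python
import numpy as np

rng = np.random.default_rng(5)

def resample(particles, weights):
    """Produce an unweighted sample set according to the importance
    weights (claim 6, step E1), using systematic resampling: one
    uniform offset places n evenly spaced pointers on the cumulative
    weight distribution, and each pointer selects a particle."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights / np.sum(weights))
    idx = np.searchsorted(cumulative, positions)
    return particles[idx]

# All the weight on one particle: every survivor is a copy of it.
survivors = resample(np.arange(10.0),
                     np.array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0.0]))
```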
7. The video-based multi-target tracking method according to claim 1, characterized in that said step F specifically comprises three main steps: learning, updating and prediction.
8. A video-based multi-target tracking apparatus, characterized in that it comprises a target acquisition device, a target initialization device, an Adaboost detection device, a particle filter tracking device and a template updating device.
CN2012101989322A 2012-06-06 2012-06-06 Multi-target tracking method and device based on video Pending CN102750550A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012101989322A CN102750550A (en) 2012-06-06 2012-06-06 Multi-target tracking method and device based on video


Publications (1)

Publication Number Publication Date
CN102750550A true CN102750550A (en) 2012-10-24

Family

ID=47030715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012101989322A Pending CN102750550A (en) 2012-06-06 2012-06-06 Multi-target tracking method and device based on video

Country Status (1)

Country Link
CN (1) CN102750550A (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404086A (en) * 2008-04-30 2009-04-08 浙江大学 Target tracking method and device based on video

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHU Hongxia et al.: "Multi-feature integration kernel particle filtering target tracking", Journal of Harbin Institute of Technology *
CHU Hongxia et al.: "Annealed particle filter target tracking with multi-feature fusion", Computer Engineering and Applications *
LI Anping: "Research on video object tracking algorithms in complex environments", PhD dissertation, Shanghai Jiao Tong University *
WEI Wu et al.: "Real-time particle filter tracking algorithm for moving targets fusing multiple models", Journal of Highway and Transportation Research and Development *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020986A (en) * 2012-11-26 2013-04-03 哈尔滨工程大学 Method for tracking moving object
CN103020986B (en) * 2012-11-26 2016-05-04 哈尔滨工程大学 A kind of motion target tracking method
CN104574446A (en) * 2015-02-03 2015-04-29 中国人民解放军国防科学技术大学 Method for extracting pedestrians from video on basis of joint detection and tracking
CN106296734A (en) * 2016-08-05 2017-01-04 合肥工业大学 Based on extreme learning machine and the target tracking algorism of boosting Multiple Kernel Learning
CN106296734B (en) * 2016-08-05 2018-08-28 合肥工业大学 Method for tracking target based on extreme learning machine and boosting Multiple Kernel Learnings

Similar Documents

Publication Publication Date Title
CN101673403B (en) Target following method in complex interference scene
Gerónimo et al. 2D–3D-based on-board pedestrian detection system
CN103530893B (en) Based on the foreground detection method of background subtraction and movable information under camera shake scene
CN104200485A (en) Video-monitoring-oriented human body tracking method
CN103310444B (en) A kind of method of the monitoring people counting based on overhead camera head
US20060067562A1 (en) Detection of moving objects in a video
CN102789568A (en) Gesture identification method based on depth information
CN101308607A (en) Moving target tracking method by multiple features integration under traffic environment based on video
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN105335701A (en) Pedestrian detection method based on HOG and D-S evidence theory multi-information fusion
CN102663778B (en) A kind of method for tracking target based on multi-view point video and system
CN105160355A (en) Remote sensing image change detection method based on region correlation and visual words
CN102063625B (en) Improved particle filtering method for multi-target tracking under multiple viewing angles
CN102142085A (en) Robust tracking method for moving flame target in forest region monitoring video
Shen et al. Adaptive pedestrian tracking via patch-based features and spatial–temporal similarity measurement
CN117994987B (en) Traffic parameter extraction method and related device based on target detection technology
Yuan et al. Using local saliency for object tracking with particle filters
EP2860661A1 (en) Mean shift tracking method
CN105046721A (en) Camshift algorithm for tracking centroid correction model on the basis of Grabcut and LBP (Local Binary Pattern)
Wei et al. A robust approach for multiple vehicles tracking using layered particle filter
Xia et al. Automatic multi-vehicle tracking using video cameras: An improved CAMShift approach
Qing et al. A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation
CN102750550A (en) Multi-target tracking method and device based on video
Chen et al. Visual tracking with generative template model based on riemannian manifold of covariances
CN110349184A (en) The more pedestrian tracting methods differentiated based on iterative filtering and observation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20121024