A video target tracking method based on Grassmann manifolds and the projective transformation group
Technical field
The present invention relates to a modeling method, and more particularly to a video target tracking method based on Grassmann manifolds and the projective transformation group.
Background technology
When a video target undergoes significant non-planar attitude changes, tracking methods based on Euclidean space often drift or fail. The main reason is that the features describing the different appearances and poses of the target region do not lie in a single vector space, so traditional linear vector-space processing cannot meet practical needs. The Grassmann manifold, a quotient manifold among the Lie group manifolds, is better suited to measuring distances between such data points. At the same time, the imaging process of a camera is essentially a projective transformation, that is, it obeys the projective transformation group SL(3).
Invention content
The purpose of the present invention is to provide a video target tracking method based on Grassmann manifolds and the projective transformation group. The method achieves stable tracking while the target undergoes significant non-planar geometric deformation, illumination variation, or partial occlusion; it models the geometric deformation of the target with the more accurate projective transformation and designs a dual-mode video target tracking algorithm.
The purpose of the present invention is achieved through the following technical solutions:
A video target tracking method based on Grassmann manifolds and the projective transformation group, the method comprising the following steps:
Step 1: Input the video image sequence with total frame count k; the initial template size is m*n (unit: pixels). For the first frame image, manually determine the target region of the image; an 8-dimensional vector on the projective transformation group serves as the projective transformation parameter of the tracked boundary shape. t denotes the current frame; for the first frame image, t = 1.
Step 2: Predict each particle state according to the state-transition model, j = 1, 2, ..., L, where L is the number of samples and v is the velocity vector of the state moving from time t-1 to time t.
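The prediction formula itself is not reproduced in this text, so the following is only a minimal sketch of one plausible reading of Step 2: each particle is propagated by perturbing the 8-dimensional velocity vector in the Lie algebra sl(3) and mapping back to the group with the matrix exponential. The names `sl3_basis` and `predict_particle`, the Gaussian noise model, and the noise scale are illustrative assumptions, not the patented formula.

```python
import numpy as np
from scipy.linalg import expm

def sl3_basis():
    """The 8 generators of sl(3): traceless 3x3 matrices."""
    E = []
    for i in range(3):
        for j in range(3):
            if i != j:                      # 6 off-diagonal generators
                B = np.zeros((3, 3))
                B[i, j] = 1.0
                E.append(B)
    E.append(np.diag([1.0, -1.0, 0.0]))     # 2 traceless diagonal generators
    E.append(np.diag([0.0, 1.0, -1.0]))
    return E

def predict_particle(X_prev, v, sigma, rng):
    """One prediction step (assumed form): perturb the 8-dim velocity v in
    the Lie algebra, then map back to SL(3) with the matrix exponential."""
    E = sl3_basis()
    coeffs = v + rng.normal(0.0, sigma, size=8)
    A = sum(c * Ek for c, Ek in zip(coeffs, E))
    return X_prev @ expm(A)                 # exp of traceless A has det 1

rng = np.random.default_rng(0)
X0 = np.eye(3)                              # state at time t-1
v = np.zeros(8)                             # velocity from t-1 to t
particles = [predict_particle(X0, v, 0.01, rng) for _ in range(300)]
```

Because the perturbation lives in the traceless Lie algebra, every predicted sample stays on SL(3) (determinant 1 up to numerical error), which is the point of sampling on the group rather than on the raw parameters.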
Step 3: For the image region corresponding to each sample, compute the feature basis matrix of the corresponding appearance gray-level matrix, and then compute the weight of each sampled particle according to the weight formula, where the feature basis matrix of the feature space and the feature vector representing the target tracked in frame t are two points on the Grassmann manifold, and the distance between the two is measured by their principal angles.
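The weight formula is not reproduced in this text; the sketch below shows the standard construction the step describes: principal angles between two subspaces from the SVD of the product of their orthonormal bases, the geodesic distance as the norm of those angles, and a hypothetical Gaussian weight. The function names, the weight form exp(-d^2 / (2*sigma^2)), and the value of sigma are assumptions for illustration.

```python
import numpy as np

def principal_angles(U, V):
    """Principal angles between span(U) and span(V), from the SVD of U^T V."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def geodesic_distance(U, V):
    """Arc-length geodesic distance on the Grassmann manifold."""
    return np.sqrt(np.sum(principal_angles(U, V) ** 2))

def particle_weight(U_sample, U_space, sigma=0.2):
    """Hypothetical Gaussian weight of one sampled particle."""
    d = geodesic_distance(U_sample, U_space)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def orthonormalize(A):
    """Orthonormal basis of the column span of A (a 'feature basis matrix')."""
    Q, _ = np.linalg.qr(A)
    return Q[:, :A.shape[1]]

rng = np.random.default_rng(1)
U = orthonormalize(rng.normal(size=(20, 3)))   # feature basis of one sample
V = orthonormalize(rng.normal(size=(20, 3)))   # feature basis of the space
w = particle_weight(U, V)
```

A sample whose subspace coincides with the feature space gets distance near zero and hence the maximum weight; distant subspaces are down-weighted smoothly.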
Step 4: Compute the weighted mean over the samples; this weighted mean is the estimated state of the target.
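The averaging formula is not reproduced in this text. A common way to realize a "weighted mean over the samples" for states that live on SL(3) is a first-order intrinsic mean: average the matrix logarithms with the normalized particle weights, then map back with the exponential. This is a sketch under that assumption, not the patented formula.

```python
import numpy as np
from scipy.linalg import expm, logm

def weighted_mean_sl3(particles, weights):
    """First-order weighted mean of SL(3) particles: average in the Lie
    algebra via log, then map back with exp (an approximation to the
    intrinsic Karcher mean)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize the particle weights
    A = sum(wi * logm(P) for wi, P in zip(w, particles))
    return expm(A.real)

# sanity check: identical particles must reproduce the common state
X = expm(np.array([[0.0,  0.01, 0.0],
                   [0.0,  0.0,  0.02],
                   [0.01, 0.0,  0.0]]))
mean = weighted_mean_sl3([X, X, X], [0.2, 0.5, 0.3])
```

Averaging in the algebra rather than entrywise keeps the estimate on the group, so the estimated state is itself a valid projective transformation.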
Step 5: Update the feature space according to the target feature-space update strategy. The update strategy is as follows:
Let ds be the minimum geodesic distance between the feature vector of the current frame and each feature vector in the feature space. When ds is greater than the given maximum threshold thresmax, the frame image is judged to be severely occluded or distorted; in this case, to guarantee the accuracy of the feature-space information, the feature space is not updated.
Otherwise the template set is updated in two cases:
(1) When the number of vectors in the current feature space is less than the prespecified quantity, the feature vector of the current frame is added directly to the feature-space set;
(2) Otherwise, the feature vector in the feature space with the maximum distance value is replaced by the feature vector of the current frame.
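The update strategy above can be sketched as follows. The distance function is passed in as a stand-in for the geodesic distance of Step 5; reading "maximum distance value" as "the template farthest from the current feature vector" is an interpretive assumption, and all names are illustrative.

```python
def update_feature_space(space, f_new, dist, thres_max, capacity=10):
    """Template-set update following the strategy in Step 5.
    space: list of feature vectors; f_new: current-frame feature vector;
    dist: stand-in for the geodesic distance; capacity: prespecified size."""
    ds = min(dist(f_new, f) for f in space)
    if ds > thres_max:
        # frame judged severely occluded/distorted: keep the space unchanged
        return list(space)
    if len(space) < capacity:
        # below the prespecified quantity: append the new feature vector
        return list(space) + [f_new]
    # at capacity: replace the template farthest from the current feature
    far = max(range(len(space)), key=lambda i: dist(f_new, space[i]))
    out = list(space)
    out[far] = f_new
    return out

# toy 1-D check with absolute difference standing in for the distance
d = lambda a, b: abs(a - b)
updated = update_feature_space([0.0, 1.0], 0.4, d, thres_max=0.5)
```

The occlusion guard is what "shields abnormal information" in the advantages below: a frame too far from every stored template never contaminates the learned feature space.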
Step 6: Let t = t + 1; if t ≤ k, repeat Step 2; otherwise the tracking process ends.
Advantages and effects of the present invention:
1. The present invention makes full use of the nonlinear characteristics of the Grassmann manifold space, regards the appearance change of the target as the movement of a point on the manifold, exploits the intrinsic geometric characteristics of the state space, and designs a particle filter algorithm that estimates the appearance change of the target more accurately;
2. The present invention describes the geometric deformation process of the target with the group SL(3). Compared with the affine transformation, it describes the deformation process of the target more accurately and thus predicts the geometric deformation of the target more accurately;
3. The present invention designs an effective target feature-space update strategy that, during online learning of the target appearance feature space, effectively shields abnormal information and ensures the accuracy of the feature space.
Description of the drawings
Fig. 1 shows the specific steps of video target tracking based on Grassmann manifolds and the projective transformation group;
Fig. 2 shows the results of the algorithm tracking a geometrically deforming target;
Fig. 3 shows the results of the algorithm tracking a non-rigid target under illumination variation;
Fig. 4 shows the results of the algorithm tracking a partially occluded target.
Specific implementation mode
The present invention is described in detail below with reference to the following embodiments.
Embodiment 1:
Step 1:
Input geometric deformation video image sequence 1 with a total of 400 frames; the initial template size is 80*48 (unit: pixels). For the first frame image, manually determine the target region of the image; the 8-dimensional vector [0.03; 0.01; 0.01; 0.01; 0.02; 0.02; 10; 10] on the projective transformation group is the projective transformation parameter of the tracked boundary shape. t is the current frame; for the first frame image, t = 1.
Step 2:
Predict each particle state according to the state-transition model, j = 1, 2, ..., 300, where 300 is the number of samples and v is the velocity vector of the state moving from time t-1 to time t.
Step 3:
For the image region corresponding to each sample, compute the feature basis matrix of the corresponding appearance gray-level matrix, and then compute the weight of each sampled particle according to the weight formula, where the feature basis matrix of the feature space and the feature vector representing the target tracked in frame t are two points on the Grassmann manifold, and the distance between the two is measured by their principal angles.
Step 4:
Compute the weighted mean over the samples; this weighted mean is the estimated state of the target.
Step 5:
Update the feature space according to the target feature-space update strategy. The update strategy is as follows:
Let ds be the minimum geodesic distance between the feature vector of the current frame and each feature vector in the feature space. When ds is greater than the given maximum threshold, the frame image is judged to be severely occluded or distorted; in this case, to guarantee the accuracy of the feature-space information, the feature space is not updated. Otherwise the template set is updated in two cases:
(1) When the number of vectors in the current feature space is less than 10, the feature vector of the current frame is added directly to the feature-space set.
(2) Otherwise, the feature vector in the feature space with the maximum distance value is replaced by the feature vector of the current frame.
Step 6:
Let t = t + 1; if t ≤ 400, repeat Step 2; otherwise the tracking process ends.
Fig. 2 shows the tracking results of the algorithm on selected frames.
Embodiment 2:
Step 1:
Input illumination variation video image sequence 2 with a total of 500 frames; the initial template size is 78*62 (unit: pixels). For the first frame image, manually determine the target region of the image; the 8-dimensional vector [0.01; 0.001; 0.001; 0.03; 0.01; 0.01; 10; 10] on the projective transformation group is the projective transformation parameter of the tracked boundary shape. t is the current frame; for the first frame image, t = 1.
Step 2:
Predict each particle state according to the state-transition model, j = 1, 2, ..., 300, where 300 is the number of samples and v is the velocity vector of the state moving from time t-1 to time t.
Step 3:
For the image region corresponding to each sample, compute the feature basis matrix of the corresponding appearance gray-level matrix, and then compute the weight of each sampled particle according to the weight formula, where the feature basis matrix of the feature space and the feature vector representing the target tracked in frame t are two points on the Grassmann manifold, and the distance between the two is measured by their principal angles.
Step 4:
Compute the weighted mean over the samples; this weighted mean is the estimated state of the target.
Step 5:
Update the feature space according to the target feature-space update strategy. The update strategy is as follows:
Let ds be the minimum geodesic distance between the feature vector of the current frame and each feature vector in the feature space. When ds is greater than the given maximum threshold, the frame image is judged to be severely occluded or distorted; in this case, to guarantee the accuracy of the feature-space information, the feature space is not updated. Otherwise the template set is updated in two cases:
(1) When the number of vectors in the current feature space is less than 10, the feature vector of the current frame is added directly to the feature-space set.
(2) Otherwise, the feature vector in the feature space with the maximum distance value is replaced by the feature vector of the current frame.
Step 6:
Let t = t + 1; if t ≤ 500, repeat Step 2; otherwise the tracking process ends.
Fig. 3 shows the tracking results of the algorithm on selected frames.
Embodiment 3:
Step 1:
Input partial-occlusion video image sequence 3 with a total of 100 frames; the initial template size is 112*52 (unit: pixels). For the first frame image, manually determine the target region of the image; the 8-dimensional vector [0.05; 0.002; 0.001; 0.02; 0.02; 0.02; 10; 10] on the projective transformation group is the projective transformation parameter of the tracked boundary shape. t is the current frame; for the first frame image, t = 1.
Step 2:
Predict each particle state according to the state-transition model, j = 1, 2, ..., 300, where 300 is the number of samples and v is the velocity vector of the state moving from time t-1 to time t.
Step 3:
For the image region corresponding to each sample, compute the feature basis matrix of the corresponding appearance gray-level matrix, and then compute the weight of each sampled particle according to the weight formula, where the feature basis matrix of the feature space and the feature vector representing the target tracked in frame t are two points on the Grassmann manifold, and the distance between the two is measured by their principal angles.
Step 4:
Compute the weighted mean over the samples; this weighted mean is the estimated state of the target.
Step 5:
Update the feature space according to the target feature-space update strategy. The update strategy is as follows:
Let ds be the minimum geodesic distance between the feature vector of the current frame and each feature vector in the feature space. When ds is greater than the given maximum threshold, the frame image is judged to be severely occluded or distorted; in this case, to guarantee the accuracy of the feature-space information, the feature space is not updated. Otherwise the template set is updated in two cases:
(1) When the number of vectors in the current feature space is less than 10, the feature vector of the current frame is added directly to the feature-space set.
(2) Otherwise, the feature vector in the feature space with the maximum distance value is replaced by the feature vector of the current frame.
Step 6:
Let t = t + 1; if t ≤ 100, repeat Step 2; otherwise the tracking process ends.
Fig. 4 shows the tracking results of the algorithm on selected frames.