CN101369346A - Adaptive-window tracking method for video moving targets - Google Patents

Adaptive-window tracking method for video moving targets

Info

Publication number
CN101369346A
CN101369346A CNA2007101202284A CN200710120228A
Authority
CN
China
Prior art keywords
target
tracked
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2007101202284A
Other languages
Chinese (zh)
Other versions
CN101369346B (en)
Inventor
王睿
王原野
王林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN2007101202284A priority Critical patent/CN101369346B/en
Publication of CN101369346A publication Critical patent/CN101369346A/en
Application granted granted Critical
Publication of CN101369346B publication Critical patent/CN101369346B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention discloses an adaptive-window tracking method for video moving targets, relating to machine vision and pattern recognition technology. A frame of a video sequence is read into a buffer and the initial position and size of the tracked target are obtained in that frame; the distribution statistics of the target feature are then extracted to build a Gaussian mixture model as the target template, the mean vector and covariance matrix of the Gaussian mixture distribution describing the position and size of the target; the next frame of the video sequence is then read into the buffer. In the new video frame, a parameter estimation method iteratively computes the Gaussian mixture model parameters of the target in the current frame and finds the candidate template similar to the target template, and the final model parameters obtained by the iteration are used to update the tracking window, realizing the adaptation of the tracking window. The method greatly increases tracking reliability and is widely applicable in fields such as robotics, visual navigation, automatic surveillance and traffic control.

Description

Adaptive-window tracking method for video moving targets
Technical field
The present invention relates to machine vision and pattern recognition technology, and in particular to an adaptive-window tracking method for video moving targets.
Background technology
In application areas as diverse as automatic surveillance, traffic management, visual navigation and robotics, tracking moving targets in video is one of the hot topics of machine vision and pattern recognition research. Its key problem is: when the tracked target moves relative to the camera, how to maintain a continuous correspondence of the target region across every frame of the video sequence. This problem can be solved by converting it into the problem of matching target regions between successive frames.
Among existing matching-based tracking methods is the mean-shift method, which describes the image target with a kernel-weighted histogram, measures the similarity between the target model of the initial frame and the candidate model of the current frame with a similarity function, and obtains an iterative procedure over the mean-shift vector of the target by maximizing that similarity function. As a high-performance pattern matching algorithm it needs no exhaustive search; with its outstanding search efficiency, its lack of a parameter initialization step and its robustness to partial occlusion at the edges, it balances the real-time and robustness requirements of target tracking algorithms well and has been successfully applied in tracking domains with demanding real-time requirements. In the mean-shift tracking algorithm the bandwidth of the kernel function, that is, the size of the candidate target model region, plays an important role in whether tracking succeeds, because it not only determines the number of samples participating in the mean-shift iteration but also reflects the size of the tracking window.
However, a defect of the mean-shift method is that the kernel bandwidth cannot change as the target size changes. When the size of the target in the image varies significantly, the tracking window cannot describe the target size and position accurately, which causes tracking errors and often the loss of the tracked target.
An improved algorithm and technique for adaptive tracking windows has been disclosed in the application published as CN1619593A, but it applies only to video sequences synthesized from color or multiple image sensors, places high demands on the image sensor, and requires complex similarity computations for the genetic algorithm it adopts; its computational cost is high, which hinders real-time tracking of the target. An improved adaptive-window variant of the mean-shift method is also disclosed in the application published as CN1794264A, but the tracking it adopts describes the bandwidth of the tracked target window inaccurately: when the target size changes in the video sequence, this tracking scheme cannot estimate the target size accurately, and as tracking proceeds it easily leads to tracking failure.
Summary of the invention
In view of this, the main purpose of the present invention is to provide an adaptive-window tracking method for video moving targets that adjusts the tracking window adaptively and in real time according to the position and size of the moving target in each frame of the video sequence, improving the stability and accuracy of tracking.
To achieve the above object, the technical scheme of the present invention is realized as follows:
An adaptive-window tracking method for video moving targets, comprising the steps of:
a. reading in a video sequence, detecting the moving target to be tracked in a frame of the sequence, and obtaining the initial center position $\vec{\theta}_0$ and initial size $V_0$ of the target;
b. using the initial center position $\vec{\theta}_0$ and initial size $V_0$ obtained in step a, extracting the distribution statistics of the feature of the moving target to be tracked, and building a Gaussian mixture model as the target template;
c. reading in the next frame of the video sequence and iteratively computing the Gaussian mixture model parameters with the expectation-maximization (EM) algorithm, obtaining the candidate template that matches the target template;
d. updating the center, size and direction of the tracking window according to the result of the EM iteration, and judging whether the frame just read is the last one; if so, the current processing ends, otherwise return to step c.
The detection of the moving target to be tracked in step a proceeds as follows: a differencing method or an optical flow algorithm is applied to the frame, yielding the mean vector $\vec{\theta}_0 = (u_0, v_0)'$ as the center of the target region and the Gaussian covariance matrix $V_0$ as the initial size of the target region,

$$V_0 = \begin{pmatrix} \sigma_1^2 & \rho\,\sigma_1\sigma_2 \\ \rho\,\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$

where $\sigma_1$ and $\sigma_2$ are the standard deviations of the distribution of the feature to be tracked and ρ is the correlation coefficient of the feature distribution. The square roots of the two eigenvalues of the covariance matrix $V_0$ serve as the major and minor axes of the region of the target to be tracked, and the correlation coefficient ρ, after conversion, serves as the rotation angle describing the direction of that region.
The building of the Gaussian mixture model as the target template in step b proceeds as follows:

with the center $\vec{\theta}_0$ of the region of the target to be tracked and its initial size $V_0$ as, respectively, the mean vector and covariance matrix of a Gaussian distribution, the target template is described by means of the probability density function of the two-dimensional Gaussian distribution as:

$$q_j = C_h \sum_{i=1}^{N_{\vec{\theta}_0,V_0}} \mathrm{Gauss}(\vec{x}_i;\vec{\theta}_0,V_0)\,\delta[b(\vec{x}_i)-j]$$

where $\mathrm{Gauss}(\vec{x}_i;\vec{\theta}_0,V_0)$ is the probability density function of the two-dimensional Gaussian distribution; δ is the delta function; $N_{\vec{\theta}_0,V_0}$ is the total number of pixels in the region of the target to be tracked; the function $b(\vec{x}_i)$ is the mapping of the gray level of the pixel located at $\vec{x}_i$; j is the gray level index in the histogram, j = 1, 2, ..., c, with c the total number of gray levels of the histogram; and $C_h$ is a normalization constant.
The iterative computation of the Gaussian mixture model parameters with the EM algorithm in step c comprises the steps of:
c1. initializing the iteration with the initial center position and initial size of the target to be tracked obtained in step a;
c2. computing the Gaussian mixture distribution model of the candidate target determined by the center $\vec{\theta}(t)$ and size $V(t)$ of the tracked target in the current frame, taking it as the candidate template, and computing the prior weights from the target template of step b and this candidate template;
c3. executing the EM algorithm to compute the Gaussian mixture model parameters, obtaining the center $\vec{\theta}(t+1)$ and size $V(t+1)$ for the current frame;
c4. judging from the iteration stopping criterion whether to stop the EM algorithm; if the parameters have converged, ending the current round of iteration and obtaining the candidate template that matches the target template; otherwise executing step c5;
c5. assigning $\vec{\theta}(t+1)$ and $V(t+1)$ obtained in step c3 to $\vec{\theta}(t)$ and $V(t)$ respectively, as the new center and size of the tracked moving target, determining the tracking region corresponding to the new iteration point, and then executing step c2.
The updating of the center, size and direction of the tracking window in step d proceeds as follows:

from the Gaussian mixture model parameters obtained, comprising the mean vector and the covariance matrix, the mean vector serves as the center of the new tracking window, the square roots of the two eigenvalues of the covariance matrix serve as the major and minor axes of the new tracking window, and the angle obtained by conversion from the correlation coefficient of the covariance matrix serves as the rotation direction of the new tracking window. Center, major axis, minor axis and angle together describe the updated window of the currently tracked target and thereby express the tracking result; the position, size and tracking window of the target in the previous frame then serve as input to the iteration on the next frame.
The candidate template in step c2 is

$$p_j(\vec{\theta},V) = C_h \sum_{i=1}^{N_{\vec{\theta},V}} \mathrm{Gauss}(\vec{x}_i;\vec{\theta},V)\,\delta[b(\vec{x}_i)-j],$$

where $\vec{\theta}$ and $V$ are the center and size of the candidate target, respectively.
The prior-weights formula in step c2 is

$$\omega(\vec{x}_i) = \sum_{j=1}^{c} \sqrt{\frac{q_j}{p_j(\vec{\theta},V)}}\,\delta[b(\vec{x}_i)-j],$$

where the function $b(\vec{x}_i)$ is the gray-level mapping of the pixel located at $\vec{x}_i$; j is the gray level index in the histogram, j = 1, 2, ..., c, and c is the total number of gray levels of the histogram.
The EM algorithm in step c3 comprises the steps of:
c31. executing the E-step iteration: with $\vec{\theta}(t)$ and $V(t)$ held fixed, computing the $\beta_i(t+1)$ that maximize G:

$$\beta_i(t+1) = \frac{\omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta}(t),V(t))}{\sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta}(t),V(t))};$$
c32. executing the M-step iteration: with β held fixed, estimating the $\vec{\theta}(t+1)$ and $V(t+1)$ that maximize G in

$$\log f \ge G(\vec{\theta},V,\beta_1,\dots,\beta_{N_{\vec{\theta},V}}) = \sum_{i=1}^{N_{\vec{\theta},V}} \log\left(\frac{\omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta},V)}{\beta_i}\right)^{\beta_i},$$

which yields, after derivation,

$$\vec{\theta}(t+1) = \sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \beta_i(t+1)\,\vec{x}_i, \qquad V(t+1) = \alpha \sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \beta_i(t+1)\,(\vec{x}_i - \vec{\theta}(t))(\vec{x}_i - \vec{\theta}(t))^T,$$

where α is a balance factor.
The iteration stopping criterion in step c4 is

$$\|\vec{\theta}(t+1)-\vec{\theta}(t)\| < \varepsilon, \qquad \|V(t+1)-V(t)\| < \varepsilon,$$

where ε is a positive number greater than 0.
The adaptive-window tracking algorithm for video moving targets provided by the present invention has the following advantages:
1) The method builds a Gaussian mixture model from the distribution statistics of the target to be tracked, describing the position and size information of the target accurately and improving the reliability of the method on the tracked target.
2) A parameter estimation approach is adopted: the EM algorithm iteratively computes the model parameters during tracking, and the new tracking window position and size are determined from the estimation result, realizing adaptive adjustment of the tracking window. The distribution statistics of the target feature are thus extracted accurately while the influence of background information is reduced, further improving the reliability of the target information during tracking; at the same time, the tracking process is protected from drastic changes of the target size in the image, greatly improving its stability.
3) Because the computational cost of the EM algorithm adopted by this adaptive-bandwidth tracking method is small, the tracking of the target has good real-time performance.
4) The accurate position and size of the tracked target, continuously updated with the method, can provide control variables for automatic camera zoom and rotation following the moving target, thereby improving the robustness of the whole tracking system.
Description of drawings
Fig. 1 is the overall flowchart of the video sequence processing of the present invention;
Fig. 2 is the flowchart of the iterative computation of the model parameters with the EM algorithm;
Fig. 3(a) is a schematic diagram of the two-dimensional distribution surface of the Gaussian mixture model;
Fig. 3(b) is a schematic diagram of the planar projection of the two-dimensional distribution surface of the Gaussian mixture model;
Fig. 3(c) is a schematic diagram of the model obtained by parameter estimation with the EM algorithm;
Fig. 4(a) is the tracking result for frame 1 of an embodiment video sequence in which the tracked target approaches the camera;
Fig. 4(b) is the tracking result for frame 63 of that sequence;
Fig. 4(c) is the tracking result for frame 278 of that sequence;
Fig. 5(a) is the tracking result for frame 18 of an embodiment video sequence in which the tracked target moves away from the camera;
Fig. 5(b) is the tracking result for frame 229 of that sequence;
Fig. 5(c) is the tracking result for frame 308 of that sequence.
Embodiment
The present invention is described in further detail below in conjunction with the accompanying drawings and an embodiment of the invention.
The core idea of the present invention is to provide a robust, real-time adaptive-window tracking method for video moving targets: the moving target in the initial frame of the video sequence is detected; once the initial position and size of the target to be tracked are obtained, an initial Gaussian mixture model is built as the target template; then a candidate template is computed from the distribution statistics of the tracked target feature in the next frame; the prior weights of the pixels are computed from the two templates; the expectation-maximization (EM) algorithm iteratively computes the model parameters, yielding the candidate template most similar to the target template; this candidate template then serves as the target template for the following EM iteration. The model parameters are thus continuously updated, so that the tracking window adapts to the position and size of the tracked target across the frames of the sequence.
The method can process video captured in real time by a camera as well as pre-recorded video files, as the sketch below illustrates.
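As an illustration only, the overall loop might be sketched in Python with OpenCV as follows. The helper names detect_target, build_template, em_iterate and draw_window are hypothetical placeholders for steps 101 to 104 (minimal versions of the first two and of the EM step are sketched in the sections below), and the file name is a placeholder:

```python
import cv2

cap = cv2.VideoCapture("sequence.avi")        # or a camera index for live capture

_, f0 = cap.read()                            # two frames for initial detection
_, f1 = cap.read()
g0 = cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY)
g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
theta, V = detect_target(g0, g1)              # step 101: initial center and covariance
q = build_template(g1, theta, V)              # step 102: target template, equation (3)

while True:
    ok, frame = cap.read()
    if not ok:                                # step 105: last frame reached, stop
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    theta, V = em_iterate(gray, q, theta, V)  # step 103: EM parameter estimation
    draw_window(frame, theta, V)              # step 104: update the adaptive window
```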
Fig. 1 is the overall flowchart of the video sequence processing of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step 101: read in the video sequence, load the video file into the buffer of the target tracking system, detect the moving target in the first frame of the sequence, and obtain the initial position and size of the target to be tracked.
The target tracking system refers to a computer system, or similar device, capable of processing video information.
The detection of the moving target proceeds as follows: a differencing method, an optical flow method or a similar algorithm detects the region of the target to be tracked in the current image; the center of that region, $\vec{\theta}_0 = (u_0, v_0)'$, is the mean vector, and the initial size is expressed by the Gaussian covariance matrix $V_0$:

$$V_0 = \begin{pmatrix} \sigma_1^2 & \rho\,\sigma_1\sigma_2 \\ \rho\,\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix} \qquad (1)$$

where $\sigma_1$ and $\sigma_2$ represent the standard deviations of the distribution of the feature of the target to be tracked and ρ represents the correlation coefficient of the feature distribution. The square roots of the two eigenvalues of $V_0$ represent the major and minor axes of the region of the target to be tracked, and the correlation coefficient ρ can be converted into a rotation angle describing the direction of the region.
The differencing methods include inter-frame differencing and background subtraction: inter-frame differencing obtains the moving target by subtracting adjacent or related frames of the video sequence; background subtraction detects the moving target by differencing the current frame against a reference background model.
The optical flow method computes the intensity distribution of the optical flow field from two adjacent frames of the video sequence and segments the moving target from the target contour positions that the intensity distribution reveals. Optical flow refers to the change of the gray-level pattern of an object's surface under illumination; it reflects the image change caused by motion during a time interval dt, and the optical flow field represents the three-dimensional velocity field of object points through the two-dimensional image.
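As an illustrative sketch of the inter-frame differencing variant, the following Python function derives the initial mean vector and covariance matrix from the foreground pixels of a thresholded difference image; the threshold value and the function name are our assumptions, not taken from the patent:

```python
import numpy as np

def detect_target(prev_gray, curr_gray, thresh=25.0):
    """Initial detection by inter-frame differencing, yielding theta_0 and V_0."""
    diff = np.abs(curr_gray.astype(np.float64) - prev_gray.astype(np.float64))
    ys, xs = np.nonzero(diff > thresh)              # foreground pixel coordinates
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    theta0 = pts.mean(axis=0)                       # mean vector: center (u0, v0)
    V0 = np.cov(pts, rowvar=False)                  # covariance: size and orientation
    return theta0, V0
```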
Step 102: using the initial center position $\vec{\theta}_0$ and initial size $V_0$ obtained in step 101, extract the distribution statistics of the feature of the moving target to be tracked and build a Gaussian mixture model as the target template.
The feature of the moving target to be tracked refers to the gray level, color, texture or another image feature quantity of the moving target, such as hue and saturation. To describe the distribution of these feature quantities, a weighted probability density function carrying spatial position information, namely a Gaussian mixture model of feature intensity, is adopted; because this model takes the spatial position of the pixels into account, it improves the robustness of the feature description.
The present embodiment takes the extraction of the gray-level feature of the moving target as an example (the extraction of image feature quantities such as color and texture proceeds similarly) to illustrate how the Gaussian mixture model is built from the distribution statistics of the feature of the target to be tracked:
Let the center $\vec{\theta}_0 = (u_0, v_0)'$ of the region of the target to be tracked obtained in step 101 and its initial size $V_0$ be, respectively, the mean vector and the covariance matrix of a Gaussian distribution. The probability density function of the two-dimensional Gaussian distribution can then be expressed as:

$$\mathrm{Gauss}(\vec{x}_i;\vec{\theta}_0,V_0) = \frac{1}{2\pi\,|V_0|^{1/2}} \exp\left\{-\frac{1}{2}(\vec{x}_i-\vec{\theta}_0)^T V_0^{-1}(\vec{x}_i-\vec{\theta}_0)\right\} \qquad (2)$$

Therefore the Gaussian mixture model of the target template can be described by:

$$q_j = C_h \sum_{i=1}^{N_{\vec{\theta}_0,V_0}} \mathrm{Gauss}(\vec{x}_i;\vec{\theta}_0,V_0)\,\delta[b(\vec{x}_i)-j] \qquad (3)$$

where δ is the delta function, $N_{\vec{\theta}_0,V_0}$ is the total number of pixels in the region of the target to be tracked, the function $b(\vec{x}_i)$ is the mapping of the gray level of the pixel located at $\vec{x}_i$, j is the gray level index in the histogram, j = 1, 2, ..., c (c being the total number of gray levels of the histogram), and $C_h$ is a normalization constant. An illustrative numpy sketch of these formulas follows.
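A minimal numpy sketch of equations (2) and (3); the uniform quantization b(x) = floor(x * c / 256), the default of c = 32 gray levels and the 3-sigma bounding box are illustrative assumptions on our part:

```python
import numpy as np

def gaussian_weights(pts, theta, V):
    """Two-dimensional Gaussian density of equation (2) at the points pts (N x 2)."""
    d = pts - theta
    m = np.einsum('ni,ij,nj->n', d, np.linalg.inv(V), d)  # squared Mahalanobis distances
    return np.exp(-0.5 * m) / (2.0 * np.pi * np.sqrt(np.linalg.det(V)))

def region_pixels(gray, theta, V, n_sigma=3.0):
    """Coordinates and gray values of the pixels in the target's 3-sigma bounding box."""
    h, w = gray.shape
    sx, sy = n_sigma * np.sqrt(np.diag(V))
    x0, x1 = max(int(theta[0] - sx), 0), min(int(theta[0] + sx) + 1, w)
    y0, y1 = max(int(theta[1] - sy), 0), min(int(theta[1] + sy) + 1, h)
    xs, ys = np.meshgrid(np.arange(x0, x1), np.arange(y0, y1))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    return pts, gray[y0:y1, x0:x1].ravel()

def build_template(gray, theta, V, c=32):
    """Gaussian-weighted gray-level histogram q_j of equation (3)."""
    pts, vals = region_pixels(gray, theta, V)
    wts = gaussian_weights(pts, theta, V)        # Gauss(x_i; theta_0, V_0)
    bins = (vals.astype(int) * c) // 256         # b(x_i): gray level index j
    q = np.bincount(bins, weights=wts, minlength=c)
    return q / q.sum()                           # C_h normalization
```

The candidate template of the later steps can be computed with the same build_template, evaluated at the candidate center and size.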
Step 103: read the next frame of the video sequence into the buffer and iteratively compute the Gaussian mixture model parameters of the tracked target in the current frame with the EM algorithm, obtaining the candidate template most similar to the target template.
The EM algorithm is an iterative algorithm for maximum-likelihood parameter estimation under incomplete data: given measurement data x and a model described by parameter θ, it seeks the θ that maximizes the likelihood function,

$$\hat{\theta}^* = \arg\max_{\theta} p(x\,|\,\theta),$$

and in most cases it has reliable global convergence. The video moving-target tracking problem of the present invention can be regarded as an incomplete-data problem; the EM framework can therefore estimate the parameters describing the target position and size, and the estimated parameters realize the adaptation of the tracking window.
The Gaussian mixture model parameters of the tracked target are: the mean vector, i.e. the target center position, and the covariance matrix, i.e. the target size.
The tracking process can therefore be regarded as the search in the current frame for the candidate template most similar to the target template, converting the tracking problem into maximizing the similarity coefficient between the target template and the candidate template.
The candidate template is:

$$p_j(\vec{\theta},V) = C_h \sum_{i=1}^{N_{\vec{\theta},V}} \mathrm{Gauss}(\vec{x}_i;\vec{\theta},V)\,\delta[b(\vec{x}_i)-j] \qquad (4)$$

where $\vec{\theta}$ and $V$ describe the center and size of the candidate target, respectively.
The similarity coefficient is the Bhattacharyya coefficient:

$$\rho(\vec{\theta},V) = \sum_{j=1}^{c} \sqrt{q_j\, p_j(\vec{\theta},V)} \qquad (5)$$

where $q_j$ is the target template and $p_j(\vec{\theta},V)$ is the candidate template.
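Once both templates are normalized histograms, equation (5) is a one-liner; a minimal sketch:

```python
import numpy as np

def bhattacharyya(q, p):
    """Bhattacharyya coefficient of equation (5) between two normalized histograms."""
    return float(np.sum(np.sqrt(q * p)))
```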
Step 104: update the center, size and direction of the adaptive tracking window according to the result of the EM iteration.
The updating proceeds as follows: from the Gaussian mixture model parameters obtained, comprising the mean vector and the covariance matrix, the mean vector serves as the center of the new tracking window; the square roots of the two eigenvalues of the covariance matrix serve as the major and minor axes of the new tracking window; and the rotation direction of the tracking window is represented by the angle converted from the correlation coefficient, i.e. the off-diagonal elements of the covariance matrix. Position, major axis, minor axis and angle approximately describe the updated window of the current target, expressing the tracking result. The target position, size, tracking window and related information of the previous frame all serve as input to the iteration on the next frame, and steps 103 to 104 repeat. This tracking process finds the Gaussian mixture distribution pattern most similar to the target model, so the distribution pattern matches the tracked target.
The angle refers to the angle between the major axis of the tracking window and the positive x-axis of the image coordinate system.
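The following sketch turns the estimated mean vector and covariance matrix into such a window; reading the axes and angle from the eigendecomposition is an equivalent formulation of the correlation-coefficient conversion described above, and the optional n_sigma scaling (the 3-sigma rule cited for Fig. 3(b)) is our assumption:

```python
import numpy as np

def window_from_model(theta, V, n_sigma=1.0):
    """Center, (major, minor) axes and rotation angle of the tracking window."""
    evals, evecs = np.linalg.eigh(V)           # eigenvalues in ascending order
    minor, major = n_sigma * np.sqrt(evals)    # square roots of eigenvalues = axes
    vx, vy = evecs[:, 1]                       # eigenvector of the larger eigenvalue
    angle = np.degrees(np.arctan2(vy, vx))     # major axis vs. image x-axis
    return (theta[0], theta[1]), (major, minor), angle
```

These values map directly onto an ellipse-drawing routine such as OpenCV's cv2.ellipse.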
Step 105: judge whether the frame just read is the last one of the video sequence; if so, the tracking process ends; otherwise return to step 103.
The judgment whether the frame read in is the last of the video sequence is made by reading whether the video sequence of the tracking system has finished, or by receiving a stop-tracking command issued by the system.
The above describes the overall flow of the processing method of the present invention; step 103 is now described in detail:
With the target center position $\vec{\theta}_0$, covariance matrix $V_0$ and target model $q_j$ of the initial frame known, the EM iteration is carried out; during the iteration, let $\vec{\theta}(t)$ and $V(t)$ be the input values of step t. Fig. 2 is the flowchart of the iterative computation of the model parameters with the EM algorithm. As shown in Fig. 2, the iterative computation of the Gaussian mixture model parameters of the tracked target in the current frame comprises the following steps:
Step 200: take the initial center position of the target to be tracked in the current (initial) frame obtained in step 101 as the initial mean vector $\vec{\theta}_0$, and the initial size as the initial covariance matrix $V_0$, as the initial values of the iteration.
Step 201: compute the Gaussian mixture distribution model of the candidate target determined by the center $\vec{\theta}(t)$ and size $V(t)$ of the target to be tracked in the current frame, taking it as the candidate template:

$$p_j(\vec{\theta},V) = C_h \sum_{i=1}^{N_{\vec{\theta},V}} \mathrm{Gauss}(\vec{x}_i;\vec{\theta},V)\,\delta[b(\vec{x}_i)-j] \qquad (4)$$

where $\vec{\theta}$ and $V$ describe the position and size of the candidate target, respectively.
Step 202: compute the prior weights from the target template and the candidate template.
With the target template and the candidate template known from the preceding steps, the similarity coefficient is the Bhattacharyya coefficient of equation (5). A Taylor expansion of equation (5) gives:

$$\rho \approx c_1 + c_2 \sum_{i=1}^{N_{\vec{\theta},V}} \omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta},V) \qquad (6)$$

where $\omega(\vec{x}_i)$ is the prior weight:

$$\omega(\vec{x}_i) = \sum_{j=1}^{c} \sqrt{\frac{q_j}{p_j(\vec{\theta},V)}}\,\delta[b(\vec{x}_i)-j] \qquad (7)$$

in which the function $b(\vec{x}_i)$ is the gray-level mapping of the pixel located at $\vec{x}_i$, j is the gray level index in the histogram, j = 1, 2, ..., c, and c is the total number of gray levels of the histogram. The meaning of equation (7) is: in the current frame, the larger the proportion of the gray value $b(\vec{x}_i)$ of pixel $\vec{x}_i$ in the target model, the larger $\omega(\vec{x}_i)$ becomes, meaning the probability that $\vec{x}_i$ is a pixel of the target is also larger.
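In code, equation (7) reduces to looking up sqrt(q_j / p_j) at each pixel's own gray level; a minimal sketch, where the small epsilon guarding against empty candidate bins is our addition:

```python
import numpy as np

def prior_weights(bins, q, p, eps=1e-12):
    """Prior weights of equation (7): omega(x_i) = sqrt(q_j / p_j) for j = b(x_i)."""
    ratio = np.sqrt(q / np.maximum(p, eps))    # sqrt(q_j / p_j) for every gray level j
    return ratio[bins]                         # one weight per pixel, via its own bin
```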
Let

$$f = \sum_{i=1}^{N_{\vec{\theta},V}} \omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta},V) \qquad (8)$$

It can be seen that for ρ to attain its maximum, f must attain its maximum; and maximizing f can be realized by seeking the optimal estimates of the Gauss model parameters $\vec{\theta}$ and $V$ in f with the EM iterative algorithm. The two parameters $\vec{\theta}$ and $V$ that maximize ρ then represent the position and size of the target in the current frame.
Step 203: carry out the EM iteration. The E step is:

with $\vec{\theta}(t)$ and $V(t)$ held fixed, compute the $\beta_i(t+1)$ that maximize G, by the formula:

$$\beta_i(t+1) = \frac{\omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta}(t),V(t))}{\sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta}(t),V(t))} \qquad (9)$$

According to Jensen's inequality, the Gaussian mixture model satisfies:

$$\log f \ge G(\vec{\theta},V,\beta_1,\dots,\beta_{N_{\vec{\theta},V}}) = \sum_{i=1}^{N_{\vec{\theta},V}} \log\left(\frac{\omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta},V)}{\beta_i}\right)^{\beta_i} \qquad (10)$$

where the $\beta_i$ are arbitrary constants satisfying $\sum_{i=1}^{N_{\vec{\theta},V}} \beta_i = 1$ and $\beta_i \ge 0$.
Then the M step of the EM iteration is carried out:

with β held fixed, estimate the $\vec{\theta}(t+1)$ and $V(t+1)$ that maximize G in equation (10), i.e.

$$\sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \beta_i(t+1)\,\partial \ln \mathrm{Gauss}(\vec{x}_i;\Theta)/\partial\Theta = 0 \qquad (11)$$

where $\Theta = (\vec{\theta}, V)$. From equation (11):

$$\vec{\theta}(t+1) = \sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \beta_i(t+1)\,\vec{x}_i \qquad (12)$$

$$V(t+1) = \alpha \sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \beta_i(t+1)\,(\vec{x}_i - \vec{\theta}(t))(\vec{x}_i - \vec{\theta}(t))^T \qquad (13)$$

Here α is a balance factor whose theoretical value is 1; in practice it depends on how strongly image background noise affects the target image. Although an error in α causes a computational error in equation (13), the iterative self-adaptation of the EM algorithm keeps this error acceptable; the present embodiment takes α ≈ 1.2.
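One E/M iteration pair, equations (9), (12) and (13), might be sketched as follows, reusing gaussian_weights from the template sketch above; pts are the candidate-region pixel coordinates and w their prior weights from equation (7):

```python
import numpy as np

def em_step(pts, w, theta, V, alpha=1.2):
    """One EM iteration: equations (9), (12) and (13)."""
    # E step, equation (9): beta_i proportional to omega(x_i) * Gauss(x_i; theta(t), V(t))
    g = gaussian_weights(pts, theta, V)
    beta = w * g
    beta /= beta.sum()
    # M step, equations (12) and (13): new mean and alpha-scaled covariance
    theta_new = beta @ pts
    d = pts - theta                            # deviations from theta(t), as in eq. (13)
    V_new = alpha * (beta[:, None] * d).T @ d
    return theta_new, V_new
```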
Step 204: judge from the iteration stopping criterion whether to stop the EM algorithm; if the parameters have converged, execute step 206; otherwise execute step 205 and return to step 201 for the next round of iteration.
The iteration stopping criterion of the EM algorithm is:

$$\|\vec{\theta}(t+1)-\vec{\theta}(t)\| < \varepsilon, \qquad \|V(t+1)-V(t)\| < \varepsilon \qquad (14)$$
where ε is a sufficiently small positive number. In the computation, the smaller ε is chosen, the higher the parameter accuracy obtained, i.e. the higher the tracking accuracy; but the number of iterations increases accordingly and the real-time performance of the algorithm worsens, so in practice a suitable ε can only be chosen as a compromise between the two.
In the present embodiment one pixel is chosen as ε, that is, convergence is declared when the norm of the parameter difference between two successive iteration steps is smaller than one pixel. For the mean vector θ the norm is the infinity norm of the vector, i.e. the maximum absolute value of its elements; for the covariance matrix V it is the infinity norm of the matrix, i.e. the maximum over rows of the sum of the absolute values of the row's elements.
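The test of equation (14) with exactly these norms can be sketched as:

```python
import numpy as np

def converged(theta_new, theta, V_new, V, eps=1.0):
    """Stopping criterion of equation (14), infinity norms, eps of one pixel."""
    return (np.linalg.norm(theta_new - theta, ord=np.inf) < eps and
            np.linalg.norm(V_new - V, ord=np.inf) < eps)
```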
A pixel value here refers to the resolution of the camera, determined by the number of photosensitive elements on its photoelectric sensor, one photosensitive element corresponding to one pixel.
Step 205: assign $\vec{\theta}(t+1)$ and $V(t+1)$ to $\vec{\theta}(t)$ and $V(t)$ respectively to continue the iteration, determine the region given by the position and size of the new iteration point, and then execute step 201.

Determining the new iteration point position and size means assigning $\vec{\theta}(t+1)$ and $V(t+1)$ to $\vec{\theta}(t)$ and $V(t)$ and substituting them into the candidate template of equation (4) to continue the iteration.
Step 206: end the current round of iteration, obtaining the candidate template that matches the target template.
At this point the new estimates $\vec{\theta}(t+1)$ and $V(t+1)$ are passed, as the new center and size of the tracked moving target, to step 104, where the tracking window is updated; this continues until the tracking process finishes.
Fig. 3(a) is a schematic diagram of the two-dimensional distribution surface of the Gaussian mixture model, used to describe the model of the tracked moving target.
Fig. 3(b) is a schematic diagram of the planar projection of that two-dimensional distribution: as shown in Fig. 3(b), taking points of equal height on the two-dimensional Gaussian surface yields a series of similar, consistently oriented ellipses. According to the "3-sigma rule" of the Gaussian distribution, the ellipse obtained at three standard deviations contains more than 99% of the data points.
Fig. 3(c) is a schematic diagram of the model obtained by parameter estimation with the EM algorithm: as shown in Fig. 3(c), the ellipse drawn from the Gaussian mixture model parameters of the target to be tracked in the current frame, obtained by the EM iteration, coincides with a contour ellipse of the Gaussian distribution.
Figs. 4(a), 4(b) and 4(c) show the tracking results for frames 1, 63 and 278 of an embodiment video sequence in which the tracked target approaches the camera. As shown in the figures, the upper right corner of each figure is a partial enlargement of the target area. When the target turns, the tracking window adjusts its size and direction in time and describes the target accurately.
Figs. 5(a), 5(b) and 5(c) show the tracking results for frames 18, 229 and 308 of an embodiment video sequence in which the tracked target moves away from the camera. As shown in the figures, the lower right corner of each figure is a partial enlargement of the target area. When the target gradually moves away from the camera, the tracking window shrinks with the image size of the target, describing the target size accurately while reducing the influence of background information.
The above are only preferred embodiments of the present invention and are not intended to limit its scope of protection.

Claims (9)

1. An adaptive-window tracking method for video moving targets, characterized in that the method comprises the steps of:
a. reading in a video sequence, detecting the moving target to be tracked in a frame of the sequence, and obtaining the initial center position $\vec{\theta}_0$ and initial size $V_0$ of the target;
b. using the initial center position $\vec{\theta}_0$ and initial size $V_0$ obtained in step a, extracting the distribution statistics of the feature of the moving target to be tracked, and building a Gaussian mixture model as the target template;
c. reading in the next frame of the video sequence and iteratively computing the Gaussian mixture model parameters with the expectation-maximization (EM) algorithm, obtaining the candidate template that matches the target template;
d. updating the center, size and direction of the tracking window according to the result of the EM iteration, and judging whether the frame just read is the last one; if so, the current processing ends, otherwise return to step c.
2. The tracking method according to claim 1, characterized in that the detection of the moving target to be tracked in step a proceeds as follows: a differencing method or an optical flow algorithm is applied to the frame, yielding the mean vector $\vec{\theta}_0 = (u_0, v_0)'$ as the center of the target region and the Gaussian covariance matrix $V_0$ as the initial size of the target region,

$$V_0 = \begin{pmatrix} \sigma_1^2 & \rho\,\sigma_1\sigma_2 \\ \rho\,\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$

where $\sigma_1$ and $\sigma_2$ are the standard deviations of the distribution of the feature to be tracked and ρ is the correlation coefficient of the feature distribution; the square roots of the two eigenvalues of the covariance matrix $V_0$ serve as the major and minor axes of the region of the target to be tracked, and the correlation coefficient ρ, after conversion, serves as the rotation angle describing the direction of that region.
3. The tracking method according to claim 1, characterized in that the building of the Gaussian mixture model as the target template in step b proceeds as follows:
with the center $\vec{\theta}_0$ of the region of the target to be tracked and its initial size $V_0$ as, respectively, the mean vector and covariance matrix of a Gaussian distribution, the target template is described by means of the probability density function of the two-dimensional Gaussian distribution as:

$$q_j = C_h \sum_{i=1}^{N_{\vec{\theta}_0,V_0}} \mathrm{Gauss}(\vec{x}_i;\vec{\theta}_0,V_0)\,\delta[b(\vec{x}_i)-j]$$

where $\mathrm{Gauss}(\vec{x}_i;\vec{\theta}_0,V_0)$ is the probability density function of the two-dimensional Gaussian distribution; δ is the delta function; $N_{\vec{\theta}_0,V_0}$ is the total number of pixels in the region of the target to be tracked; the function $b(\vec{x}_i)$ is the mapping of the gray level of the pixel located at $\vec{x}_i$; j is the gray level index in the histogram, j = 1, 2, ..., c, with c the total number of gray levels of the histogram; and $C_h$ is a normalization constant.
4. The tracking method according to claim 1, characterized in that the iterative computation of the Gaussian mixture model parameters with the EM algorithm in step c comprises the steps of:
c1. initializing the iteration with the initial center position and initial size of the target to be tracked obtained in step a;
c2. computing the Gaussian mixture distribution model of the candidate target determined by the center $\vec{\theta}(t)$ and size $V(t)$ of the tracked target in the current frame, taking it as the candidate template, and computing the prior weights from the target template of step b and this candidate template;
c3. executing the EM algorithm to compute the Gaussian mixture model parameters, obtaining the center $\vec{\theta}(t+1)$ and size $V(t+1)$ for the current frame;
c4. judging from the iteration stopping criterion whether to stop the EM algorithm; if the parameters have converged, ending the current round of iteration and obtaining the candidate template that matches the target template; otherwise executing step c5;
c5. assigning $\vec{\theta}(t+1)$ and $V(t+1)$ obtained in step c3 to $\vec{\theta}(t)$ and $V(t)$ respectively, as the new center and size of the tracked moving target, determining the tracking region corresponding to the new iteration point, and then executing step c2.
5. The tracking method according to claim 1, characterized in that the updating of the center, size and direction of the tracking window in step d proceeds as follows:
from the Gaussian mixture model parameters obtained, comprising the mean vector and the covariance matrix, the mean vector serves as the center of the new tracking window, the square roots of the two eigenvalues of the covariance matrix serve as the major and minor axes of the new tracking window, and the angle obtained by conversion from the correlation coefficient of the covariance matrix serves as the rotation direction of the new tracking window; center, major axis, minor axis and angle describe the updated window of the currently tracked target and thereby express the tracking result; the position, size and tracking window of the target in the previous frame then serve as input to the iteration on the next frame.
6. The tracking method according to claim 1 or 4, characterized in that the candidate template in step c2 is

$$p_j(\vec{\theta},V) = C_h \sum_{i=1}^{N_{\vec{\theta},V}} \mathrm{Gauss}(\vec{x}_i;\vec{\theta},V)\,\delta[b(\vec{x}_i)-j],$$

where $\vec{\theta}$ and $V$ are the center and size of the candidate target, respectively.
7. The tracking method according to claim 1 or 4, characterized in that the prior-weights formula in step c2 is

$$\omega(\vec{x}_i) = \sum_{j=1}^{c} \sqrt{\frac{q_j}{p_j(\vec{\theta},V)}}\,\delta[b(\vec{x}_i)-j],$$

where the function $b(\vec{x}_i)$ is the gray-level mapping of the pixel located at $\vec{x}_i$; j is the gray level index in the histogram, j = 1, 2, ..., c, and c is the total number of gray levels of the histogram.
8. The tracking method according to claim 1 or 4, characterized in that the EM algorithm in step c3 comprises the steps of:
c31. executing the E-step iteration: with $\vec{\theta}(t)$ and $V(t)$ held fixed, computing the $\beta_i(t+1)$ that maximize G:

$$\beta_i(t+1) = \frac{\omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta}(t),V(t))}{\sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta}(t),V(t))};$$

c32. executing the M-step iteration: with β held fixed, estimating the $\vec{\theta}(t+1)$ and $V(t+1)$ that maximize G in

$$\log f \ge G(\vec{\theta},V,\beta_1,\dots,\beta_{N_{\vec{\theta},V}}) = \sum_{i=1}^{N_{\vec{\theta},V}} \log\left(\frac{\omega(\vec{x}_i)\,\mathrm{Gauss}(\vec{x}_i;\vec{\theta},V)}{\beta_i}\right)^{\beta_i},$$

which yields, after derivation,

$$\vec{\theta}(t+1) = \sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \beta_i(t+1)\,\vec{x}_i, \qquad V(t+1) = \alpha \sum_{i=1}^{N_{\vec{\theta}(t),V(t)}} \beta_i(t+1)\,(\vec{x}_i - \vec{\theta}(t))(\vec{x}_i - \vec{\theta}(t))^T,$$

where α is a balance factor.
9. The tracking method according to claim 1 or 4, characterized in that the iteration stopping criterion in step c4 is

$$\|\vec{\theta}(t+1)-\vec{\theta}(t)\| < \varepsilon, \qquad \|V(t+1)-V(t)\| < \varepsilon,$$

where ε is a positive number greater than 0.
CN2007101202284A 2007-08-13 2007-08-13 Adaptive-window tracking method for video moving targets Expired - Fee Related CN101369346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2007101202284A CN101369346B (en) 2007-08-13 2007-08-13 Adaptive-window tracking method for video moving targets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2007101202284A CN101369346B (en) 2007-08-13 2007-08-13 Adaptive-window tracking method for video moving targets

Publications (2)

Publication Number Publication Date
CN101369346A true CN101369346A (en) 2009-02-18
CN101369346B CN101369346B (en) 2010-09-15

Family

ID=40413148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007101202284A Expired - Fee Related CN101369346B (en) 2007-08-13 2007-08-13 Tracing method for video movement objective self-adapting window

Country Status (1)

Country Link
CN (1) CN101369346B (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101599175A (en) * 2009-06-11 2009-12-09 北京中星微电子有限公司 Determine the detection method and the image processing equipment of alteration of shooting background
CN101968884A (en) * 2009-07-28 2011-02-09 索尼株式会社 Method and device for detecting target in video image
CN101998115A (en) * 2010-10-27 2011-03-30 江苏科技大学 Embedded-type network camera with passenger flow counting function and passenger flow counting method
CN102005041A (en) * 2010-11-02 2011-04-06 浙江大学 Characteristic point matching method aiming at image sequence with circulation loop
CN102073851A (en) * 2011-01-13 2011-05-25 北京科技大学 Method and system for automatically identifying urban traffic accident
CN102074016A (en) * 2009-11-24 2011-05-25 杭州海康威视软件有限公司 Device and method for automatically tracking motion target
CN102103754A (en) * 2009-12-21 2011-06-22 佳能株式会社 Subject tracking apparatus, subject region extraction apparatus, and control methods therefor
CN102122352A (en) * 2011-03-01 2011-07-13 西安电子科技大学 Characteristic value distribution statistical property-based polarized SAR image classification method
CN101739692B (en) * 2009-12-29 2012-05-30 天津市亚安科技股份有限公司 Fast correlation tracking method for real-time video target
CN102592125A (en) * 2011-12-20 2012-07-18 福建省华大数码科技有限公司 Moving object detection method based on standard deviation characteristic
CN101901334B (en) * 2009-05-31 2013-09-11 汉王科技股份有限公司 Static object detection method
CN104156976A (en) * 2013-05-13 2014-11-19 哈尔滨点石仿真科技有限公司 Multiple characteristic point tracking method for detecting shielded object
CN104199902A (en) * 2014-08-27 2014-12-10 中国科学院自动化研究所 Similarity measurement computing method of linear dynamical systems
CN104394313A (en) * 2014-10-27 2015-03-04 成都理想境界科技有限公司 Special effect video generating method and device
CN104700415A (en) * 2015-03-23 2015-06-10 华中科技大学 Method of selecting matching template in image matching tracking
CN105744152A (en) * 2014-12-25 2016-07-06 佳能株式会社 Object Tracking Apparatus, Control Method Therefor And Storage Medium
CN105975923A (en) * 2016-05-03 2016-09-28 湖南拓视觉信息技术有限公司 Method and system for tracking human object
CN106023062A (en) * 2016-05-20 2016-10-12 深圳市大疆创新科技有限公司 Data processing method, system and device based on window operation
CN106097388A (en) * 2016-06-07 2016-11-09 大连理工大学 In video frequency object tracking, target prodiction, searching scope adaptive adjust and the method for Dual Matching fusion
CN106127811A (en) * 2016-06-30 2016-11-16 西北工业大学 Target scale adaptive tracking method based on context
CN106485733A (en) * 2016-09-22 2017-03-08 电子科技大学 A kind of method following the tracks of interesting target in infrared image
CN107818577A (en) * 2017-10-26 2018-03-20 滁州学院 A kind of Parts Recognition and localization method based on mixed model
WO2018119606A1 (en) * 2016-12-26 2018-07-05 Bayerische Motoren Werke Aktiengesellschaft Method and apparatus for representing a map element and method and apparatus for locating vehicle/robot
WO2018228021A1 (en) * 2017-06-16 2018-12-20 北京京东尚科信息技术有限公司 Method and apparatus for determining target rotation direction, computer readable medium and electronic device
CN111539993A (en) * 2020-04-13 2020-08-14 中国人民解放军军事科学院国防科技创新研究院 Space target visual tracking method based on segmentation
CN112381053A (en) * 2020-12-01 2021-02-19 连云港豪瑞生物技术有限公司 Environment-friendly monitoring system with image tracking function

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1741069A (en) * 2005-09-22 2006-03-01 上海交通大学 Probability video tracing method based on adaptive surface model

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101901334B (en) * 2009-05-31 2013-09-11 汉王科技股份有限公司 Static object detection method
CN101599175A (en) * 2009-06-11 2009-12-09 北京中星微电子有限公司 Determine the detection method and the image processing equipment of alteration of shooting background
CN101599175B (en) * 2009-06-11 2014-04-23 北京中星微电子有限公司 Detection method for determining alteration of shooting background and image processing device
CN101968884A (en) * 2009-07-28 2011-02-09 索尼株式会社 Method and device for detecting target in video image
CN102074016A (en) * 2009-11-24 2011-05-25 杭州海康威视软件有限公司 Device and method for automatically tracking motion target
CN102103754A (en) * 2009-12-21 2011-06-22 佳能株式会社 Subject tracking apparatus, subject region extraction apparatus, and control methods therefor
CN102103754B (en) * 2009-12-21 2014-05-07 佳能株式会社 Subject tracking apparatus, subject region extraction apparatus, and control methods therefor
CN101739692B (en) * 2009-12-29 2012-05-30 天津市亚安科技股份有限公司 Fast correlation tracking method for real-time video target
CN101998115A (en) * 2010-10-27 2011-03-30 江苏科技大学 Embedded-type network camera with passenger flow counting function and passenger flow counting method
CN102005041A (en) * 2010-11-02 2011-04-06 浙江大学 Characteristic point matching method aiming at image sequence with circulation loop
CN102005041B (en) * 2010-11-02 2012-11-14 浙江大学 Characteristic point matching method aiming at image sequence with circulation loop
CN102073851B (en) * 2011-01-13 2013-01-02 北京科技大学 Method and system for automatically identifying urban traffic accident
CN102073851A (en) * 2011-01-13 2011-05-25 北京科技大学 Method and system for automatically identifying urban traffic accident
CN102122352B (en) * 2011-03-01 2012-10-24 西安电子科技大学 Characteristic value distribution statistical property-based polarized SAR image classification method
CN102122352A (en) * 2011-03-01 2011-07-13 西安电子科技大学 Characteristic value distribution statistical property-based polarized SAR image classification method
CN102592125A (en) * 2011-12-20 2012-07-18 福建省华大数码科技有限公司 Moving object detection method based on standard deviation characteristic
CN104156976A (en) * 2013-05-13 2014-11-19 哈尔滨点石仿真科技有限公司 Multiple characteristic point tracking method for detecting shielded object
CN104199902A (en) * 2014-08-27 2014-12-10 中国科学院自动化研究所 Similarity measurement computing method of linear dynamical systems
CN104394313A (en) * 2014-10-27 2015-03-04 成都理想境界科技有限公司 Special effect video generating method and device
US10013632B2 (en) 2014-12-25 2018-07-03 Canon Kabushiki Kaisha Object tracking apparatus, control method therefor and storage medium
CN105744152A (en) * 2014-12-25 2016-07-06 佳能株式会社 Object Tracking Apparatus, Control Method Therefor And Storage Medium
CN105744152B (en) * 2014-12-25 2019-06-18 佳能株式会社 Subject tracing equipment, picture pick-up device and subject method for tracing
CN104700415B (en) * 2015-03-23 2018-04-24 华中科技大学 The choosing method of matching template in a kind of images match tracking
CN104700415A (en) * 2015-03-23 2015-06-10 华中科技大学 Method of selecting matching template in image matching tracking
CN105975923B (en) * 2016-05-03 2020-02-21 湖南拓视觉信息技术有限公司 Method and system for tracking human objects
CN105975923A (en) * 2016-05-03 2016-09-28 湖南拓视觉信息技术有限公司 Method and system for tracking human object
CN106023062B (en) * 2016-05-20 2018-06-26 深圳市大疆创新科技有限公司 Data processing method, system and device based on window operation
CN106023062A (en) * 2016-05-20 2016-10-12 深圳市大疆创新科技有限公司 Data processing method, system and device based on window operation
CN106097388B (en) * 2016-06-07 2018-12-18 大连理工大学 The method that target prodiction, searching scope adaptive adjustment and Dual Matching merge in video frequency object tracking
CN106097388A (en) * 2016-06-07 2016-11-09 大连理工大学 In video frequency object tracking, target prodiction, searching scope adaptive adjust and the method for Dual Matching fusion
CN106127811A (en) * 2016-06-30 2016-11-16 西北工业大学 Target scale adaptive tracking method based on context
CN106485733A (en) * 2016-09-22 2017-03-08 电子科技大学 A kind of method following the tracks of interesting target in infrared image
WO2018119606A1 (en) * 2016-12-26 2018-07-05 Bayerische Motoren Werke Aktiengesellschaft Method and apparatus for representing a map element and method and apparatus for locating vehicle/robot
WO2018228021A1 (en) * 2017-06-16 2018-12-20 北京京东尚科信息技术有限公司 Method and apparatus for determining target rotation direction, computer readable medium and electronic device
RU2754641C2 (en) * 2017-06-16 2021-09-06 Бейдзин Цзиндун Шанкэ Информейшн Текнолоджи Ко., Лтд. Method and device for determining direction of rotation of target object, computer-readable media and electronic device
US11120269B2 (en) 2017-06-16 2021-09-14 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and apparatus for determining target rotation direction, computer readable medium and electronic device
CN107818577A (en) * 2017-10-26 2018-03-20 滁州学院 A kind of Parts Recognition and localization method based on mixed model
CN111539993A (en) * 2020-04-13 2020-08-14 中国人民解放军军事科学院国防科技创新研究院 Space target visual tracking method based on segmentation
CN112381053A (en) * 2020-12-01 2021-02-19 连云港豪瑞生物技术有限公司 Environment-friendly monitoring system with image tracking function

Also Published As

Publication number Publication date
CN101369346B (en) 2010-09-15

Similar Documents

Publication Publication Date Title
CN101369346B (en) Adaptive-window tracking method for video moving targets
CN101499128B (en) Three-dimensional human face action detecting and tracing method based on video stream
CN103149939B (en) A kind of unmanned plane dynamic target tracking of view-based access control model and localization method
Wang et al. Lane detection using catmull-rom spline
CN101673403B (en) Target following method in complex interference scene
Szeliski et al. Robust shape recovery from occluding contours using a linear smoother
CN101252677B (en) Object tracking method based on multi-optical spectrum image sensor
CN104616318A (en) Moving object tracking method in video sequence image
CN101777116A (en) Method for analyzing facial expressions on basis of motion tracking
Nakamura Real-time 3-D object tracking using Kinect sensor
CN103247032B (en) A kind of faint Extended target localization method based on pose compensation
EP3525000B1 (en) Methods and apparatuses for object detection in a scene based on lidar data and radar data of the scene
Salvi et al. Visual SLAM for 3D large-scale seabed acquisition employing underwater vehicles
CN102708382B (en) Multi-target tracking method based on variable processing windows and variable coordinate systems
Astrom et al. Motion estimation in image sequences using the deformation of apparent contours
CN101127121A (en) Target tracking algorism based on self-adaptive initial search point forecast
EP3525131A1 (en) Methods and apparatuses for object detection in a scene represented by depth data of a range detection sensor and image data of a camera
Park et al. Robust photogeometric localization over time for map-centric loop closure
CN102592290A (en) Method for detecting moving target region aiming at underwater microscopic video
Marimon et al. Particle filter-based camera tracker fusing marker and feature point cues
Song et al. Closed-loop tracking and change detection in multi-activity sequences
Rahimi et al. Tracking people with a sparse network of bearing sensors
Shahrokni et al. Fast texture-based tracking and delineation using texture entropy
Einhorn et al. A Hybrid Kalman Filter Based Algorithm for Real-time Visual Obstacle Detection.
Tomono Building an object map for mobile robots using LRF scan matching and vision-based object recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EXPY Termination of patent right or utility model
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100915

Termination date: 20140813