CN106127811A - Target scale adaptive tracking method based on context - Google Patents

Target scale adaptive tracking method based on context

Info

Publication number
CN106127811A
CN106127811A CN201610502966.4A CN201610502966A CN 106127811 A
Authority
CN
China
Prior art keywords
target
area
region
formula
coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610502966.4A
Other languages
Chinese (zh)
Inventor
蒋晓悦
邹贽丞
冯晓毅
李会方
吴俊
谢红梅
何贵青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201610502966.4A priority Critical patent/CN106127811A/en
Publication of CN106127811A publication Critical patent/CN106127811A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing

Abstract

The invention provides a context-based target scale adaptive tracking method and relates to the field of image tracking. Building on the existing scale-adaptive mean shift algorithm, the invention improves the target scale regulation mechanism with a target scale adjustment algorithm based on appearance features and contextual information. The method mainly comprises two parts: determining the scale-adjustment type from contextual information, and computing the scale by calling an adjustment function that uses both appearance information and contextual information. By introducing the contextual information of the target scale into the regulation mechanism and classifying the adjustment type in detail according to the change of the scale context, on top of the appearance information, the method improves the accuracy with which the original algorithm adjusts the target scale, both in target-region area and in effective coverage, and can effectively improve the adjustment accuracy of the original SOAMS algorithm.

Description

Target scale adaptive tracking method based on context
Technical field
The present invention relates to the field of image tracking, and in particular to target scale tracking.
Background technology
According to the 2014 Statistical Communiqué on National Economic and Social Development, by the end of 2014 the number of civilian vehicles in China had reached a record high, and traffic accidents and uncivil driving behavior were frequent. Facing increasingly complex management problems, adaptive tracking of vehicles through surveillance systems has become a focus of current target acquisition and tracking research. Compared with the target center alone, the real-time scale of the target is significant for subsequent identification and classification. Current tracking algorithms can be divided, according to the object model they use, into algorithms based on an appearance model and algorithms based on a motion model. Because the appearance model tracks the change of target features, it estimates the target scale more reliably than motion-model algorithms, which rely on the motion law. Appearance-model algorithms can be further divided into descriptive and classification-type algorithms according to how the model is built. Descriptive algorithms describe the target features, while classification-type algorithms build a foreground-background classifier from target and background feature sets and segment the target from the image by classification. Classification-type algorithms therefore obtain the scale of the target while tracking it, but they need reliable training on a large number of samples and are more complex to implement than descriptive algorithms. The adaptive scale adjustment capability of current descriptive algorithms, however, still needs improvement. Existing improved algorithms include the fixed-step bandwidth increment method, the scale-space-based variable window-width method, the iterative scale-reference update method, and the affine-transformation-based cooperative computation method. The fixed-step bandwidth method cannot adapt when the target grows larger, the scale-space and iterative-reference-update methods are computationally expensive, and the affine cooperative computation method needs to compute a large number of affine parameters.
Professor Zhang Lei et al. of The Hong Kong Polytechnic University proposed the Scale and Orientation Adaptive Mean Shift (SOAMS) algorithm to establish a target scale self-adjustment mechanism. Its core idea is to represent the target shape and scale with an elliptical descriptor built from moment information, and to correct the candidate-region area given by this descriptor using the Bhattacharyya coefficient of the candidate region. The correction uses the Bhattacharyya coefficient B_i to compute an adjustment coefficient c, which then corrects the candidate-region area, as shown below:
$$c = \exp\!\left(\frac{B_i - 1}{\sigma}\right) \qquad (1)$$
$$A = c \cdot N \qquad (2)$$
where N is the size of the current target candidate region, B_i is the Bhattacharyya coefficient of the current target candidate region, σ is the gradient-regulation coefficient of the exponential function, and A is the corrected candidate-region area.
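For illustration, the SOAMS correction of formulas (1) and (2) can be written in a few lines. The sketch below is not part of the patent text; the value of σ is a placeholder.

```python
import numpy as np

def soams_area_correction(b_i, candidate_area, sigma=0.5):
    """SOAMS candidate-area correction, formulas (1)-(2).

    b_i            -- Bhattacharyya coefficient of the current candidate region
    candidate_area -- current candidate-region size N (pixels)
    sigma          -- gradient-regulation coefficient (placeholder value)
    """
    c = np.exp((b_i - 1.0) / sigma)   # formula (1)
    return c * candidate_area         # formula (2): A = c * N

# A candidate whose histogram matches the model poorly (small B_i) is shrunk.
print(soams_area_correction(0.7, 900.0))   # ~493.9
```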
This algorithm has an obvious drawback: the Bhattacharyya coefficient is only an index of the similarity between the candidate region and the target model and has no one-to-one relationship with the target scale. Both an oversized and an undersized candidate region can produce a small Bhattacharyya coefficient, so using the Bhattacharyya coefficient alone to adjust the candidate-region area increases the error and reduces the validity and accuracy of the algorithm.
Summary of the invention
To overcome the deficiencies of the prior art, and to address the problem that the SOAMS scale-adjustment mechanism, which relies only on the Bhattacharyya coefficient, cannot accurately follow the scale change while the target grows, the present invention proposes an improved regulation mechanism based on contextual information. The method first judges the motion state of the target from the relation between the size of the best candidate region and the contextual scale, so as to determine the adjustment-function interval, and then computes the adjustment parameter within that interval from the similarity coefficient of the current frame.
For the change of target scale, we build an adjustment function with the Bhattacharyya coefficient as its variable according to the distribution of sample data, and use the latent correlation of the target scale within its context to determine the appropriate adjustment category. The scale adjustment is divided into three stages: first, the target candidate region is obtained, treated as the most credible target location, and its Bhattacharyya coefficient is computed; then the variation trend of the target scale is determined from the contextual scale correlation; finally, the adjustment function is called to determine the target scale, which takes both current and historical information into account and therefore has high confidence.
The technical solution adopted by the present invention to solve the technical problem is based on the SOAMS algorithm: the target scale regulation mechanism is improved with a target scale adjustment algorithm based on appearance features and contextual information. The method of the invention mainly includes two parts: determining the scale-adjustment type from contextual information, and computing the scale by calling an adjustment function that uses appearance information and contextual information.
The specific steps of the present invention are as follows:
Step 1: determine the target in the initial frame of the video sequence and compute the target model.
The target model is composed of the pixel set {x_i}, i = 1, 2, ..., N, with the target center as the origin, where N is the number of pixels in the target region. With fe denoting the feature index in the target feature space, the target is modeled as (an illustrative sketch is given after the symbol list below):
$$q = \{q_{fe}\}_{fe=1,2,\ldots,M}, \qquad q_{fe} = C \sum_{i=1}^{N} k\!\left(\left\|\frac{x_i}{h}\right\|^2\right)\delta\big[b(x_i) - fe\big] \qquad (3)$$
In the formulas:
q: target model;
q_fe: probability of the fe-th feature in the target model;
M: number of feature-space components;
δ: selection function;
b(x_i): color feature value at the corresponding pixel;
k(x): weighting kernel function, which assigns smaller weights to pixels far from the center;
||x_i||²: pixel modulus, characterizing the distance of the pixel from the center;
C: normalization coefficient, computed as follows:
$$C = \frac{1}{\sum_{i=1}^{N} k\!\left(\left\|\frac{x_i}{h}\right\|^2\right)} \qquad (4)$$
h: kernel window width, characterizing the size of the tracking window;
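For illustration, formulas (3) and (4) amount to a kernel-weighted color histogram. The sketch below is not part of the patent text; the Epanechnikov kernel profile and the 16-bins-per-channel quantization are assumptions.

```python
import numpy as np

def target_model(patch, center, h, bins=16):
    """Kernel-weighted color histogram q of formulas (3)-(4).

    patch  -- H x W x 3 uint8 image region containing the target
    center -- (row, col) of the target center inside the patch
    h      -- kernel window width
    bins   -- quantization levels per color channel (assumption)
    """
    rows, cols = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    # squared normalized distance ||x_i / h||^2 of every pixel from the center
    d2 = ((rows - center[0]) ** 2 + (cols - center[1]) ** 2) / h ** 2
    k = np.maximum(1.0 - d2, 0.0)                 # Epanechnikov profile as k(x)
    # b(x_i): map each pixel to a color-feature bin index
    idx = (patch // (256 // bins)).astype(int)
    feat = (idx[..., 0] * bins + idx[..., 1]) * bins + idx[..., 2]
    q = np.bincount(feat.ravel(), weights=k.ravel(), minlength=bins ** 3)
    return q / q.sum()                            # normalization C of formula (4)

# Example on a random patch; real use would pass the marked target region.
rng = np.random.default_rng(0)
q = target_model(rng.integers(0, 256, (40, 30, 3), dtype=np.uint8), (20, 15), 20.0)
```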
Step 2: compute the moment statistical features and the covariance matrix of the target region, determine the search region, and determine the search-region model p(X) in the search region. The determination method is as follows:
The present invention uses moment statistical features to describe the scale of the target contained in the image. For an image region whose pixel values are f(x, y), the (p+q)-order moment feature of the region is defined as
$$M_{pq} = \iint x^{p}\, y^{q}\, f(x,y)\, dx\, dy, \qquad p,q = 0,1,\ldots,\infty \qquad (5)$$
According to the definition of the moment features, substituting the center coordinate of the target region and the coordinates of its pixel set into formula (5) yields the second-order moments μ20, μ11 and μ02 of the target region, from which the covariance matrix $Cov = \begin{bmatrix} \mu_{20} & \mu_{11} \\ \mu_{11} & \mu_{02} \end{bmatrix}$ is constructed. Performing singular value decomposition on the covariance matrix gives its eigenvalues:
$$Cov = U\, S\, U^{T} = \begin{bmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{bmatrix} \begin{bmatrix} \lambda_1^2 & 0 \\ 0 & \lambda_2^2 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{bmatrix}^{T} \qquad (6)$$
In the formula:
λ1, λ2: eigenvalues of the moment features of the target region;
U: matrix formed by the eigenvectors obtained from the decomposition;
U^T: transpose of the matrix U;
S: intermediate transformation matrix obtained by decomposing the Cov matrix;
(u11, u21)^T, (u12, u22)^T: eigenvectors corresponding to the eigenvalues λ1 and λ2 of the moment features;
The matrix S obtained from formula (6) characterizes the scale information of the target with an ellipse. The SOAMS algorithm takes the center of the ellipse as the origin and rebuilds a coordinate system whose axes are the semi-major and semi-minor axes of the ellipse, i.e. the principal-axes coordinate system. The semi-major and semi-minor axes of the current ellipse are denoted a and b. Since a, b and λ1, λ2 all characterize the semi-axes of the target, λ1/λ2 ≈ a/b; letting a = k·λ1 and b = k·λ2, where k is a scale factor, and using the target-region area A together with the ellipse-area formula S = π·a·b, we have A = π·a·b = π·(k·λ1)·(k·λ2). Combining the two equations gives k as
$$k = \sqrt{A / (\pi \lambda_1 \lambda_2)} \qquad (7)$$
With a and b obtained, the Cov matrix can be converted to its form under the principal-axes coordinate system:
$$Cov_1 = U \begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix} U^{T} \qquad (8)$$
From the newly constructed covariance matrix Cov1 obtained by formula (8), an increment Δd is used to determine the covariance matrix Cov2 that characterizes the scale of the search region in the next frame, as in formula (9); the increment used in the present invention is 5 pixels (an illustrative sketch is given after the symbol list below):
$$Cov_2 = U \begin{bmatrix} (a+\Delta d)^2 & 0 \\ 0 & (b+\Delta d)^2 \end{bmatrix} U^{T} \qquad (9)$$
In the formula:
Δd: increment that controls the size change of the search region;
a: semi-major axis of the target region;
b: semi-minor axis of the target region;
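For illustration, the moment-based ellipse description of formulas (5) to (9) can be sketched as below. This is not part of the patent text; the use of a weight mask and of numpy's SVD are implementation assumptions.

```python
import numpy as np

def search_region_cov(mask, area, delta_d=5.0):
    """Ellipse description of the target and next-frame search region, formulas (5)-(9).

    mask    -- H x W array, non-zero on target pixels (plays the role of f(x, y))
    area    -- current target-region area A
    delta_d -- increment enlarging the search region (the patent uses 5 pixels)
    """
    ys, xs = np.nonzero(mask)
    w = mask[ys, xs].astype(float)
    xc, yc = np.average(xs, weights=w), np.average(ys, weights=w)
    # central second-order moments mu_20, mu_11, mu_02 derived from formula (5)
    mu20 = np.average((xs - xc) ** 2, weights=w)
    mu02 = np.average((ys - yc) ** 2, weights=w)
    mu11 = np.average((xs - xc) * (ys - yc), weights=w)
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    U, s, _ = np.linalg.svd(cov)                  # formula (6): Cov = U S U^T
    lam1, lam2 = np.sqrt(s)                       # lambda_1, lambda_2
    k = np.sqrt(area / (np.pi * lam1 * lam2))     # formula (7)
    a, b = k * lam1, k * lam2                     # ellipse semi-axes
    cov1 = U @ np.diag([a ** 2, b ** 2]) @ U.T                          # formula (8)
    cov2 = U @ np.diag([(a + delta_d) ** 2, (b + delta_d) ** 2]) @ U.T  # formula (9)
    return (xc, yc), (a, b), cov1, cov2
```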
The set of pixels X satisfying the following formula then gives the coordinates of the search region in the next frame:
$$(X - X_o)\, Cov_2^{-1}\, (X - X_o)^{T} \le 1 \qquad (10)$$
The target search region is composed of the pixel set X' = {x'_i}, i = 1, 2, ..., N', within the target search region, with its center denoted X'_o, where N' is the number of pixels in the target search region. With fe denoting the feature index in the target feature space, the target search region is modeled as:
$$p(X) = \{p_{fe}\}_{fe=1,2,\ldots,N'}, \qquad p_{fe}(X) = C_h \sum_{i=1}^{N'} k\!\left(\left\|\frac{x_i' - X_o'}{h}\right\|^2\right)\delta\big[b(x_i') - fe\big] \qquad (11)$$
In the formula:
p(X): target search-region model;
p_fe(X): probability of the fe-th feature in the search-region model;
b(x'_i): color feature value at the corresponding pixel;
C_h: normalization coefficient, computed as follows:
$$C_h = \frac{1}{\sum_{i=1}^{N'} k\!\left(\left\|\frac{x_i' - X_o'}{h}\right\|^2\right)} \qquad (12)$$
Step 3: obtain the candidate region within the search region by the mean shift algorithm, and compute the candidate-region features (an illustrative sketch is given after the symbol list below).
After the target model q and the search-region model p_fe(X) have been obtained in Steps 1 and 2, the mean shift vector X_o of formula (13) is computed by the mean shift algorithm. After each iteration of the search, the value of X_o is assigned to the iteration start point X'_o for the next iteration; when the search terminates, X_o is the centroid coordinate of the optimal search region, and this optimal region is the target candidate region. The mean shift vector in formula (13) is obtained as the ratio of the weighted distances of the pixels x'_i in the candidate region from the iteration start point X'_o to the zeroth-order moment of the search region, where the weight w_i is the probability that the feature corresponding to the i-th pixel in the search region belongs to the target model, and the feature corresponding to the i-th pixel in the candidate region is determined by the feature function b(x_i);
$$X_o = \frac{\sum_{i=1}^{N'} x_i'\, w_i\, g\!\left(\left\|\frac{X_o' - x_i'}{h}\right\|^2\right)}{\sum_{i=1}^{N'} w_i\, g\!\left(\left\|\frac{X_o' - x_i'}{h}\right\|^2\right)} \qquad (13)$$
$$w_i = \sum_{fe=1}^{N'} \sqrt{\frac{q_{fe}}{p_{fe}(X)}}\,\delta\big[b(x_i) - fe\big] \qquad (14)$$
In the formula:
X'_o: iteration start point;
x'_i: pixel in the search region;
w_i: weight corresponding to each pixel in the search region;
g(x): weighting kernel function, which assigns smaller weights to pixels far from the center;
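For illustration, one mean shift update per formulas (13) and (14) can be sketched as follows. This is not part of the patent text; formula (14) is taken here in its standard Bhattacharyya form, sqrt(q_fe / p_fe), which is what the garbled original appears to express, and the kernel derivative g is treated as constant (Epanechnikov profile).

```python
import numpy as np

def mean_shift_step(coords, feat_idx, q, p, x0):
    """One mean shift update, formulas (13)-(14).

    coords   -- M x 2 array of pixel coordinates x'_i in the search region
    feat_idx -- length-M integer array b(x'_i): feature bin of each pixel
    q, p     -- target-model and search-region histograms (numpy arrays)
    x0       -- current iteration start point X'_o (unused when g is constant)
    """
    # formula (14): weight from the ratio of model to candidate histogram
    w = np.sqrt(q[feat_idx] / np.maximum(p[feat_idx], 1e-12))
    # with the Epanechnikov profile, g(x) = -k'(x) is constant and cancels
    return (coords * w[:, None]).sum(axis=0) / w.sum()   # formula (13)

def mean_shift(coords, feat_idx, q, p, x0, tol=0.5, max_iter=20):
    """Iterate formula (13) until the center moves less than tol pixels."""
    for _ in range(max_iter):
        x1 = mean_shift_step(coords, feat_idx, q, p, x0)
        if np.linalg.norm(x1 - x0) < tol:
            return x1
        x0 = x1
    return x0
```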
Step 4: from the Bhattacharyya coefficient obtained from the target candidate-region features and the candidate-region area N, call the scale-adjustment mechanism based on appearance information and contextual information proposed by the present invention and adjust the area of the target candidate region. The adjustment proceeds as follows: first, the current estimated area M is determined from the target region of the previous moment and the motion law of the target; then, from the scale N of the current target candidate region and the current estimated scale M, the current adjustment type is determined. If the current scale is given the highest priority, the segmented exponential function is used: the value of N/M determines which formula to use, and the Bhattacharyya coefficient together with the variables N and M is substituted into it. Otherwise the learning-type function is used: the value of N/M determines which formula computes the base output ref, and the output ref together with N/M determines whether truncation or an additional gain is needed; if a gain is needed, its value is then determined;
The scale-adjustment mechanism based on appearance information and contextual information comprises the following steps:
4.1 Determination of the adjustment type based on contextual information
Assume the Bhattacharyya coefficient of the current target candidate region is B_i, the Bhattacharyya coefficient of the target region in the previous frame is B_{i-1}, and the area of the current target candidate region is N; from the target scale in the previous frame, an estimate M of the current target area is obtained via an affine transformation;
The adjustment type is determined by the relative magnitude of N and M. If N ≥ M, the candidate-region area N is greater than or equal to the estimate M of the current target area: the appearance features of the target are selected within the candidate region, a feature distribution histogram is built from them, the appearance-feature similarity is computed, and the candidate-region area N is reduced to match the true scale of the target. Otherwise, if N < M, the candidate-region area N is smaller than the estimate M: the appearance features of the target are again selected within the candidate region, a feature distribution histogram is built, the appearance-feature similarity is computed, and the candidate-region area N is enlarged to match the true scale of the target;
4.2 Scale computation based on appearance information and contextual information
If lens drift causes excessive changes of the target position so that the target positions in adjacent video frames do not overlap, and the tracking task gives priority to maximizing the coverage area, the learning-type function is used; if the lens position is fixed and the tracking task gives priority to minimizing the area-adjustment error, the segmented exponential function is used. Once the adjustment-function type is determined, the corresponding adjustment function is called and computed from the Bhattacharyya coefficient of the current candidate region (a dispatch sketch is given below);
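For illustration, the choice between the two adjustment functions and the final area correction can be sketched as a small dispatcher. This is not part of the patent text; it relies on the two functions sketched under sections 4.2.1 and 4.2.2 below, and the prefer_coverage flag is an assumed way of encoding the tracking-task priority.

```python
def adjust_scale(b_i, b_prev, N, M, prefer_coverage):
    """Step 4 dispatch: pick the adjustment-function type, then correct the area.

    prefer_coverage -- True when the task prioritizes maximum coverage
                       (drifting lens), False when it prioritizes minimum
                       area-adjustment error (fixed lens).
    """
    if prefer_coverage:
        c = learning_adjustment(b_i, b_prev, N, M)        # section 4.2.1
    else:
        c = segmented_exponential_adjustment(b_i, N, M)   # section 4.2.2
    return c * N                                          # corrected area A = c * N
```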
4.2.1 When the learning-type function is used: if the adjustment type determined in step 4.1 is a reduction, the scale is computed from the Bhattacharyya coefficient B_i of the current target candidate region; if the adjustment type determined in step 4.1 is an enlargement, the computation uses the Bhattacharyya coefficient B_i of the current candidate region together with the Bhattacharyya coefficient B_{i-1} of the target region in the previous frame. The steps for reducing and enlarging the target scale are as follows:
(1) Call the base adjustment function to determine the base output as the output reference value: according to the ratio of N to M, the output reference value ref is computed by formula (15);
(2) Call formula (16) to judge the output reference value ref:
If N/M ≥ 1.1 and ref < 1, the adjustment coefficient c takes the value of ref;
If N/M ≥ 1.1 and ref > 1, the value of ref is truncated; the truncation value used in the present invention is 0.6, i.e. the adjustment coefficient c is 0.6;
If N/M < 1.1, ref is amplified by the gain function of formula (17); the ratio of Bhattacharyya coefficients (B_i / B_{i-1}) then determines the gain, i.e. the amplification coefficient k applied to ref;
$$ref = \begin{cases} 1.1, & N/M \le 1.1 \\ \max(0.8,\; 9.5 \cdot temp), & \text{otherwise} \end{cases}, \qquad temp = \frac{p_1 B_i^2 + p_2 B_i + p_3}{B_i^2 + q_1 B_i + q_2},$$
$$p_1 = 2272,\; p_2 = -3346,\; p_3 = 1249,\; q_1 = -3300,\; q_2 = 3298 \qquad (15)$$
$$c = \begin{cases} 0.6, & ref > 1 \text{ and } N/M \ge 1.1 \\ ref, & ref < 1 \text{ and } N/M \ge 1.1 \\ k \cdot ref, & \text{otherwise} \end{cases} \qquad (16)$$
$$k = \begin{cases} 3.3, & B_i / B_{i-1} \ge 1.2 \\ 2.8, & \text{otherwise} \end{cases} \qquad (17)$$
where temp is an intermediate fitting function, p1, p2, p3, q1 and q2 are the fitting coefficients of the learning function, B_i is the Bhattacharyya coefficient in the i-th frame, B_{i-1} is the Bhattacharyya coefficient in the (i-1)-th frame, and k is the gain coefficient applied to the output ref;
(3) After the correction coefficient c is obtained, the corrected area A = c·N can be computed;
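For illustration, formulas (15) to (17) combine into the following learning-type adjustment function. This is a sketch, not the patent's code; the constants are those given in formula (15).

```python
def learning_adjustment(b_i, b_prev, N, M):
    """Adjustment coefficient c of the learning-type function, formulas (15)-(17)."""
    p1, p2, p3, q1, q2 = 2272.0, -3346.0, 1249.0, -3300.0, 3298.0
    # formula (15): base output ref from the fitted rational function
    if N / M <= 1.1:
        ref = 1.1
    else:
        temp = (p1 * b_i ** 2 + p2 * b_i + p3) / (b_i ** 2 + q1 * b_i + q2)
        ref = max(0.8, 9.5 * temp)
    # formula (16): truncate, pass through, or amplify the base output
    if N / M >= 1.1:
        return 0.6 if ref > 1 else ref
    k = 3.3 if b_i / b_prev >= 1.2 else 2.8   # formula (17): gain from B_i / B_{i-1}
    return k * ref

# Corrected area of step (3): A = learning_adjustment(b_i, b_prev, N, M) * N
```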
4.2.2 When the segmented exponential function is used:
(1) According to the current adjustment type, call the segmented exponential adjustment function shown in formula (18) to determine the adjustment coefficient c:
$$c = \begin{cases} B_i^{\,N/(1.1\,M)}, & N \ge M \\[4pt] \min\!\left(B_i^{\,\frac{N}{M}-1},\; 1.1\right), & N < M \end{cases} \qquad (18)$$
where B_i is the Bhattacharyya coefficient in the i-th frame;
(2) From the correction coefficient c obtained in step (1) of 4.2.2, the corrected area A = c·N can be computed;
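For illustration, formula (18) maps directly to the short function below; this is a sketch and not the patent's code.

```python
def segmented_exponential_adjustment(b_i, N, M):
    """Adjustment coefficient c of the segmented exponential function, formula (18)."""
    if N >= M:
        return b_i ** (N / (1.1 * M))        # B_i < 1, positive exponent: c < 1 shrinks the area
    return min(b_i ** (N / M - 1.0), 1.1)    # negative exponent: c > 1 enlarges, capped at 1.1

# Corrected area of step (2): A = segmented_exponential_adjustment(b_i, N, M) * N
```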
Step 5: compute the current covariance matrix, mark the target, and determine the search region of the target in the next frame. The computation of the covariance matrix and of the semi-major and semi-minor axes needed for marking the target is the same as in Step 2. Repeat Steps 3 to 5 until the tracking and marking task ends.
The beneficial effect of the invention is that, by introducing the contextual information of the target scale into the regulation mechanism and classifying the adjustment type in detail according to the change of the scale context, on top of the appearance information, the accuracy with which the original algorithm adjusts the target scale is improved, both in target-region area and in effective coverage, so the adjustment accuracy of the original SOAMS algorithm with respect to the target scale can be effectively improved.
Description of the drawings
Fig. 1 is the flow diagram of the SOAMS algorithm used in the present invention.
Fig. 2 is the flow chart of the image target scale adjustment method of the present invention.
Fig. 3 is an example of the results of an embodiment of the present invention.
Fig. 4 compares the area-adjustment-rate error performance of the new regulation mechanism of the present invention with that of the original SOAMS adjustment function; Fig. 4(a), Fig. 4(b) and Fig. 4(c) are the area-adjustment-rate error curves of the new regulation mechanism and the original adjustment function on the test sequences.
Fig. 5 compares the coverage-rate performance of the new regulation mechanism of the present invention with that of the original SOAMS adjustment function; Fig. 5(a), Fig. 5(b) and Fig. 5(c) are the coverage-rate curves of the regulation mechanism of the present invention and the original adjustment function on the test sequences.
Detailed description of the invention
The present invention is further described below with reference to the accompanying drawings and embodiments.
The present invention combines the traditional adjustment method based on appearance information with the contextual information of the target scale, obtaining an adjustment method based on both appearance information and historical scale information, which is more beneficial to the adjustment of the target scale.
When evaluating the adjustment results, the present invention chooses two evaluation indices.
1. The area-rate error evaluates the ability of the correction function to regulate the size of the target marking box, using the ratio of the marked area to the ground-truth area; the formula is:
$$\text{area rate error} = \frac{s_{tracking}}{s_{groundtruth}} - 1 \qquad (19)$$
In the formula:
s_tracking: the region area obtained by the algorithm after the correction step;
s_groundtruth: the true area of the target.
2. The overlap rate evaluates the validity of the marked region using the ratio of the overlapping area between the marked region and the real region to the true area; the formula is:
$$\text{overlap rate} = \frac{s_{tracking} \cap s_{groundtruth}}{s_{groundtruth}} \qquad (20)$$
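For illustration, the two evaluation indices of formulas (19) and (20) can be computed as below; the inputs are areas in pixels, and the overlap area is assumed to be computed beforehand from the two regions.

```python
def area_rate_error(s_tracking, s_groundtruth):
    """Formula (19): relative error of the marked area with respect to the true area."""
    return s_tracking / s_groundtruth - 1.0

def overlap_rate(s_overlap, s_groundtruth):
    """Formula (20): ratio of the marked/true overlap area to the true area."""
    return s_overlap / s_groundtruth
```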
The following is an embodiment of the scale-adjustment mechanism based on appearance information and contextual information, using the learning-type adjustment function:
The test sequences used in the experimental example of the present invention are taken in part from the PaFiSS database; the targets in the test sequences show obvious changes of scale and direction. The sequences are numbered Sequence 1, Sequence 2 and Sequence 4 in the database; the image size is 720 × 576, Sequence 1 contains 196 frames, Sequence 2 contains 186 frames, and Sequence 4 contains 172 frames. Scale-adaptive tracking is performed on these data with the following specific steps:
The first step: determine target in video sequence initial frame and calculate object module
Object module is by the collection of pixels { x with target's center as initial pointi}I=1,2 ... N, composition, N represents target area The number of pixel, represents the aspect indexing in target characteristic space with fe, then target can be modeled as:
$$q = \{q_{fe}\}_{fe=1,2,\ldots,M}, \qquad q_{fe} = C \sum_{i=1}^{N} k\!\left(\left\|\frac{x_i}{h}\right\|^2\right)\delta\big[b(x_i) - fe\big] \qquad (3)$$
In formula:
Q object module;
qfeThe fe feature probability on object module;
M feature space number of components
δ selects function;
b(xi) color feature value at respective pixel;
K (x) Weighted Kernel function, for the weights that the pixel distribution remote away from center is less;
||xi||2Calculate pixel modulus value, characterize the pixel distance away from center;
C normalization coefficient, calculating formula is as follows
$$C = \frac{1}{\sum_{i=1}^{N} k\!\left(\left\|\frac{x_i}{h}\right\|^2\right)} \qquad (4)$$
H kernel function window width, characterizes the size following the tracks of forms;
Step 2: compute the moment statistical features and the covariance matrix of the target region, determine the search region, and determine the search-region model p(X) in the search region. The determination method is as follows:
The moment statistical features of an image are an effective description of its shape and scale, so they are used in the present invention to describe the scale of the target contained in the image. For an image region whose pixel values are f(x, y), the (p+q)-order moment feature of the region is defined as
$$M_{pq} = \iint x^{p}\, y^{q}\, f(x,y)\, dx\, dy, \qquad p,q = 0,1,\ldots,\infty \qquad (5)$$
According to the definition of the moment features, substituting the center coordinate of the target region and the coordinates of its pixel set into formula (5) yields the second-order moments μ20, μ11 and μ02 of the target region, from which the covariance matrix $Cov = \begin{bmatrix} \mu_{20} & \mu_{11} \\ \mu_{11} & \mu_{02} \end{bmatrix}$ is constructed. This covariance matrix describes the target region; to reflect the scale features of the target more intuitively, singular value decomposition is applied to it to obtain its eigenvalues:
$$Cov = U\, S\, U^{T} = \begin{bmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{bmatrix} \begin{bmatrix} \lambda_1^2 & 0 \\ 0 & \lambda_2^2 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{bmatrix}^{T} \qquad (6)$$
In formula:
λ12The eigenvalue of target area moment characteristics;
The matrix that the characteristic vector that U feature decomposition obtains is constituted;
UTThe transposed matrix of matrix U;
The middle transformed matrix that S obtains after decomposing Cov battle array
(u11,u21)T, (u12,u22)TIt is respectively the eigenvalue λ of moment characteristics12Characteristic vector;
According to the properties of image moment features, the matrix S obtained from formula (6) characterizes the scale information of the target with an ellipse. The SOAMS algorithm takes the center of the ellipse as the origin and rebuilds a coordinate system whose axes are the semi-major and semi-minor axes of the ellipse, i.e. the principal-axes coordinate system. The semi-major and semi-minor axes of the current ellipse are a and b; since a, b and λ1, λ2 all characterize the semi-axes of the target, λ1/λ2 ≈ a/b. Letting a = k·λ1 and b = k·λ2, where k is a scale factor, and using the target-region area A with the ellipse-area formula S = π·a·b, we have A = π·a·b = π·(k·λ1)·(k·λ2); combining the two equations gives k as
$$k = \sqrt{A / (\pi \lambda_1 \lambda_2)} \qquad (7)$$
With a and b obtained, the Cov matrix can be converted to its form under the principal-axes coordinate system:
$$Cov_1 = U \begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix} U^{T} \qquad (8)$$
From the newly constructed covariance matrix Cov1 obtained by formula (8), an increment Δd is used to determine the search region of the target in the next frame as in formula (9); the increment and Cov1 determine the covariance matrix Cov2 that characterizes the scale of the target search region in the next frame. The increment here can be defined by the designer according to the scene in which the algorithm is used and the movement speed of the target: the faster the target moves, the larger the modulus of the chosen increment should be. The increment used in the present invention is 5 pixels:
$$Cov_2 = U \begin{bmatrix} (a+\Delta d)^2 & 0 \\ 0 & (b+\Delta d)^2 \end{bmatrix} U^{T} \qquad (9)$$
In formula:
Δ d increment, in order to the size variation in command deployment region;
A target area major semiaxis;
B target area semi-minor axis;
Using the self-defined increment Δd and the newly constructed covariance matrix obtained by formula (8), formula (9) determines the scale of the target search region in the next frame, and formula (10) gives the coordinate set of the points in this search region.
Now meet the pixel point set X of following formula then for the region of search coordinate in next frame
$$(X - X_o)\, Cov_2^{-1}\, (X - X_o)^{T} \le 1 \qquad (10)$$
Target search region is by collection of pixels X={x in target search regioni}I=1,2 ... N', composition, its center is designated as Xo', N' represents the number of the pixel in target search region, represents the aspect indexing in target characteristic space, then target with fe Region of search can be modeled as:
$$p(X) = \{p_{fe}\}_{fe=1,2,\ldots,N'}, \qquad p_{fe}(X) = C_h \sum_{i=1}^{N'} k\!\left(\left\|\frac{x_i' - X_o'}{h}\right\|^2\right)\delta\big[b(x_i') - fe\big] \qquad (11)$$
In formula:
P (X) target search regional model;
pfe(X) the fe feature probability on object module;
b(xi') color feature value at respective pixel;
ChNormalization coefficient, calculating formula is as follows:
$$C_h = \frac{1}{\sum_{i=1}^{N'} k\!\left(\left\|\frac{x_i' - X_o'}{h}\right\|^2\right)} \qquad (12)$$
Step 3: following the SOAMS flow diagram shown in Fig. 1, obtain the candidate region within the search region by the mean shift algorithm and compute the candidate-region features.
After the target model q and the search-region model p_fe(X) have been obtained in Steps 1 and 2, the mean shift vector X_o of formula (13) is computed by the mean shift algorithm. After each iterative search, the value of X_o is assigned to the iteration start point X'_o for the next iteration; when the search terminates, X_o is the centroid coordinate of the optimal search region, and this optimal region is the target candidate region. The mean shift vector in formula (13) is obtained as the ratio of the weighted distances of the pixels x'_i in the candidate region from the iteration start point X'_o to the zeroth-order moment of the search region, where the weight w_i is the probability that the feature corresponding to the i-th pixel in the candidate region belongs to the target model, and the feature corresponding to the i-th pixel in the candidate region is determined by the feature function b(x_i);
$$X_o = \frac{\sum_{i=1}^{N'} x_i'\, w_i\, g\!\left(\left\|\frac{X_o' - x_i'}{h}\right\|^2\right)}{\sum_{i=1}^{N'} w_i\, g\!\left(\left\|\frac{X_o' - x_i'}{h}\right\|^2\right)} \qquad (13)$$
$$w_i = \sum_{fe=1}^{N'} \sqrt{\frac{q_{fe}}{p_{fe}(X)}}\,\delta\big[b(x_i) - fe\big] \qquad (14)$$
In formula:
X'oIteration starting point
x'iPixel in region of search
wiThe weights that in region of search, each pixel is corresponding
G (x) Weighted Kernel function, for the weights that the pixel distribution remote away from center is less;
Step 4: perform the area-correction link of the target scale adjustment flow chart of the present invention shown in Fig. 2. From the Bhattacharyya coefficient obtained from the candidate-region features and the candidate-region area N, call the scale-adjustment mechanism based on appearance information and contextual information proposed by the present invention and adjust the area of the target candidate region. The adjustment proceeds as follows: first, the current estimated area M is determined from the target region of the previous moment and the motion law of the target; then, from the scale N of the current candidate region and the current estimated scale M, the current adjustment type is determined. If the current scale is given the highest priority, the segmented exponential function is used: the value of N/M determines which formula to use, and the Bhattacharyya coefficient together with the variables N and M is substituted into it. Otherwise the learning-type function is used: the value of N/M determines which formula computes the base output ref, and the output ref together with N/M determines whether truncation or an additional gain is needed; if a gain is needed, its value is then determined;
The rescaling mechanism based on apparent information with contextual information of the present invention comprises the steps:
4.1, the determination of adjustment type based on contextual information
The Pasteur's coefficient assuming current goal candidate region is Bi, in former frame, Pasteur's coefficient of target area is Bi-1, when Front object candidate area area is N, according to target scale in former frame, obtains the estimation of current goal area via affine transformation Value M;
The determination adjusting type is determined by the magnitude relationship of N with M, if N >=M, represents that object candidate area area N is big In estimated value M equal to current goal area, need to choose the appearance features of target in object candidate area, use appearance features Set up feature distribution histogram, calculate apparent characteristic similarity and object candidate area area N is carried out reduction operation to meet mesh The true yardstick of target;Otherwise, if N < M, then it represents that the area N of object candidate area is less than estimated value M of current goal area, then Need to choose the appearance features of target in object candidate area, set up feature distribution histogram by appearance features, carry out computational chart See characteristic similarity and object candidate area area N is amplified operation with the true yardstick meeting target;
4.2, dimension calculation based on apparent information Yu contextual information
Cause the excessive target location caused in adjacent video frames of variation, target location non-overlapping if there is camera lens drift And in tracing task with maximize area coverage preferential time, use learning-oriented function;If lens location is fixed and with tracing task With minimize rea adjusting error preferential time, use Segment Index function;After Tuning function type determines, call the adjustment of correspondence Function calculates according to Pasteur's coefficient in current candidate region;
When 4.2.1 using learning-oriented function, if adjusting type to be defined as reduction operation according to step 4.1, then according to current Pasteur's coefficient B of object candidate areaiCalculate yardstick;If adjusting type to be defined as amplifieroperation according to step one, then according to working as Pasteur's coefficient B of front object candidate areaiWith Pasteur's coefficient B of target area in former framei-1Calculating, target scale enters Row reduce and amplifies adjustment step as follows:
(1) call basis Tuning function, determine basis output as output reference value, according to the ratio of N Yu M by formula (15) Calculate output reference value ref;
(2) call formula (16) output reference value ref is judged:
If N/M>=1.1 and ref<1, then the value of regulation coefficient c is the value of now ref;
If N/M >=1.1 and ref > 1, then the numerical value of ref is blocked, the present invention use cutoff value be 0.6, i.e. this Time regulation coefficient c be 0.6;
If N/M < 1.1, then ref is amplified by the gain magnification function calling formula (17), now by the ratio of Pasteur's coefficient (Bi/Bi-1) determine the current gain size amplification coefficient k to ref;
$$ref = \begin{cases} 1.1, & N/M \le 1.1 \\ \max(0.8,\; 9.5 \cdot temp), & \text{otherwise} \end{cases}, \qquad temp = \frac{p_1 B_i^2 + p_2 B_i + p_3}{B_i^2 + q_1 B_i + q_2},$$
$$p_1 = 2272,\; p_2 = -3346,\; p_3 = 1249,\; q_1 = -3300,\; q_2 = 3298 \qquad (15)$$
$$c = \begin{cases} 0.6, & ref > 1 \text{ and } N/M \ge 1.1 \\ ref, & ref < 1 \text{ and } N/M \ge 1.1 \\ k \cdot ref, & \text{otherwise} \end{cases} \qquad (16)$$
$$k = \begin{cases} 3.3, & B_i / B_{i-1} \ge 1.2 \\ 2.8, & \text{otherwise} \end{cases} \qquad (17)$$
Wherein, temp is interim fitting function, and p1, p2, p3, q1 and q2 are respectively learning function fitting coefficient, BiIt is i-th Pasteur's coefficient in frame, Bi-1Being the Pasteur's coefficient in the i-th-1 frame, k is gain coefficient, exports based on ref;
(3) correction area A=c*N can be calculated after obtaining correction coefficient c;
When 4.2.2 using Segment Index function:
(1) according to currently adjusting type, call the Segment Index type Tuning function shown in formula (18), determine regulation coefficient c:
$$c = \begin{cases} B_i^{\,N/(1.1\,M)}, & N \ge M \\[4pt] \min\!\left(B_i^{\,\frac{N}{M}-1},\; 1.1\right), & N < M \end{cases} \qquad (18)$$
B in formulaiIt it is the Pasteur's coefficient in the i-th frame;
(2) correction coefficient c obtained by the step (1) of step 4.2.2, can be calculated correction area A=c*N;
5th step: calculate current covariance matrix, labelling target, determines the region of search of target, covariance matrix in next frame The calculation procedure of the long semi-minor axis required with during labelling target is identical with second step, repeat the 3rd step to the 5th step until with Track labelling task terminates.
The result examples obtained by the above embodiment are shown in Fig. 3. Fig. 4 compares the area-adjustment-rate error performance of the regulation mechanism of the present invention with that of the original SOAMS adjustment function; Fig. 4(a), Fig. 4(b) and Fig. 4(c) are the area-adjustment-rate error curves of the regulation mechanism of the present invention and the original SOAMS adjustment function on the test sequences. Fig. 5 compares the coverage-rate performance of the regulation mechanism of the present invention and the original SOAMS adjustment function on the test sequences; Fig. 5(a), Fig. 5(b) and Fig. 5(c) are the corresponding coverage-rate curves.
Table 1  Adjustment-error performance fluctuation index

             SOAMS     Learning-type function    Segmented exponential function
Sequence 1   0.6290    1.1945                    0.1099
Sequence 2   0.1071    0.1849                    0.0409
Sequence 4   0.2923    0.0802                    0.0595
Table 2  Coverage-rate performance fluctuation index

             SOAMS     Learning-type function    Segmented exponential function
Sequence 1   0.2195    0.0555                    0.2040
Sequence 3   0.1075    0.0564                    0.0721
Sequence 4   0.2511    0.0354                    0.0597
Tables 1 and 2 give the independent performance-fluctuation indices on the PaFiSS data set of the two adjustment functions included in the adjustment method proposed by the present invention and of the original adjustment method, where the fluctuation is the variance of the performance data with respect to a reference value: the reference value of Table 1 is 0, that of Table 2 is 1, and bold marks the optimal index value. It can be seen that, compared with the original algorithm, the adjustment method proposed by the present invention obtains adjustment results that are more credible and closer to the true values.

Claims (1)

1. A context-based target scale adaptive tracking method, characterized by comprising the following steps:
The first step: determine target in video sequence initial frame and calculate object module
Object module is by the collection of pixels { x with target's center as initial pointi}I=1,2 ... N, composition, N represents the pixel of target area Number, represent the aspect indexing in target characteristic space with fe, then target can be modeled as:
$$q = \{q_{fe}\}_{fe=1,2,\ldots,M}, \qquad q_{fe} = C \sum_{i=1}^{N} k\!\left(\left\|\frac{x_i}{h}\right\|^2\right)\delta\big[b(x_i) - fe\big] \qquad (3)$$
In formula:
Q object module;
qfeThe fe feature probability on object module;
M feature space number of components
δ selects function;
b(xi) color feature value at respective pixel;
K (x) Weighted Kernel function, for the weights that the pixel distribution remote away from center is less;
||xi||2Calculate pixel modulus value, characterize the pixel distance away from center;
C normalization coefficient, calculating formula is as follows
$$C = \frac{1}{\sum_{i=1}^{N} k\!\left(\left\|\frac{x_i}{h}\right\|^2\right)} \qquad (4)$$
H kernel function window width, characterizes the size following the tracks of forms;
Second step: calculate square statistical nature and the variance matrix of target area, determine region of search, determine search in region of search Regional model p (X), determines that method is as follows:
The present invention uses the yardstick of the square statistical nature target to comprising in image be described, be f for image pixel value ((p+q) rank moment characteristics in this region is defined as x, region y)
$$M_{pq} = \iint x^{p}\, y^{q}\, f(x,y)\, dx\, dy, \qquad p,q = 0,1,\ldots,\infty \qquad (5)$$
Define according to moment characteristics, the coordinate of the centre coordinate of target area with pixel set is substituted into formula (5) mesh can be tried to achieve The second moment μ in mark region02、μ11、μ20, construct covariance matrixCovariance matrix is carried out singular value decomposition obtain To its eigenvalue:
$$Cov = U\, S\, U^{T} = \begin{bmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{bmatrix} \begin{bmatrix} \lambda_1^2 & 0 \\ 0 & \lambda_2^2 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{bmatrix}^{T} \qquad (6)$$
In formula:
λ12The eigenvalue of target area moment characteristics;
The matrix that the characteristic vector that U feature decomposition obtains is constituted;
UTThe transposed matrix of matrix U;
The middle transformed matrix that S obtains after decomposing Cov battle array;
(u11,u21)T, (u12,u22)TIt is respectively the eigenvalue λ of moment characteristics12Characteristic vector;
The matrix S obtained from formula (6) characterizes the scale information of the target with an ellipse. The SOAMS algorithm takes the center of the ellipse as the origin and rebuilds a coordinate system whose axes are the semi-major and semi-minor axes of the ellipse, i.e. the principal-axes coordinate system. The semi-major and semi-minor axes of the current ellipse are a and b; since a, b and λ1, λ2 all characterize the semi-axes of the target, λ1/λ2 ≈ a/b. Letting a = k·λ1 and b = k·λ2, where k is a scale factor, and using the target-region area A with the ellipse-area formula S = π·a·b, we have A = π·a·b = π·(k·λ1)·(k·λ2); combining the two equations gives k as
$$k = \sqrt{A / (\pi \lambda_1 \lambda_2)} \qquad (7)$$
Now obtain a, b, then the form under Cov battle array can be converted into Principal axes:
$$Cov_1 = U \begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix} U^{T} \qquad (8)$$
The covariance matrix Cov of the neotectonics obtained by formula (8)1Increment Δ d can be used to determine and to characterize target in the next frame The covariance matrix Cov of region of search yardstick2Such as formula (9), the increment modulus value used in the present invention is 5 pixels:
$$Cov_2 = U \begin{bmatrix} (a+\Delta d)^2 & 0 \\ 0 & (b+\Delta d)^2 \end{bmatrix} U^{T} \qquad (9)$$
In formula:
Δ d increment, in order to the size variation in command deployment region;
A target area major semiaxis;
B target area semi-minor axis;
Now meet the pixel point set X of following formula then for the region of search coordinate in next frame
$$(X - X_o)\, Cov_2^{-1}\, (X - X_o)^{T} \le 1 \qquad (10)$$
Target search region is by collection of pixels X'={x' in target search regioni}I=1,2 ... N', composition, its center is designated as Xo', N' represents the number of the pixel in target search region, represents the aspect indexing in target characteristic space, then target with fe Region of search can be modeled as:
$$p(X) = \{p_{fe}\}_{fe=1,2,\ldots,N'}, \qquad p_{fe}(X) = C_h \sum_{i=1}^{N'} k\!\left(\left\|\frac{x_i' - X_o'}{h}\right\|^2\right)\delta\big[b(x_i') - fe\big] \qquad (11)$$
In formula:
P (X) target search regional model;
pfe(X) the fe feature probability on object module;
b(xi') color feature value at respective pixel;
ChNormalization coefficient, calculating formula is as follows:
$$C_h = \frac{1}{\sum_{i=1}^{N'} k\!\left(\left\|\frac{x_i' - X_o'}{h}\right\|^2\right)} \qquad (12)$$
3rd step: tried to achieve candidate region in region of search by mean shift algorithm, calculates candidate region feature
Object module q and region of search model p is being obtained by the first step, second stepfe(X) after, by mean shift algorithm, calculate such as Formula (13) mean shift vectors Xo, X after iterative search terminates each time in a search missionoValue will be assigned to iteration and initiate Point X'oCarry out next iteration search, X after search mission terminatesoThe center-of-mass coordinate in the value optimum search region for obtaining, this Time optimum search area coincidence be object candidate area, the mean shift vectors shown in formula (13) by calculate target candidate district Pixel x' in territoryiWith iteration starting point X'oThe ratio of Weighted distance and the zeroth order square of region of search obtain, wherein weight wiFor In candidate region, ith pixel characteristic of correspondence is under the jurisdiction of the probability of object module, in candidate region corresponding to ith pixel Feature is by feature calculation function b (xi) determine;
$$X_o = \frac{\sum_{i=1}^{N'} x_i'\, w_i\, g\!\left(\left\|\frac{X_o' - x_i'}{h}\right\|^2\right)}{\sum_{i=1}^{N'} w_i\, g\!\left(\left\|\frac{X_o' - x_i'}{h}\right\|^2\right)} \qquad (13)$$
$$w_i = \sum_{fe=1}^{N'} \sqrt{\frac{q_{fe}}{p_{fe}(X)}}\,\delta\big[b(x_i) - fe\big] \qquad (14)$$
In formula:
X'oIteration starting point
x'iPixel in region of search
wiThe weights that in region of search, each pixel is corresponding
G (x) Weighted Kernel function, for the weights that the pixel distribution remote away from center is less;
4th step: the Pasteur's coefficient obtained by object candidate area feature and object candidate area area N, calls the present invention and is carried The rescaling mechanism based on apparent information with contextual information gone out, is adjusted the area of object candidate area, carries out The operation adjusted is: the target area first determined by previous moment and the characteristics of motion of target determine current estimated area M; Yardstick N according to current goal candidate region and target currently estimate yardstick M, it is determined that currently adjust type;If current scale is excellent First spend the highest, then use sectional type function, determine that what formula of use calculates by calculating the numerical value of N/M, and substitute into Pasteur's coefficient with N, M variable calculates, otherwise, then use learning-oriented function, determine that what formula calculating basis of use is defeated by calculating the numerical value of N/M Go out ref, and determine a need for carrying out blocking or additional gain with the numerical value of N/M according to the output of ref;If really needing additional gain Then gain is determined;
Described rescaling mechanism based on apparent information with contextual information comprises the steps:
4.1, the determination of adjustment type based on contextual information
The Pasteur's coefficient assuming current goal candidate region is Bi, in former frame, Pasteur's coefficient of target area is Bi-1, current mesh Mark candidate region area is N, according to target scale in former frame, obtains estimated value M of current goal area via affine transformation;
The determination adjusting type is determined by the magnitude relationship of N with M, if N >=M, represents that object candidate area area N is more than In estimated value M of current goal area, need to choose the appearance features of target in object candidate area, set up by appearance features Feature distribution histogram, calculates apparent characteristic similarity and object candidate area area N is carried out reduction operation to meet target True yardstick;Otherwise, if N < M, then it represents that the area N of object candidate area less than estimated value M of current goal area, then needs In object candidate area, choose the appearance features of target, set up feature distribution histogram by appearance features, calculate apparent spy Levy similarity and object candidate area area N is amplified operation with the true yardstick meeting target;
4.2, dimension calculation based on apparent information Yu contextual information
If exist camera lens drift cause target location change the excessive target location caused in adjacent video frames non-overlapping and with In track task with maximize area coverage preferential time, use learning-oriented function;If lens location is fixed and with tracing task with When littleization rea adjusting error is preferential, use Segment Index function;After Tuning function type determines, call the Tuning function of correspondence Calculate according to Pasteur's coefficient in current candidate region;
When 4.2.1 using learning-oriented function, if adjusting type to be defined as reduction operation according to step one, then wait according to current goal Pasteur's coefficient B of favored areaiCalculate yardstick;If adjusting type to be defined as amplifieroperation according to step one, then according to current goal Pasteur's coefficient B of candidate regioniWith Pasteur's coefficient B of target area in former framei-1Calculating, target scale reduces The step adjusted is as follows with amplifying:
(1) call basis Tuning function, determine basis output, i.e. export reference value, calculated by formula (15) according to the ratio of N Yu M Output reference value ref;
(2) call formula (16) output reference value ref is judged:
If N/M>=1.1 and ref<1, then the value of regulation coefficient c is the value of now ref;
If N/M >=1.1 and ref > 1, then the numerical value of ref is blocked, the cutoff value that the present invention uses is 0.6, the most now Regulation coefficient c is 0.6;
If N/M < 1.1, then ref is amplified by the gain magnification function calling formula (17), now by the ratio (B of Pasteur's coefficienti/ Bi-1) determine the current gain size amplification coefficient k to ref;
$$ref = \begin{cases} 1.1, & N/M \le 1.1 \\ \max(0.8,\; 9.5 \cdot temp), & \text{otherwise} \end{cases}, \qquad temp = \frac{p_1 B_i^2 + p_2 B_i + p_3}{B_i^2 + q_1 B_i + q_2},$$
$$p_1 = 2272,\; p_2 = -3346,\; p_3 = 1249,\; q_1 = -3300,\; q_2 = 3298 \qquad (15)$$
$$c = \begin{cases} 0.6, & ref > 1 \text{ and } N/M \ge 1.1 \\ ref, & ref < 1 \text{ and } N/M \ge 1.1 \\ k \cdot ref, & \text{otherwise} \end{cases} \qquad (16)$$
$$k = \begin{cases} 3.3, & B_i / B_{i-1} \ge 1.2 \\ 2.8, & \text{otherwise} \end{cases} \qquad (17)$$
Wherein, temp is interim fitting function, and p1, p2, p3, q1 and q2 are respectively learning function fitting coefficient, BiIt is in the i-th frame Pasteur's coefficient, Bi-1Being the Pasteur's coefficient in the i-th-1 frame, k is gain coefficient, exports based on ref;
(3) correction area A=c*N can be calculated after obtaining correction coefficient c;
When 4.2.2 using Segment Index function:
(1) According to the current adjustment type, call the segmented exponential adjustment function shown in formula (18) to determine the adjustment coefficient c:
$$c = \begin{cases} B_i^{\,N/(1.1\,M)}, & N \ge M \\[4pt] \min\!\left(B_i^{\,\frac{N}{M}-1},\; 1.1\right), & N < M \end{cases} \qquad (18)$$
where B_i is the Bhattacharyya coefficient in the i-th frame;
(2) correction coefficient c obtained by the step (1) of step 4.2.2, can be calculated correction area A=c*N;
5th step: calculate current covariance matrix, labelling target, determines the region of search of target, covariance matrix and mark in next frame The calculation procedure of long semi-minor axis required during note target is identical with second step, repeats the 3rd step to the 5th step until following the tracks of mark Note task terminates.
CN201610502966.4A 2016-06-30 2016-06-30 Target scale adaptive tracking method based on context Pending CN106127811A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610502966.4A CN106127811A (en) 2016-06-30 2016-06-30 Target scale adaptive tracking method based on context

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610502966.4A CN106127811A (en) 2016-06-30 2016-06-30 Target scale adaptive tracking method based on context

Publications (1)

Publication Number Publication Date
CN106127811A true CN106127811A (en) 2016-11-16

Family

ID=57285696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610502966.4A Pending CN106127811A (en) 2016-06-30 2016-06-30 Target scale adaptive tracking method based on context

Country Status (1)

Country Link
CN (1) CN106127811A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194951A (en) * 2017-05-02 2017-09-22 中国科学院大学 Method for tracking target based on restricted structure graph search
CN109274964A (en) * 2018-11-09 2019-01-25 北京奇艺世纪科技有限公司 A kind of video lens type information modification method and device
CN109523573A (en) * 2018-11-23 2019-03-26 上海新世纪机器人有限公司 The tracking and device of target object
CN109685825A (en) * 2018-11-27 2019-04-26 哈尔滨工业大学(深圳) Local auto-adaptive feature extracting method, system and storage medium for thermal infrared target tracking
CN110133641A (en) * 2019-04-19 2019-08-16 电子科技大学 A kind of through-wall imaging radar target tracking method of dimension self-adaption
CN112907630A (en) * 2021-02-06 2021-06-04 洛阳热感科技有限公司 Real-time tracking method based on mean shift prediction and space-time context information

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6590999B1 (en) * 2000-02-14 2003-07-08 Siemens Corporate Research, Inc. Real-time tracking of non-rigid objects using mean shift
US20070237359A1 (en) * 2006-04-05 2007-10-11 Zehang Sun Method and apparatus for adaptive mean shift tracking
CN101369346A (en) * 2007-08-13 2009-02-18 北京航空航天大学 Tracing method for video movement objective self-adapting window
CN102074000A (en) * 2010-11-23 2011-05-25 天津市亚安科技电子有限公司 Tracking method for adaptively adjusting window width by utilizing optimal solution of variance rate
CN103413312A (en) * 2013-08-19 2013-11-27 华北电力大学 Video target tracking method based on neighborhood components analysis and scale space theory
CN103886324A (en) * 2014-02-18 2014-06-25 浙江大学 Scale adaptive target tracking method based on log likelihood image
CN105117720A (en) * 2015-09-29 2015-12-02 江南大学 Object scale self-adaption tracking method based on spatial-temporal model
US9213899B2 (en) * 2014-03-24 2015-12-15 International Business Machines Corporation Context-aware tracking of a video object using a sparse representation framework
CN105488815A (en) * 2015-11-26 2016-04-13 北京航空航天大学 Real-time object tracking method capable of supporting target size change

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6590999B1 (en) * 2000-02-14 2003-07-08 Siemens Corporate Research, Inc. Real-time tracking of non-rigid objects using mean shift
US20070237359A1 (en) * 2006-04-05 2007-10-11 Zehang Sun Method and apparatus for adaptive mean shift tracking
CN101369346A (en) * 2007-08-13 2009-02-18 北京航空航天大学 Tracing method for video movement objective self-adapting window
CN102074000A (en) * 2010-11-23 2011-05-25 天津市亚安科技电子有限公司 Tracking method for adaptively adjusting window width by utilizing optimal solution of variance rate
CN103413312A (en) * 2013-08-19 2013-11-27 华北电力大学 Video target tracking method based on neighborhood components analysis and scale space theory
CN103886324A (en) * 2014-02-18 2014-06-25 浙江大学 Scale adaptive target tracking method based on log likelihood image
US9213899B2 (en) * 2014-03-24 2015-12-15 International Business Machines Corporation Context-aware tracking of a video object using a sparse representation framework
CN105117720A (en) * 2015-09-29 2015-12-02 江南大学 Object scale self-adaption tracking method based on spatial-temporal model
CN105488815A (en) * 2015-11-26 2016-04-13 北京航空航天大学 Real-time object tracking method capable of supporting target size change

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
J. NING 等: "Scale and orientation adaptive mean shift tracking", 《IET COMPUTER VISION》 *
JIFENG NING 等: "Scale and Orientation Adaptive Mean Shift Tracking", 《IET COMPUTER VISION》 *
ROBERT T. COLLINS: "Mean-shift Blob Tracking through Scale Space", 《PROCEEDINGS OF THE 2003 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR’03)》 *
TOMAS VOJIR 等: "Robust scale-adaptive mean-shift for tracking", 《PATTERN RECOGNITION LETTERS》 *
屈立安 et al.: "An Algorithm Invocation Method Applied to a Distributed Target Tracking Simulation System", Computer Measurement & Control *
田浩 et al.: "Adaptive Target Tracking Algorithm Fusing Two-Layer Kalman Filtering and Mean Shift", Journal of Wuhan University of Technology *
邹贽丞 et al.: "Improved Scale-Adaptive Algorithm Based on Contextual Information", Sciencepaper Online *
陈胜蓝 et al.: "Mean Shift Tracking Algorithm with Adaptive Kernel Bandwidth", Journal of Hunan University of Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194951A (en) * 2017-05-02 2017-09-22 中国科学院大学 Method for tracking target based on restricted structure graph search
CN109274964A (en) * 2018-11-09 2019-01-25 北京奇艺世纪科技有限公司 A kind of video lens type information modification method and device
CN109523573A (en) * 2018-11-23 2019-03-26 上海新世纪机器人有限公司 The tracking and device of target object
CN109685825A (en) * 2018-11-27 2019-04-26 哈尔滨工业大学(深圳) Local auto-adaptive feature extracting method, system and storage medium for thermal infrared target tracking
CN110133641A (en) * 2019-04-19 2019-08-16 电子科技大学 A kind of through-wall imaging radar target tracking method of dimension self-adaption
CN110133641B (en) * 2019-04-19 2023-04-25 电子科技大学 Scale-adaptive through-wall imaging radar target tracking method
CN112907630A (en) * 2021-02-06 2021-06-04 洛阳热感科技有限公司 Real-time tracking method based on mean shift prediction and space-time context information

Similar Documents

Publication Publication Date Title
CN106127811A (en) Target scale adaptive tracking method based on context
CN106228185B (en) A kind of general image classifying and identifying system neural network based and method
CN109800692B (en) Visual SLAM loop detection method based on pre-training convolutional neural network
CN106127688B (en) A kind of super-resolution image reconstruction method and its system
CN106568445B (en) Indoor trajectory predictions method based on bidirectional circulating neural network
CN106022251B (en) The double interbehavior recognition methods of the exception of view-based access control model co-occurrence matrix sequence
Markos et al. Unsupervised deep learning for GPS-based transportation mode identification
CN106952288A (en) Based on convolution feature and global search detect it is long when block robust tracking method
CN105427308A (en) Sparse and dense characteristic matching combined image registration method
CN109035172A (en) A kind of non-local mean Ultrasonic Image Denoising method based on deep learning
CN107944354A (en) A kind of vehicle checking method based on deep learning
CN106295564A (en) The action identification method that a kind of neighborhood Gaussian structures and video features merge
CN109598220A (en) A kind of demographic method based on the polynary multiple dimensioned convolution of input
CN105046659A (en) Sparse representation-based single lens calculation imaging PSF estimation method
CN111931722B (en) Correlated filtering tracking method combining color ratio characteristics
CN102236786B (en) Light adaptation human skin colour detection method
CN107194873A (en) Low-rank nuclear norm canonical facial image ultra-resolution method based on coupling dictionary learning
CN111178261A (en) Face detection acceleration method based on video coding technology
CN106372597A (en) CNN traffic detection method based on adaptive context information
CN106683084B (en) It is a kind of based in the ranks as the TDI image deformation degree method for objectively evaluating of bias estimation
CN112990102B (en) Improved Centernet complex environment target detection method
CN106846377A (en) A kind of target tracking algorism extracted based on color attribute and active features
CN111539987B (en) Occlusion detection system and method based on discrimination model
CN112329716A (en) Pedestrian age group identification method based on gait characteristics
CN111242839A (en) Image scaling and cutting method based on scale grade

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20161116