CN108470354A - Video target tracking method, device and implementation device - Google Patents
- Publication number: CN108470354A (application CN201810249416.5A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments (G — Physics; G06 — Computing; G06T — Image data processing or generation, in general; G06T7/00 — Image analysis; G06T7/20 — Analysis of motion)
- G06T2207/10016 — Video; image sequence (G06T2207/00 — Indexing scheme for image analysis or image enhancement; G06T2207/10 — Image acquisition modality)
Abstract
The present invention provides a video target tracking method, device and implementation device. The method includes: within a set image range, detecting a feature point set in the current frame and screening it according to preset screening conditions; then, based on the screened feature point set, performing feature point matching, motion estimation and tracking-status analysis on the target object; and, according to the matching result, the motion estimation result and the tracking-status analysis result, updating the feature point sets of the target object and its neighborhood background, the appearance features of the target object and its neighborhood background, and the inter-frame motion parameters of the target object and its neighborhood background, so as to update the tracking strategy for the target object. The tracking result in the present invention not only reflects the position of the target object in a timely manner, but also accurately reflects the extent and rotation angle of the target object. It gives the tracking of target objects across video frames good robustness and stability while keeping computational complexity relatively low, achieving a balance between tracking robustness and computation speed.
Description
Technical field
The present invention relates to the technical field of video target tracking, and in particular to a video target tracking method, device and implementation device.
Background technology
Motion tracking refers to detecting a target of interest in a continuous image sequence to obtain information such as its position, extent and form, and to establish the correspondence of the target across the continuous video sequence, providing reliable data for subsequent video understanding and analysis. Traditional tracking methods build a model of the target and, when a new frame arrives, track the target by searching for the best likelihood of the target model. Given concerns about algorithmic complexity, they usually return only the position of the tracked target, not information such as the target's imaging extent or rotation in the video, and they are easily affected by cluttered background, occlusion, abrupt motion changes and other factors, causing tracking drift or even tracking failure. Existing tracking methods therefore either perform well in terms of computational complexity while sacrificing robustness to some extent, or emphasize robustness at the cost of computation speed; it is usually difficult to achieve both.
Invention content
In view of this, the purpose of the present invention is to provide a video target tracking method, device and implementation device, so that the tracking of target objects across video frames has good robustness and stability while computational complexity remains relatively low, balancing tracking robustness and computation speed.
In a first aspect, an embodiment of the present invention provides a video target tracking method, including: initializing tracking parameters, where the tracking parameters include at least the position and extent of the target object, the inter-frame motion parameters of the target object and its neighborhood background, the feature point sets of the target object and its neighborhood background, and one or more appearance features of the target object and its neighborhood background; within a set image range, detecting a feature point set in the current frame and screening it according to preset screening conditions, where the feature point set includes feature points and the feature vector corresponding to each feature point; matching the screened feature point set against the feature point sets of the target object and the neighborhood background in the previous frame, respectively; performing motion estimation on the target object according to the screened feature points; performing tracking-status analysis on the target object in the current frame according to the distance between the screened feature points and the center position of the target object and according to the appearance features of the target object; and, according to the matching result, the motion estimation result and the tracking-status analysis result, updating the feature point sets of the target object and the neighborhood background, the appearance features of the target object and the neighborhood background, and the inter-frame motion parameters of the target object and the neighborhood background, so as to update the tracking strategy for the target object.
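The per-frame flow of the first aspect — detect, screen, match/estimate, update — can be sketched roughly as follows. This is a minimal illustrative skeleton, not the patent's concrete algorithm: the helper names, the response-based screening condition, and the mean-of-points motion estimate are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the claimed per-frame tracking loop; the helpers
# stand in for the patent's detection, screening and estimation steps.

def detect_points(frame, search_range):
    # Stand-in detector: return (point, descriptor) pairs inside the range.
    (x0, y0, x1, y1) = search_range
    return [(p, d) for (p, d) in frame
            if x0 <= p[0] <= x1 and y0 <= p[1] <= y1]

def screen(points, min_response):
    # Preset screening condition: keep points with a strong enough response.
    return [(p, d) for (p, d) in points if d[0] >= min_response]

def track_frame(state, frame):
    pts = screen(detect_points(frame, state["search_range"]),
                 state["min_response"])
    state["target_points"] = pts          # update the feature point set
    if pts:                               # motion estimate = mean point shift
        cx = sum(p[0] for p, _ in pts) / len(pts)
        cy = sum(p[1] for p, _ in pts) / len(pts)
        state["motion"] = (cx - state["center"][0], cy - state["center"][1])
        state["center"] = (cx, cy)
    return state

state = {"center": (5.0, 5.0), "search_range": (0, 0, 10, 10),
         "min_response": 0.5, "motion": (0.0, 0.0), "target_points": []}
frame = [((6.0, 5.0), (0.9,)), ((4.0, 6.0), (0.8,)), ((20.0, 20.0), (0.9,))]
state = track_frame(state, frame)
print(state["center"], state["motion"])   # prints: (5.0, 5.5) (0.0, 0.5)
```

The out-of-range point at (20, 20) is discarded by the image-range restriction, mirroring how the claimed method limits detection to a set image range around the target.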
In a second aspect, an embodiment of the present invention provides a video target tracking device, including: an initialization module for initializing the tracking parameters, where the tracking parameters include at least the position and extent of the target object, the inter-frame motion parameters of the target object and its neighborhood background, the feature point sets of the target object and its neighborhood background, and one or more appearance features of the target object and its neighborhood background; a screening module for detecting the feature point set in the current frame within a set image range and screening it according to preset screening conditions, where the feature point set includes feature points and the feature vector corresponding to each feature point; a feature point matching module for matching the screened feature point set against the feature point sets of the target object and the neighborhood background in the previous frame, respectively; a motion estimation module for performing motion estimation on the target object according to the screened feature points; a tracking-status analysis module for performing tracking-status analysis on the target object in the current frame according to the distance between the screened feature points and the center position of the target object and according to the appearance features of the target object; and an update module for updating, according to the matching result, the motion estimation result and the tracking-status analysis result, the feature point sets of the target object and the neighborhood background, the appearance features of the target object and the neighborhood background, and the inter-frame motion parameters of the target object and the neighborhood background, so as to update the tracking strategy for the target object.
In a third aspect, an embodiment of the present invention provides a video target tracking implementation device, including a processor and a machine-readable storage medium; the machine-readable storage medium stores machine-executable instructions executable by the processor, and the processor executes the machine-executable instructions to implement the above video target tracking method.
The embodiments of the present invention bring the following advantageous effects:
The video target tracking method, device and implementation device provided in the embodiments of the present invention, after initializing the tracking parameters, detect the feature point set in the current frame within a set image range and screen it according to preset screening conditions; then match the screened feature point set against the feature point sets of the target object and the neighborhood background in the previous frame; then perform motion estimation on the target object according to the screened feature points, and perform tracking-status analysis on the target object in the current frame according to the distance between the screened feature points and the center position of the target object and according to the appearance features of the target object; finally, according to the matching result, the motion estimation result and the tracking-status analysis result, update the feature point sets, the appearance features and the inter-frame motion parameters of the target object and the neighborhood background, so as to update the tracking strategy for the target object. In this approach, the tracking result not only reflects the position of the target object in a timely manner, but also accurately reflects its extent and rotation angle, giving the tracking of target objects across video frames good robustness and stability while keeping computational complexity relatively low, thereby balancing tracking robustness and computation speed.
Other features and advantages of the present invention will be set forth in the following description; alternatively, some features and advantages can be inferred or unambiguously determined from the specification, or can be learned by implementing the above techniques of the present invention.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Description of the drawings
To illustrate the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention; for those of ordinary skill in the art, other drawings may be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of a video target tracking algorithm provided by an embodiment of the present invention;
Fig. 2 is a flow chart of a video target tracking method provided by an embodiment of the present invention;
Fig. 3 is a flow chart of initializing the tracking parameters provided by an embodiment of the present invention;
Fig. 4 is a flow chart of matching the screened feature point set against the feature point sets of the target object and the neighborhood background in the previous frame, provided by an embodiment of the present invention;
Fig. 5 is a flow chart of performing tracking-status analysis on the target object in the current frame, provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of tracking-status analysis based on the feature point matching situation, provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the target tracking and positioning process provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of updating the feature point sets, the appearance features and the inter-frame motion parameters of the target object and the neighborhood background, provided by an embodiment of the present invention;
Fig. 9 is a flow chart of updating the feature point sets of the target object and the neighborhood background, provided by an embodiment of the present invention;
Fig. 10 is a flow chart of another video target tracking method provided by an embodiment of the present invention;
Fig. 11 is a structural schematic diagram of a video target tracking device provided by an embodiment of the present invention;
Fig. 12 is a structural schematic diagram of a video target tracking implementation device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Referring to the flow chart of the video target tracking algorithm shown in Fig. 1: after target initialization, the initial target state X0 is obtained and the target appearance model A0 is initialized, entering the tracking phase. When video frame It arrives, the target is located in the current frame according to the previous target state and target model, obtaining the target state Xt in the current frame, and the appearance model At is updated according to the appearance features of the target in the current frame. In general, occlusion and tracking drift are inevitable during tracking; therefore, to achieve robust tracking, the current tracking status should be analyzed and the tracking strategy adjusted accordingly. In addition, achieving stable, robust tracking in complex scenes often requires fusing multiple features to build the feature model, so the multi-feature fusion problem also usually needs to be considered by a robust tracking algorithm.
A typical target tracking system mainly includes the following three steps:
(1) Building a target model. Regardless of the tracking strategy, a tracking algorithm needs to build an appearance model describing the target, and find the position of the target in the current frame according to that model.
(2) Target localization. According to the established target model, state information such as the position and extent of the target is searched for in the image frame. Depending on the algorithmic approach, solutions to video target tracking can generally be divided into stochastic algorithms and deterministic algorithms: stochastic methods estimate the optimal state of the target in the current frame given the previously observed data and states of the target, while deterministic methods reduce tracking to the problem of solving an optimal cost function.
(3) Target model updating. A tracking algorithm obtains the tracking result of the current frame by comparing the observed characteristic information of the current frame with the prior knowledge of that information (i.e. the target model). However, in the actual tracking process, the appearance features of the tracked target are not fixed, and appearance changes of the target fall into two cases. In the first case, factors such as illumination change, deformation and out-of-plane rotation cause genuine changes in the target's appearance in the image frame; here, the target appearance model should adapt in time and follow such changes. In the second case, the appearance change is caused by factors such as occlusion and noise; here, the appearance model should not follow the change of the current frame. The update requirements of the appearance model in these two cases are completely opposite, so how to perform robust target tracking when the target's appearance features change is a significant challenge.
Methods for searching for and localizing the target can be divided into stochastic algorithms and deterministic algorithms. Stochastic algorithms convert the target tracking problem into an optimal state estimation problem under a Bayesian framework, where the state is the tracking result, including parameters such as the position and extent of the target in the current frame. A stochastic tracking algorithm is divided into a prediction step and an observation-update step: given prior knowledge of the target such as its representation and initial state, the current state of the target is predicted according to a target motion model, and the best estimate of the target is then obtained by solving for the maximum a posteriori probability of the target state from the observed data. Classical stochastic tracking algorithms include the Kalman filter, the particle filter and their improved variants.
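The predict/update cycle of such stochastic trackers can be illustrated with a minimal one-dimensional Kalman filter for a single coordinate. This is a generic textbook sketch under a constant-position motion model, not the patent's filter; the noise values q and r are illustrative.

```python
# Minimal 1-D Kalman filter: predict with a constant-position model,
# then update the state estimate with a noisy position measurement z.

def kalman_step(x, p, z, q=0.01, r=0.25):
    x_pred, p_pred = x, p + q          # predict: add process noise q
    k = p_pred / (p_pred + r)          # Kalman gain from measurement noise r
    x_new = x_pred + k * (z - x_pred)  # correct toward the measurement
    p_new = (1.0 - k) * p_pred         # reduced posterior uncertainty
    return x_new, p_new

x, p = 0.0, 1.0                        # initial estimate and variance
for z in [1.0, 1.0, 1.0]:              # repeated measurements at position 1
    x, p = kalman_step(x, p, z)
print(x, p)
```

After three consistent measurements the estimate moves most of the way from the prior 0.0 toward 1.0, and the variance shrinks — the behavior the prediction/observation-update description above relies on.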
Deterministic algorithms realize tracking by measuring the similarity between candidate target regions in the current frame and a known target model, and the similarity measurement is often carried out through a matching algorithm. For example, the mean-shift algorithm uses the gradient of a non-parametric probability density: starting from the target position in the previous frame, it searches the neighborhood in the current frame for the image region whose kernel density estimate is most similar to the target color, and takes that region as the target position in the current frame. The Mean-shift and Cam-shift algorithms perform target tracking following this idea. To improve tracking robustness, it is usually necessary to preprocess the image frame sequence to improve image quality, and to build and update the target model.
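The gradient-ascent idea behind mean-shift can be shown with a toy one-dimensional version: starting from the previous-frame position, repeatedly move to the mean of the samples inside the kernel window until convergence. The sample data and bandwidth are illustrative assumptions.

```python
# Toy 1-D mean-shift mode seeking with a flat (uniform) kernel.

def mean_shift(samples, start, bandwidth=2.0, iters=20):
    x = start
    for _ in range(iters):
        window = [s for s in samples if abs(s - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)   # endpoint of the mean-shift vector
        if abs(new_x - x) < 1e-6:
            break                           # converged at a density mode
        x = new_x
    return x

# Density mode near 10; the tracker starts from the previous-frame position 8.
samples = [9.0, 9.5, 10.0, 10.5, 11.0, 3.0, 3.5]
print(mean_shift(samples, start=8.0))       # prints: 10.0
```

The iteration climbs toward the nearest mode (10.0) and ignores the distant cluster near 3, mirroring how mean-shift tracking locks onto the density peak closest to the previous target position.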
No matter which target localization strategy is used, a target model must be built and the best match of the target must be sought in the current frame according to that model. Building the appearance model that describes the target is therefore an important factor determining the robustness of a tracking algorithm, and the first problem in target appearance modeling is to choose appearance features that can effectively describe the target. According to the image features used to build the target appearance model, the methods for building appearance models can be divided into the following kinds:
(1) Appearance features based on pixel-value description. Building target features directly from pixel values can be divided into vector-based and matrix-based methods: vector-based methods directly convert an image region into a high-dimensional vector, while matrix-based methods typically build the target feature directly as a two-dimensional matrix. After such methods build the appearance features of the target, the target is tracked by computing the correlation between regions of the current frame image and the target template, and the target feature is updated with the tracking result in the current frame image.
(2) Appearance features based on optical flow. Optical flow methods use the spatio-temporal displacement density field of each pixel of the target image region as the target feature. There are generally two classes of optical flow computation, based on the brightness constancy constraint and on non-brightness-constancy constraints. Methods based on the brightness constancy constraint assume that the illumination condition is constant within a local region, and on this basis obtain the field of displacement vectors of each pixel in the image region. Non-brightness-constancy methods instead impose geometric constraints on the optical flow field by introducing the spatial context of pixels. In general, the computational complexity of optical flow methods is high.
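The brightness constancy constraint can be demonstrated with a minimal one-dimensional Lucas-Kanade-style estimate: solving Ix·d ≈ -(I1 - I0) for the displacement d by least squares. The ramp signal is synthetic, chosen so the true shift is exactly recoverable; this is a didactic sketch, not the patent's flow computation.

```python
# 1-D displacement estimate under brightness constancy:
# minimize sum over pixels of (Ix * d + It)^2  =>  d = -sum(Ix*It)/sum(Ix^2).

def lk_displacement(i0, i1):
    num, den = 0.0, 0.0
    for k in range(1, len(i0) - 1):
        ix = (i0[k + 1] - i0[k - 1]) / 2.0   # central spatial gradient
        it = i1[k] - i0[k]                   # temporal brightness difference
        num += ix * it
        den += ix * ix
    return -num / den if den else 0.0

i0 = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]          # I0(x) = x (a brightness ramp)
i1 = [v - 0.5 for v in i0]                   # I1(x) = I0(x - 0.5)
print(lk_displacement(i0, i1))               # prints: 0.5
```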
(3) Appearance features based on probability density description. The image histogram is the most common description of the gray-level probability distribution, as used by the Mean-shift and Camshift tracking algorithms; building target features from histograms is currently the most common approach among target tracking algorithms.
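A small sketch of this feature class: build normalized gray-level histograms and compare them with the Bhattacharyya coefficient (1.0 for identical distributions), which is the similarity measure conventionally paired with histogram trackers. The bin count and pixel data are illustrative.

```python
import math

def histogram(pixels, bins=4, max_val=256):
    # Normalized gray-level histogram over equal-width bins.
    h = [0.0] * bins
    for p in pixels:
        h[p * bins // max_val] += 1.0
    total = sum(h)
    return [v / total for v in h]

def bhattacharyya(h1, h2):
    # Similarity of two normalized histograms; 1.0 = identical.
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

target = histogram([10, 20, 200, 210, 30, 220])      # dark + bright pixels
candidate = histogram([15, 25, 205, 215, 35, 225])   # similar distribution
other = histogram([100, 110, 120, 130, 100, 110])    # mid-gray region
print(bhattacharyya(target, candidate))              # prints: 1.0
print(bhattacharyya(target, other))                  # prints: 0.0
```

The candidate region whose histogram best matches the target model would be selected as the tracking result in the current frame.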
(4) Appearance features based on covariance description. A target model built on covariance can describe the correlation between the parts of the target.
(5) Appearance features based on contour description. The tracked target is described by the closed contour curve of the target object's boundary, which forms the target's appearance feature. As the target scales, rotates or deforms, the contour feature can be updated adaptively and continuously, so this approach is suitable for tracking non-rigid targets.
(6) Appearance features based on local feature description. The target is described using only some of its local characteristics, such as points, lines, or local regions with certain signatures. A target model is built from local features and matched against the local features detected in the current frame to track the target. In this way, even if the target is partially occluded, tracking can still work effectively as long as some local feature points remain detectable. Common local features in target tracking include corner features (such as Harris corners), Gabor features, SIFT (Scale Invariant Feature Transform) features and SURF (Speeded Up Robust Features) features.
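Local-feature tracking hinges on matching descriptors between frames. The sketch below does nearest-neighbour matching with Lowe's ratio test (the 0.8 threshold is the conventional choice from the SIFT literature, not a value taken from this patent), using toy two-dimensional descriptors in place of real SIFT/SURF vectors.

```python
import math

def dist(a, b):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(descs_prev, descs_cur, ratio=0.8):
    matches = []
    for i, d in enumerate(descs_prev):
        ranked = sorted(range(len(descs_cur)),
                        key=lambda j: dist(d, descs_cur[j]))
        best, second = ranked[0], ranked[1]
        # Lowe ratio test: accept only unambiguous nearest neighbours.
        if dist(d, descs_cur[best]) < ratio * dist(d, descs_cur[second]):
            matches.append((i, best))
    return matches

prev = [(1.0, 0.0), (0.0, 1.0)]                     # previous-frame descriptors
cur = [(0.1, 0.9), (5.0, 5.0), (0.95, 0.05)]        # current-frame descriptors
print(match(prev, cur))                             # prints: [(0, 2), (1, 0)]
```

Each target feature point finds its counterpart in the current frame while the outlier descriptor (5.0, 5.0) attracts no match, which is what makes local features tolerant of partial occlusion.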
(7) Appearance features based on compressed sensing. Target tracking can be viewed as the problem of finding a sparse representation of the tracked target over a dynamically constructed and updated sample set. The tracked target is sparsely represented over the target sample set using norm minimization, and the tracked target is evaluated based on the sparse representation of the samples under a Kalman filter framework. Target tracking can also be viewed as a sparse approximation problem under a particle filter framework: the target is sparsely represented with regularized least squares, and in a new image frame the candidate target with the smallest sparse representation error is taken as the tracked target.
In the actual tracking process, due to factors such as occlusion, noise, illumination changes and changes in the distance between the target and the detector, the appearance features of the tracked target are not fixed. Current online adaptive appearance-update algorithms can be divided into two classes: generative methods and discriminative methods. Generative algorithms model only the appearance of the target, without considering the target model's ability to separate the target from the background and other objects; such methods first build a target appearance model and then search for and track the target by seeking the maximum likelihood or the maximum a posteriori probability. Discriminative algorithms treat target tracking as a target detection problem: an online-trained and updated classifier separates the target from the local region of its neighborhood background. In the initial frame, the user first determines the target, from which a feature set describing the target and a feature set describing the target's neighborhood background are obtained; in subsequent frames, a binary classifier separates the target from the background. To cope with appearance changes, the classifier needs to be updated when appropriate.
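The discriminative scheme can be illustrated with the simplest possible online binary classifier: a nearest-centroid rule separating target features from neighbourhood-background features, with the target centroid folded forward after each confident classification. The feature values, the centroid rule and the update policy are all illustrative assumptions, not the patent's classifier.

```python
def centroid(feats):
    # Mean vector of a list of equal-length feature tuples.
    n = len(feats)
    return tuple(sum(f[k] for f in feats) / n for k in range(len(feats[0])))

def classify(f, c_target, c_bg):
    # Nearest-centroid binary decision: target vs neighbourhood background.
    d = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return "target" if d(c_target) < d(c_bg) else "background"

# Initial frame: user-delimited target vs neighbourhood-background features.
c_t = centroid([(0.9, 0.8), (1.0, 0.9)])
c_b = centroid([(0.1, 0.2), (0.0, 0.1)])

label = classify((0.85, 0.75), c_t, c_b)
print(label)                                  # prints: target
if label == "target":
    # Online update: fold the new observation into the target model.
    c_t = centroid([c_t, (0.85, 0.75)])
```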
Existing tracking algorithms always trade off between robustness, tracking accuracy, stability and computational complexity. Their specific shortcomings are as follows:
(1) The tracking result usually includes only the position of the target, not its extent. Traditional tracking algorithms build a target model and obtain the current target position in the current frame by search and matching. Given the computational complexity requirements of tracking applications, the tracking result usually does not include the extent of the target, let alone its rotation angle, because the tracked target position is only an optimal search over the two-dimensional image; to obtain the extent or rotation angle of the target, the search space of the optimal match would expand into three or even four dimensions, greatly increasing computational complexity. However, in many application scenarios, accurately knowing the extent and rotation angle of the target is of great importance to further processing.
(2) Robustness to occlusion, tracking drift and complex backgrounds needs to be further improved. Traditional tracking methods are very sensitive to target occlusion, even partial occlusion. Furthermore, lacking accurate analysis and judgment of tracking drift and tracking loss, they easily introduce background information into the target's tracking model, and thus find it difficult to handle abnormal conditions in the tracking process in time, causing tracking failure.
(3) Computational complexity has always been a key factor of tracking algorithms, and it is difficult to balance performance in all respects. An excellent tracker or target tracking method should balance robustness, stability and computational complexity, and is a complete system requiring multiple mutually cooperating components; yet traditional tracking methods at present either perform well in computational complexity while sacrificing robustness to some extent, or emphasize robustness at the cost of computation speed, and it is usually difficult to achieve both.
Based on this, embodiments of the present invention provide a video target tracking method, device and implementation device. The technique can be applied to tracking target objects across continuous video frames, and may be implemented with related software or hardware; it is described below through embodiments.
Referring to Fig. 2, a flow chart of a video target tracking method; the method comprises the following steps:
Step S202: initialize the tracking parameters. The tracking parameters include at least the position and extent of the target object, the inter-frame motion parameters of the target object and its neighborhood background, the feature point sets of the target object and its neighborhood background, and one or more appearance features of the target object and its neighborhood background. The target object may also simply be called the target.
Step S202 may specifically be accomplished as follows: (1) extract the appearance features of the target object and the neighborhood background in the current frame, where the appearance features include at least one or more of a feature descriptor vector, scale factor information, color features, texture features and edge features; (2) determine the center position of the target object and the length and width of the target rectangle; (3) initialize the inter-frame motion parameters of the target object and the neighborhood background to the difference of the corresponding transform parameters between the current frame and the previous frame; (4) initialize the feature point set of the target object to the feature points detected within the target object's rectangle, and initialize the feature point set of the neighborhood background to the feature points of the neighborhood background detected in a neighboring region within a preset range outside the target object; (5) initialize the appearance features of the target object and the neighborhood background to the extracted appearance feature vectors.
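The parameter set assembled by this initialization step can be sketched as a plain record. The field names, the tuple layout of the motion parameters and the neighbourhood margin are illustrative choices for the sake of the example, not terminology fixed by the patent.

```python
# Hypothetical sketch of step S202's parameter initialization.

def init_tracking_params(center, size, target_points, background_points,
                         appearance, margin=20):
    h, w = size
    return {
        "center": center,                       # target rectangle center (x, y)
        "size": (h, w),                         # target rectangle height/width
        "background_rect": (h + 2 * margin, w + 2 * margin),  # neighbourhood
        "motion_target": (0.0, 0.0, 0.0, 1.0),      # dx, dy, rotation, scale
        "motion_background": (0.0, 0.0, 0.0, 1.0),  # likewise: no motion yet
        "target_points": list(target_points),       # points inside the rect
        "background_points": list(background_points),
        "appearance": dict(appearance),             # extracted feature vectors
        "frame_index": 0,                           # t = 0 in the first frame
    }

params = init_tracking_params(
    center=(160, 120), size=(60, 40),
    target_points=[((150, 110), (0.7,))],
    background_points=[((40, 30), (0.4,))],
    appearance={"target": [0.7], "background": [0.4]})
print(params["frame_index"], params["background_rect"])  # prints: 0 (100, 80)
```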
Considering that the essence of video target tracking is to pick the target out of its neighborhood background region in each frame so as to locate it accurately, and that the target is typically distinguished from its neighborhood background region by the difference between the appearance features of the target and of its neighborhood background, building a model only of the target and using it as the basis of tracking makes it difficult to obtain stable, robust tracking results. Therefore, the model built here includes both the target model and a model of its neighborhood background.
In a video frame, the inter-frame motion of the background is caused by the motion of the detector itself, while changes in the position and extent of the target between frames are caused jointly by the target's own motion and the detector's motion; the motion laws of the target and of the neighborhood background are not the same. Correspondingly, the inter-frame position changes of feature points in the target or neighborhood background region actually reflect the inter-frame motion of those feature points. In practical applications, neither the target's nor the background's inter-frame motion changes abruptly; especially for applications under high-frame-rate sampling conditions, the position change between frames is always continuous. Therefore, the inter-frame change of the corresponding feature point positions should also have corresponding continuity and not change abruptly. Observing the inter-frame displacement of a feature point at (x, y) in continuous video frames, the process of motion changing over time can likewise be expressed as
{U_i(1), …, U_i(t)} = { u_i(x, y, t_0) : 1 ≤ t_0 ≤ t }    (1)
where u_i(x, y, t_0) is the inter-frame displacement observed at time t_0 for feature point i at (x, y). Over a period of time, this inter-frame motion can be simulated as uniform motion superimposed with some Gaussian noise. Therefore, a single Gaussian distribution model is likewise used to describe this "feature point inter-frame displacement process", i.e. a Gaussian distribution N(μ_u, σ_u) is used to model {U_i(1), …, U_i(t)}.
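Maintaining the single-Gaussian displacement model N(μ_u, σ_u) online amounts to a running mean and variance over the observed displacements. The sketch below uses a Welford-style incremental update, which is one standard numerically stable way to do this (an implementation choice, not specified by the patent); the displacement samples are synthetic "uniform motion plus noise".

```python
import math

def update_gaussian(mu, m2, n, u):
    # Incorporate one new displacement observation u into the running model.
    n += 1
    delta = u - mu
    mu += delta / n                 # running mean
    m2 += delta * (u - mu)          # running sum of squared deviations
    sigma = math.sqrt(m2 / n) if n > 1 else 0.0
    return mu, m2, n, sigma

mu, m2, n, sigma = 0.0, 0.0, 0, 0.0
for u in [1.0, 1.2, 0.8, 1.1, 0.9]:   # near-uniform motion with noise
    mu, m2, n, sigma = update_gaussian(mu, m2, n, u)
print(round(mu, 2), round(sigma, 2))   # prints: 1.0 0.14
```

A new displacement far outside μ_u ± kσ_u would be flagged as a discontinuity, which is exactly the continuity property the text argues feature point motion should satisfy.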
On the other hand, the target and its neighborhood background typically have different appearance features such as color, texture and edges, so their corresponding appearance models should not be identical either. Similarly, due to factors such as noise, illumination changes, motion of the target and detector, and background changes, even if the entire scene is static, different image frames acquired by the same detector at different moments cannot be exactly identical. Thus in video, even for a stable feature point, the image information in the local region of its position (x, y), including information such as gray values, also changes over time. If at time t feature point i is at position (x, y), and feature_i(x, y, t_0) is the feature value observed at feature point (x, y) at time t_0, the "characteristic information process" occurring in the neighborhood of this feature point (the process by which the characteristic information changes over time) can be expressed as
{Feat_i(1), …, Feat_i(t)} = { feature_i(x, y, t_0) : 1 ≤ t_0 ≤ t }    (2)
Referring to Fig. 3, a flow chart of initializing the tracking parameters. Video across continuous frames is always relatively stable and does not change abruptly. Even under occlusion, where the image region of the occluder blocks the occluded object, the occluder's image region is itself relatively stable; on the other hand, the occluded object may still reappear over time, and the previously observed prior knowledge still plays an important role in understanding the occluded object. Accordingly, the various feature vectors extracted by SURF feature point detection, including the SURF descriptor information, the scale factor information and other characteristic information, change relatively stably over time in video. This "characteristic information process" can be described with a single Gaussian distribution model, i.e. a Gaussian distribution N(μ_feat, σ_feat) is used to model {Feat_i(1), …, Feat_i(t)}. The appearance model of the target is built from the Gaussian distribution models of the feature vectors of each feature point in the target; similarly, the appearance model of the target's neighborhood background is constituted by the Gaussian distribution models of the feature vectors of each feature point located in the target's neighborhood background region.
In Fig. 3, the model is initialized in the first frame, including the following parameters:

(1) The target is represented by a rectangle. Its initial position and extent are (center_x_0, center_y_0, h_0, w_0), where (center_x_0, center_y_0) is the coordinate of the rectangle center and h_0, w_0 are the height and width of the target rectangle. The target neighborhood background region is initialized as the rectangle centered at (center_x_0, center_y_0) with an enlarged height and width, minus the region of the target itself.

(2) The inter-frame motion parameters of the target are initialized to no translation, no rotation, and no scaling; the inter-frame motion parameters of the target neighborhood background are likewise initialized to no translation, no rotation, and no scaling. The index t denotes the frame number, with t = 0 in the first frame. The mean of the target's inter-frame motion Gaussian model is initialized from the first frame; after the second frame arrives, the variance of this Gaussian model is initialized as the difference between the transformation parameters detected in the first and second frames. The mean of the background's inter-frame motion Gaussian model is initialized in the same way, and after the second frame arrives the variance of the background motion Gaussian model is likewise initialized as the difference between the transformation parameters detected in the first and second frames.

(3) The SURF feature point sets of the target and the neighborhood background are initialized. SURF feature points are detected in the rectangular region centered at (center_x_0, center_y_0) with the enlarged height and width, giving the feature point set Pg_0. The feature points inside the target rectangle are initialized as the target feature point set, and the feature points in the target neighborhood background region as the background feature point set.

(4) The apparent models of the target and of the neighborhood background region are initialized. For each feature point, the SURF detection algorithm extracts, in frame t, the descriptor vector of the i-th feature point at coordinate (x, y), together with its corresponding scale factor. Depending on the tracked object and the application, additional vectors such as texture, gradient, and the gray-level mean in the feature point's neighborhood may also be extracted. Each feature vector of each SURF feature point is assumed to follow a Gaussian distribution as the video frames change. With t = 0 in the first frame, the mean of the Gaussian component of each feature point is initialized to the observed value of that feature point. At the first frame, the variance of the Gaussian model of each feature vector is initialized to a relatively large initial value.

Tracking then begins. After initialization, the established model comprises: (1) the position and bounding rectangle of the target, described by the parameters (center_x_0, center_y_0, h_0, w_0); (2) the motion models, i.e. the target motion Gaussian model and the background motion Gaussian model, each described by its mean and variance parameters; (3) the detected feature point set Pg_0, partitioned between target and neighborhood background into the target feature point set and the background feature point set; (4) the feature vector of each feature point and the corresponding Gaussian model parameters of each feature vector.
Step S204: within the set image range, detect the feature point set in the current frame and screen it according to preset screening conditions; the feature point set comprises the feature points and their corresponding feature vectors.

Step S204 may specifically be implemented as follows: (1) determine the top-left and bottom-right corner coordinates of the image rectangle bounding the range of the image to be detected; (2) within this image rectangle, perform feature point detection to obtain the coordinates of the feature points; (3) compute the trace of each feature point's Hessian matrix and the feature vector corresponding to each feature point; a feature vector comprises the descriptor vector, the scale factor, and color, texture, and edge vectors; (4) screen the feature points in the set according to the following conditions: the trace of a feature point's Hessian matrix must have the same sign as the Hessian trace of the corresponding feature point in the previous video frame; the distance between a feature point and the corresponding feature point in the previous frame must be below a preset distance threshold; the Euclidean distance between the feature vectors of a feature point and of the corresponding feature point in the previous frame must satisfy a preset feature-vector threshold; the displacement length, displacement direction, and relative position relation between a feature point and the corresponding feature point in the previous frame must satisfy a preset displacement-consistency threshold; and when several feature points stand in a many-to-one matching relation with one feature point of the previous frame, the feature point with the smallest Euclidean distance is selected from among them.
Step S206: match the screened feature point set against the feature point sets of the target object and the neighborhood background of the previous frame, respectively.

In view of the computational complexity involved, feature point detection and matching is usually not performed over the full image; in each frame, the position and extent of the target are determined by inter-frame matching of feature points. The previously established target motion model is used to assess where the target is likely to be located in the current frame, and the assessment result determines the local region of the new frame image in which feature point detection and matching are carried out. On the basis of the tracking-accuracy assessment, the image range in which the next frame performs feature point detection is determined from the tracking result of the current frame according to the corresponding formula, where (center_x_t, center_y_t, h_t, w_t) are the center coordinates, height, and width of the target in the current frame, thrdU is a threshold constant usually valued 2.4-3, and (LTx, LTy) and (RBx, RBy) respectively represent the coordinates of the top-left and bottom-right corners of the image rectangle in which feature point detection is performed in the next frame.
SURF feature points are detected inside the rectangular image block determined by the coordinates (LTx, LTy) and (RBx, RBy); the image coordinates (x_i, y_i) of the detected feature points, the traces of their Hessian matrices, and the feature vector corresponding to each feature point are computed.

Feature points belonging to the target and feature points belonging to the neighborhood background region obey different laws of motion and differ in appearance features such as color and shape, so the feature point set Pg_{t-1} is divided into two classes: the feature point set located in the target region and the feature point set located in the background region. After the feature point set of the current frame is detected, it is matched against the target feature point set and the background feature point set respectively, where TN(t-1) is the number of target feature points and BN(t-1) the number of background feature points at time t-1. The matching result between feature point sets can be expressed as a binary vector in the pairing space, Matched = {0, 1}^M; each entry matched_ij of the vector Matched represents one pairing response, matched_ij = 1 indicating a successful match and matched_ij = 0 indicating that feature point i of the previous frame and feature point j of the current frame fail to pair. M denotes the pairing space formed by the feature point sets of the two frames and can be described by a two-dimensional matrix of size N(t-1) × N(t), where N(t-1) and N(t) respectively denote the numbers of feature points of the previous and current frames participating in pairing. A feature point of the previous frame either matches exactly one feature point of the current frame or matches no feature point at all; that is, the pairing must satisfy the constraint Rstr.
See Fig. 4 for the flow chart of matching the screened feature point set against the feature point sets of the target object and the neighborhood background of the previous frame; it specifically comprises the following steps:

(1) Matching based on the Hessian trace. SURF feature points are local extrema of the image; according to the type of extremum, they fall into two classes, points whose center gray value is a local maximum in the neighborhood and points whose center gray value is a local minimum, and clearly the two classes should not be matched with each other. By computing the trace of a SURF feature point's Hessian matrix (i.e. the sum of the diagonal elements of the Hessian), one can determine whether the center gray value is a local maximum or minimum. Denote the trace of a feature point by trace: if the Hessian trace is positive, the neighborhood pixels are brighter than the feature point center; if the Hessian trace is negative, the neighborhood pixels are darker than the feature point center. For two candidate feature points i and j in the matching space M, only if their Hessian traces have the same sign are they considered a possible match, i.e. matched_ij = 1, and they are taken into the preliminary candidate match set candidate_matchpair0.
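The trace-sign screening of step (1) can be sketched as follows; the names are illustrative, and the trace values would come from a SURF detector.

```python
def trace_sign_compatible(trace_i, trace_j):
    """A positive Hessian trace and a negative one mark opposite
    extremum types; only same-sign traces may match."""
    return (trace_i >= 0) == (trace_j >= 0)

def candidate_pairs(traces_prev, traces_cur):
    """Build candidate_matchpair0: all index pairs (i, j) whose
    Hessian traces share a sign."""
    return [(i, j)
            for i, ti in enumerate(traces_prev)
            for j, tj in enumerate(traces_cur)
            if trace_sign_compatible(ti, tj)]
```

This cheap sign test prunes the pairing space before the more expensive descriptor comparisons run.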
(2) Matching based on the displacement-size constraint. Since the inter-frame motion of a feature point is assumed not to mutate, a current-frame feature point j that can match a previous-frame feature point i must lie within a certain range centered on feature point i; beyond that range a feature point has no possibility of matching i. Accordingly, pairs (i, j) in the candidate match set candidate_matchpair0 whose inter-frame distance Dist_mij exceeds the prescribed threshold thre_σm are rejected, yielding the new candidate match set candidate_matchpair1.
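The displacement-size screening of step (2) can be sketched as follows, with thre_σm passed in as `max_dist`; the names are illustrative.

```python
import math

def filter_by_displacement(pairs, max_dist):
    """Reject candidate pairs ((xi, yi), (xj, yj)) whose inter-frame
    distance exceeds max_dist; feature points move continuously, so
    a large jump indicates a false match."""
    kept = []
    for (xi, yi), (xj, yj) in pairs:
        if math.hypot(xj - xi, yj - yi) <= max_dist:
            kept.append(((xi, yi), (xj, yj)))
    return kept
```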
(3) Matching based on the feature-vector constraint. The distances between the feature vectors of the previous frame's target feature point set and background feature point set and the feature vectors of the feature points detected in the current frame are computed separately. According to the established feature-point apparent model, each feature-vector distance is compared with the variance of the corresponding feature model. If every distance is below its corresponding threshold, the pair is considered a match and the pairing response matched_ij is set to 1; otherwise the pair is considered a mismatch and matched_ij is set to 0:

matched_ij = match_d & match_s & match_o   (12)

On this basis, a new candidate match set candidate_matchpair2 is further selected from the candidate match set candidate_matchpair1, where thre_σ is a threshold usually set to 2.4-3.
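The conjunction of per-feature tests in equation (12) can be sketched as follows; the per-feature thresholds are expressed as k·σ with k in the stated 2.4-3 range, and the names are illustrative.

```python
def multi_feature_match(dists, sigmas, k=2.7):
    """matched_ij = match_d & match_s & match_o: the pair is accepted
    only if every feature-type distance (descriptor d, scale s,
    other o) falls below k times that feature's model standard
    deviation."""
    return all(d < k * s for d, s in zip(dists, sigmas))
```

A single oversized distance in any one feature type is enough to set matched_ij to 0.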
(4) Matching based on the displacement-consistency constraint. The inter-frame motion of the feature points in the target region arises from the target's change of position between frames, while the inter-frame displacement of the feature points in the background region is caused by the motion of the detector. Feature points belonging to the target should therefore satisfy a common motion constraint in their inter-frame position changes, and feature points belonging to the background should likewise satisfy a common motion constraint. We summarize such motion constraints as three conditions: the inter-frame displacements of feature points of the same class should have similar lengths, i.e. the inter-frame displacement vector lengths of correctly matched feature points should be consistent; the inter-frame displacements of feature points of the same class should have similar directions, i.e. the inter-frame displacement vector directions of correctly matched feature points should also be consistent; and, in most cases, the mutual position relations of correctly matched feature points should remain essentially unchanged before and after the inter-frame displacement.
Borrowing the idea of the RANSAC algorithm, feature point pairs satisfying the three conditions above are selected from the set candidate_matchpair2; the selection can be divided into three steps. (1) Choose any two inter-frame feature point pairs (i1, j1) and (i2, j2) satisfying certain conditions and estimate the motion parameters. Feature points i1 and i2 belong to the previous frame and j1 and j2 to the current frame. Compute the inter-frame displacement vector a from i1 to j1 with length |a|, the vector b from i2 to j2 with length |b|, and the angle ∠θab between vectors a and b; compute the intra-frame vector c from i1 to i2 with length |c| and the vector d from j1 to j2 with length |d|. Compute the mean and variance of the inter-frame lengths |a| and |b|, the mean and variance of the intra-frame lengths |c| and |d|, and the corresponding variance-to-mean ratios Par1 and Par2. These ratios characterize how the inter-frame and intra-frame vector lengths of different candidate pairs vary; since motion does not mutate, the motion of a feature point should obey the overall motion of the target or background region to which it belongs, so neither ratio should be large, nor should ∠θab. If the inter-frame variance-to-mean ratio Par1 is below 0.24, the intra-frame variance-to-mean ratio Par2 is below 0.2, and the angle ∠θab between the two pairs (i1, j1) and (i2, j2) is below a radian value of 0.15, then the mean length and mean phase angle of vectors a and b are taken as the model parameters and the procedure continues to the next step; otherwise a new pair of feature points is selected. (2) With the estimated model parameters and a set threshold, compute for every candidate pair (i_n, j_n) in candidate_matchpair2 its inter-frame displacement length |i_n, j_n| and direction; compute the mean length of the vectors between feature points in the previous frame and in the current frame, and the variance of the intra-frame vector lengths. If the deviations from the model parameters are below their thresholds (e.g. below 0.1 for the length deviation and below 0.3 for the direction deviation), the pair (i_n, j_n) is regarded as an inlier, otherwise as an outlier. Find the inliers in candidate_matchpair2 and record the number of inliers. (3) Find the estimate with the most inliers: if the inlier count exceeds a threshold, or equivalently the ratio of the largest inlier count to the total number of pairings in the set exceeds a prescribed threshold, the inliers under this estimate form the new candidate pairing set candidate_matchpair3; otherwise the above steps are repeated.
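Step (1) of the RANSAC-style selection can be sketched as follows; only the statistics Par1, Par2, and ∠θab are computed, and the names are illustrative.

```python
import math

def pairwise_motion_stats(i1, j1, i2, j2):
    """From two candidate pairs, compute the inter-frame vectors
    a = i1->j1 and b = i2->j2, the intra-frame vectors c = i1->i2 and
    d = j1->j2, and return (Par1, Par2, theta_ab): the variance/mean
    ratios of the inter- and intra-frame lengths, and the angle
    between a and b."""
    def length(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])

    la, lb = length(i1, j1), length(i2, j2)   # |a|, |b|
    lc, ld = length(i1, i2), length(j1, j2)   # |c|, |d|

    def var_mean_ratio(u, v):
        mean = (u + v) / 2.0
        var = ((u - mean) ** 2 + (v - mean) ** 2) / 2.0
        return var / mean if mean > 0 else float("inf")

    ax, ay = j1[0] - i1[0], j1[1] - i1[1]
    bx, by = j2[0] - i2[0], j2[1] - i2[1]
    if la > 0 and lb > 0:
        cos_t = max(-1.0, min(1.0, (ax * bx + ay * by) / (la * lb)))
        theta = math.acos(cos_t)
    else:
        theta = 0.0
    return var_mean_ratio(la, lb), var_mean_ratio(lc, ld), theta
```

A candidate model would be accepted only when Par1 < 0.24, Par2 < 0.2, and theta < 0.15 rad, as stated above.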
(5) Matching based on the pairing-uniqueness constraint. In the new candidate pairing set candidate_matchpair3 there may be cases where several feature points match one and the same feature point, which is clearly incorrect. All feature point pairings in the set candidate_matchpair3 that violate the one-to-one constraint are detected; among them, the pairings whose fusion distance Dist_intergral_ij between appearance feature vectors is not minimal are deleted, and only the pairing with the minimal fusion distance is retained as the matching result, as shown in Fig. 4, further yielding the new pairing set candidate_matchpair4. The fusion distance Dist_intergral_ij is obtained by weighted fusion of the distances between the various kinds of feature vectors, where weight_n is the normalized fusion weight of the n-th characteristic information, with n ∈ {d, s, o} being one of the features above; the distances between the feature vectors selected according to the actual situation of the video are used, the variance of each feature vector over time is computed by online learning, and the fusion weight is defined from it.
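The fusion distance can be sketched as follows; as an assumption, each weight is taken as the normalized inverse of the feature's learned variance, one plausible reading of the weight definition above.

```python
import numpy as np

def fused_distance(dists, variances):
    """Dist_intergral: weighted sum of per-feature distances. Each
    weight is the inverse of that feature's online-learned variance,
    normalized so the weights sum to 1 (assumed weighting scheme)."""
    inv = np.array([1.0 / v for v in variances])
    weights = inv / inv.sum()
    return float(np.dot(weights, dists))
```

Among several pairings that share a feature point, only the one with the smallest fused distance would be kept.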
Step S208: perform motion estimation on the target object according to the screened feature points.

The tracked region of the target is usually represented by a rectangle. Let the center of the target rectangle in the previous frame be xc_{t-1} = (center_x_{t-1}, center_y_{t-1}), with h_{t-1} and w_{t-1} its height and width. The inter-frame position change of the target and of its neighborhood background region can be regarded as the superposition of a translation along the horizontal and vertical directions and a scaling and rotation about the geometric center, and can be described by the transformation parameters (u_t, ρ_t, θ_t) for the target and the corresponding parameters for the neighborhood background, where u_t = (ux_t, uy_t) is the translation parameter, ρ_t the scaling parameter, and θ_t the rotation parameter. The inter-frame transformation equation of the target region is then

x_t = ρ_t R(θ_t)(x_{t-1} − xc_{t-1}) + xc_{t-1} + u_t   (15)

where R(θ_t) is the 2×2 rotation matrix of angle θ_t.
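The transformation of formula (15) can be sketched for a single point as follows; the function and parameter names are illustrative.

```python
import math

def transform_point(x, y, cx, cy, ux, uy, rho, theta):
    """Inter-frame transform: rotate by theta and scale by rho about
    the geometric center (cx, cy), then translate by (ux, uy)."""
    dx, dy = x - cx, y - cy
    c, s = math.cos(theta), math.sin(theta)
    nx = rho * (c * dx - s * dy) + cx + ux
    ny = rho * (s * dx + c * dy) + cy + uy
    return nx, ny
```

Applying this transform to the previous frame's feature points yields their estimated positions in the current frame.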
Ideally, the feature points inside the target follow the target and move consistently with it. If a feature point of frame t-1 is matched with a feature point at some position in frame t, the position estimate of the former at time t computed by formula (15) should coincide with the position of the matched point; in practice, however, owing to noise, changes of observation angle, and other factors, the estimated value and the observed value are not fully consistent. The observation can be regarded as the estimate with superimposed Gaussian noise. After the pairing set candidate_matchpair4 of the feature points of the two frames is obtained, the observation error is defined from the estimated positions of the previous frame's feature points in the current frame image and the observed positions of the matched current-frame feature points. The motion parameters minimizing the observation error are then sought by the method of nonlinear least-squares curve fitting, where the weight of each feature point is determined by its robustness: robust feature points are assigned larger weights.
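The least-squares estimation step can be sketched as follows. As an assumption, a closed-form weighted similarity (Procrustes) fit stands in for the nonlinear curve fitting described above; the names are illustrative.

```python
import numpy as np

def fit_similarity(prev_pts, cur_pts, weights=None):
    """Weighted least-squares fit of the inter-frame motion (u, rho,
    theta) from matched point pairs, via a closed-form similarity
    fit in complex form: q = rho * e^{i*theta} * p."""
    P = np.asarray(prev_pts, dtype=float)
    Q = np.asarray(cur_pts, dtype=float)
    w = np.ones(len(P)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    # weighted centroids; their difference gives the translation u
    mp = (w[:, None] * P).sum(axis=0)
    mq = (w[:, None] * Q).sum(axis=0)
    Pc, Qc = P - mp, Q - mq
    zp = Pc[:, 0] + 1j * Pc[:, 1]
    zq = Qc[:, 0] + 1j * Qc[:, 1]
    a = (w * zq * np.conj(zp)).sum() / (w * zp * np.conj(zp)).sum().real
    rho, theta = abs(a), float(np.angle(a))
    return mq - mp, rho, theta
```

More robust feature points would be given larger entries in `weights`, as the text prescribes.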
Step S210: analyze the tracking status of the target object in the current frame according to the distances between the screened feature points and the center of the target object and according to the appearance features of the target object.

Step S210 may specifically be implemented as follows: (1) according to the distance between each feature point and the center of the target object, detect misclassified feature points, reject them, and generate a first feature point set; (2) according to the appearance features of each feature point in the first feature point set, analyze whether tracking drift of the target object occurs in the current video frame.

While tracking a target object, tracking drift, occlusion (both partial and full occlusion), and tracking loss inevitably occur. Robust tracking therefore requires analyzing the current tracking result to judge whether tracking is accurate or whether drift, occlusion, or loss has occurred, and adjusting the tracking strategy in time to guarantee stable and robust tracking.
See Fig. 5 for the flow chart of the tracking-status analysis of the target object in the current frame. After the pairing set candidate_matchpair4 between the feature points of the current and previous frames is obtained, and the inter-frame motion parameters of the target and of its neighborhood background region are respectively estimated by the least-squares method, it is necessary to analyze whether tracking drift has occurred, and then to analyze, from the feature point matching situation, whether tracking is normal or whether occlusion or tracking loss has occurred.

Since tracking loss is typically caused by tracking drift, accurately judging whether tracking drift occurs is important for improving tracker performance. In the embodiment of the present invention, the feature points detected in the current frame are matched respectively against the target feature point set and the feature point set of the background region; the pairing set candidate_matchpair4 is found through the multi-stage, multi-factor screening above, and the inter-frame motion parameters of the target and of its neighborhood background are estimated from it accordingly. The rectangle of the target in the current frame is computed from the target's inter-frame motion parameters; feature points of the current frame inside the target rectangle are classified as target feature points, and feature points outside the target rectangle are classified as background feature points. In practical applications, however, feature points appearing around the target and in the adjacent background region are easily misclassified. If a feature point belonging to the background is misclassified as target, it may be matched successfully again in subsequent-frame feature point matching, possibly even continuously across frames, and will participate in the computation of the motion model parameters, causing tracking drift in subsequent frames or even tracking loss. In addition, noise and similar local image features also easily cause tracking drift.
In practical applications, the tracked target is a rigid body, or at least the target's shape does not mutate between frames. Accordingly, the relative positions of the background feature points within the background do not mutate between frames, and the relative positions of the target feature points within the target do not mutate either; for a rigid target in particular, the relative positions of the target feature points within the target change very little. On the basis that the relative position of a target feature point with respect to the target's geometric center undergoes no inter-frame mutation, misclassified feature points are detected. The distance of a target feature point to the geometric center is first normalized by the width and height of the target rectangle and used as the feature point's relative position; on this basis, the relative position of feature point i at coordinate (x, y) in frame t is computed and compared with the relative position of the same feature point in the previous frame. If the change exceeds 0.25, the feature point is considered to have been misclassified and thereby to cause tracking drift; it is then rejected from the target feature point set, the pairing set candidate_matchpair4 is updated to candidate_matchpair5, and the target's inter-frame motion parameters are re-estimated.
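The misclassification test can be sketched as follows; the 0.25 threshold is the one stated above, and the names are illustrative.

```python
def relative_position(x, y, cx, cy, h, w):
    """Relative position of a feature point inside the target
    rectangle: the offset from the geometric center (cx, cy)
    normalized by the rectangle's width w and height h."""
    return (x - cx) / w, (y - cy) / h

def is_misclassified(pos_now, pos_prev, thresh=0.25):
    """A point whose normalized relative position shifts by more than
    thresh between frames is treated as misclassified and rejected
    from the target feature point set."""
    dx = pos_now[0] - pos_prev[0]
    dy = pos_now[1] - pos_prev[1]
    return (dx * dx + dy * dy) ** 0.5 > thresh
```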
Because noise and similar local appearance features elsewhere in the image space may cause feature point mismatches and thereby tracking drift, such drift is detected under the assumption that the target's appearance information does not mutate between frames. If tracking drift occurs, part of the detected target range is in fact its neighborhood background, so the appearance information extracted within that range incorporates background information; compared with the prior knowledge of the target's appearance features, the appearance features extracted within that range differ considerably from those extracted when tracking is accurate, i.e. a mutation of the appearance information occurs.

According to the previously estimated target inter-frame motion parameters and the positions of the four vertices of the previous frame's target rectangle, the rectangular region currently representing the target is computed and the appearance feature vector within this rectangular region is extracted. This appearance feature vector is compared with its history to judge whether a mutation, and hence drift, has occurred; assessing whether the tracking of the current frame has drifted is thus converted into the problem of computing a likelihood probability. Since the currently estimated target motion parameters are themselves obtained by comparing inter-frame appearance feature vectors, judging from these same vectors whether tracking is accurate is unreliable; it is therefore usually necessary to choose an additional appearance feature vector and to use the change of its inter-frame variation as the basis for judging tracking accuracy.
For a tracking algorithm, however, one must on the one hand improve robustness and on the other hand guarantee computational efficiency. Compressed sensing theory holds that a signal projected onto a suitable transform domain yields sparse transform coefficients; by designing an efficient measurement matrix, a small number of useful measurements hidden in the sparse signal can then be obtained and related back to the signal. For the video tracking problem, what matters is whether the feature vector discriminates the target effectively; the target features are therefore transformed by a measurement matrix into a limited set of measurements, i.e. a compression vector, and the target is described directly by the dimension-reduced compression vector to obtain its appearance features. Compressed sensing theory itself guarantees that a small number of compressed measurements preserves the information of the original signal almost losslessly, which can greatly reduce the computational complexity of the algorithm. Following sparse theory, a high-dimensional Haar-like feature vector x is extracted from the candidate target region; under an orthogonal transform such a signal x yields K sparse transform coefficients, and a Gaussian random measurement matrix satisfying the restricted isometry property can be used directly to compress it, giving the compressed measurement vector y. One may set n = 10^6, K = 10, and the compressed measurement vector dimension m = 50. Each element y_i of the compressed measurement vector is thus the inner product of the i-th row vector of the measurement matrix with the Haar-like feature vector.

After the target position and extent in the current frame are determined by SURF feature point matching, image blocks of the same size as the target rectangle are sampled as positive samples with centers in the neighborhood of radius less than α around this position (α may be set to 3); 60 image blocks of the same size as the target rectangle are randomly sampled as negative samples with centers in the annulus of radius greater than ξ and less than β around the current target position, where ξ < β, β may be set to the side length of the rectangle, and ξ = 6. Compressed measurement vectors y are extracted from the representative positive and negative sample image blocks, and under the condition that tracking is accurate, the EM algorithm is used to compute and update the parameters (μ_1, σ_1) and (μ_0, σ_0) of the compressed measurement vectors y of the positive and negative samples, where μ_1, σ_1 and μ_0, σ_0 are respectively the mean and standard deviation for the real target and for the candidate background samples.
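The compressed measurement y = Rx can be sketched as follows. This is a minimal sketch: a dense Gaussian random matrix is used (rather than a sparse one) and the 1/√m scaling is an assumption; the names are illustrative.

```python
import numpy as np

def compress_features(x, m=50, seed=0):
    """Project a high-dimensional Haar-like feature vector x onto m
    compressed measurements with a Gaussian random measurement
    matrix; each y[i] is the inner product of the i-th matrix row
    with x. A fixed seed keeps the matrix identical across frames."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((m, len(x))) / np.sqrt(m)
    return R @ np.asarray(x, dtype=float)
```

The same seeded matrix must be reused for every sample so that measurements of different image blocks are comparable.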
The question whether a candidate region is the target can be regarded as a binary classification problem with result v ∈ {0, 1}; p(v = 1) and p(v = 0) respectively denote the probabilities that the candidate region is and is not the target, both taken as 0.5. The conditional distribution p(y_i | v = 1) is assumed to follow the Gaussian N(μ_1, σ_1) and the conditional distribution p(y_i | v = 0) the Gaussian N(μ_0, σ_0). After m positive and negative samples are obtained, the confidence score of a sample can be computed.

Since the target's appearance features do not mutate between frames, the corresponding score does not undergo inter-frame mutation either; the variation of the score therefore likewise follows a Gaussian distribution, and the EM algorithm is used to update the mean and variance of the target score after each tracked frame. Taking the current tracking result obtained by SURF feature point matching as the sample to be judged, the score H_T(y) of the currently tracked image rectangle is computed and the tracking state of the target is judged accordingly: Drift ∈ {0, 1}, where 1 and 0 respectively indicate the presence and absence of tracking drift, and thred_σT is a predefined threshold constant that may be set to 2.4-3.
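The drift decision on the score H_T(y) can be sketched as follows; `thred` defaults to a value in the stated 2.4-3 range, and the names are illustrative.

```python
def drift_flag(score, mu_h, sigma_h, thred=2.7):
    """Declare tracking drift (return 1) when the current score
    deviates from its running Gaussian N(mu_h, sigma_h) by more than
    thred standard deviations; otherwise return 0."""
    return 1 if abs(score - mu_h) > thred * sigma_h else 0
```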
Present frame arrival before, it is known that feature point set be combined into Pgt-1, include target signature point setAnd target
Neighborhood background feature point setRespectively at the feature point set PgD detected in present frametIt is matched, there is Partial Feature point energy
It is enough matched, the target feature point matched isThe background characteristics point matched isSeparately
Have part could not matched characteristic point, be expressed as failing matched target signature point setWith fail to match
Background characteristics point set
Fig. 6 shows a schematic diagram of the tracking state analysis performed on the feature point matching situation. By analyzing the spatial distribution of the matched target and background feature point sets, a preliminary analysis of the current tracking situation can be made. As shown in Fig. 6(a), the matched target feature point set and the matched background feature point set are both non-empty, and each lies in its own region: this is normal tracking. In Fig. 6(b), both matched sets are non-empty, but some points of the matched background feature point set lie inside the target region of the current frame; in this case the target may be at least partially occluded. In Fig. 6(c), the matched target feature point set is empty while the matched background feature point set is not, i.e., no feature point belonging to the target is matched successfully; this case usually corresponds to tracking loss or complete occlusion of the target. In Fig. 6(d), both matched sets are empty, i.e., no feature point of the previous frame is matched; this case corresponds to tracking loss.
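The four cases of Fig. 6 can be expressed as a small decision function. The three arguments are hypothetical names standing for the matched target points, the matched background points, and those matched background points that fall inside the current target rectangle:

```python
def analyze_tracking_state(matched_target, matched_bg, bg_points_in_target):
    """Classify the tracking situation following cases (a)-(d) of Fig. 6."""
    if not matched_target and not matched_bg:
        return "lost"                      # (d) nothing from the previous frame matches
    if not matched_target:
        return "lost_or_fully_occluded"    # (c) only background points match
    if bg_points_in_target:
        return "partial_occlusion"         # (b) background points inside the target box
    return "normal"                        # (a) both sets match in their own regions
```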
The above process may be referred to as the target tracking and localization process. As shown in Fig. 7, inter-frame matching of SURF feature points is used to compute the inter-frame displacement parameters of the target and its neighborhood background. After frame t arrives, the historical knowledge of the target's motion is first used to determine the region in which the target is likely to appear in the new frame; SURF feature points are detected in that region, and the detected points are matched respectively against the target feature point set and the background feature point set of the previous frame. To ensure that correctly matched inter-frame feature point pairs are found as far as possible, while also avoiding false matches, a cascade of several constraints may be used to gradually reject false matches from the candidate match set and finally obtain correct pairings. Specifically, according to constraints such as: the inter-frame displacement of a feature point cannot mutate; the appearance features of a feature point cannot mutate; and the inter-frame displacements of feature points belonging to the target should be consistent with the overall motion of the target, the correct matches between the feature points of the current frame and the feature point set of the previous frame are found. The inter-frame motion parameters of the target are then estimated from the matched feature points, thereby realizing target tracking.
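The constraint cascade can be sketched as follows: a descriptor-distance ratio test (appearance must not mutate), a displacement bound (inter-frame displacement must not mutate), and consistency with the median displacement (target points move with the target), applied in sequence. The thresholds and the use of a Lowe-style ratio test are illustrative assumptions:

```python
import numpy as np

def cascade_match(desc_prev, pts_prev, desc_cur, pts_cur,
                  ratio=0.8, max_disp=40.0):
    """Gradually reject false matches from the candidate set using a
    cascade of constraints; returns (prev_index, cur_index) pairs."""
    candidates = []
    for i in range(len(desc_prev)):
        dists = np.linalg.norm(desc_cur - desc_prev[i], axis=1)
        order = np.argsort(dists)
        best, second = int(order[0]), int(order[1])
        if dists[best] < ratio * dists[second]:            # appearance constraint
            disp = pts_cur[best] - pts_prev[i]
            if np.linalg.norm(disp) <= max_disp:           # displacement bound
                candidates.append((i, best, disp))
    if not candidates:
        return []
    med = np.median(np.array([c[2] for c in candidates]), axis=0)
    return [(i, j) for i, j, d in candidates
            if np.linalg.norm(d - med) <= max_disp / 2]    # motion consistency
```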
Step S212: according to the matching result, the motion estimation result and the tracking state analysis result, update the feature point sets of the target object and its neighborhood background, the appearance features of the target object and its neighborhood background, and the inter-frame motion parameters of the target object and its neighborhood background, so as to update the tracking strategy for the target object.
Fig. 8 is a schematic diagram of updating the feature point sets, the appearance features, and the inter-frame motion parameters of the target object and its neighborhood background. In the embodiment of the present invention, the detected feature points are divided into a target feature point set and a background feature point set, and target tracking is performed by matching them against the SURF feature points detected in the current frame. During tracking, owing to factors such as noise, illumination changes and background changes, the tracking model and tracking strategy need to be adjusted in time according to changes in the video in order to achieve robust tracking. In practical applications, not all feature points in the feature point set can be matched: some feature points disappear, or cannot be matched for a long time, while new feature points continually appear; the number of feature points and their matching situation change, so the feature point set needs to be updated. The appearance information corresponding to a feature point also changes over time, and the corresponding appearance model should reflect this change in time. The inter-frame motion law of the target and the neighborhood background also changes, so the corresponding motion model should likewise be updated in time.
Specifically, the step of updating the feature point sets of the target object and the neighborhood background includes: (1) classifying the feature points in the feature point set according to the matching result, to obtain multiple subsets of feature points, where the subsets include a successfully matched feature point subset and a failed-to-match feature point subset, and each of these subsets further contains feature points on the target object and feature points on the neighborhood background; (2) deleting, from the feature point set corresponding to the previous frame, those points of the failed-to-match subset whose number of successful matches over the recent frames is below a given threshold, where the recent frames are a set number of consecutive video frames preceding the previous frame; (3) adding feature points of the current frame's feature point set to the feature point set corresponding to the previous frame, according to the tracking state of the current frame; (4) updating the position coordinates of the feature points in the previous frame's feature point set to the position coordinates of the corresponding feature points in the current frame.
Fig. 9 shows a flow chart of updating the feature point sets of the target object and the neighborhood background. Before frame t arrives, the tracker has already established the feature point set Pg_{t-1}, comprising the target feature point set and the background feature point set, which are matched respectively against the feature point set PgD_t detected in frame t. After matching, the feature points in PgD_t should be classified into target feature points and background feature points and, together with Pg_{t-1}, form the new feature point set Pg_t; some feature points in Pg_{t-1} should be eliminated, and the retained feature points are merged into Pg_t after their coordinate positions are updated.
The feature point set Pg_{t-1} established at the end of frame t-1 includes the feature point set on the target and the feature point set on the target's neighborhood background. These two categories of feature point sets are matched separately; whether or not a point is matched, its category attribute is not changed. After the feature point set PgD_t detected in frame t is matched against them, the successfully matched feature points are classified as target feature points and background feature points respectively, but some feature points may still fail to be matched, denoted Pg_new_t. It is therefore necessary to determine the category of the feature points Pg_new_t that are detected in the current frame but not matched successfully. According to the position of each feature point i in the set Pg_new_t, the tracked position and range of the current target, and the tracking state, the unmatched feature point set Pg_new_t is classified into two classes, target and background. The resulting sets are then merged with the corresponding feature point sets of the previous frame, finally obtaining the feature point set Pg_t of frame t.
The feature points Pg_new_t detected in the current frame but not matched are often newly appearing feature points, and should be added to the corresponding feature point set. However, letting the feature point set grow without bound as the video progresses is infeasible; it is therefore generally necessary to analyze the inter-frame matching situation of the feature points and keep the number of feature points relatively stable.
The number of times each feature point is matched within a recent period reflects the robustness, in the recent video, of the image information of the local region corresponding to that point. The more recent matches, the more stable the image information of that local region; conversely, if a point has not been matched for a long time, the image information of its region is easily affected by factors such as noise, i.e., it is fragile. As stated above, when estimating the motion model parameters by least squares in formula (16), stable feature points should be assigned larger weights because of their higher reliability, whereas fragile feature points should be assigned smaller weights. A parameter is therefore set to express the reliability of feature point i at time t, and after each inter-frame feature point matching operation this parameter is updated for every feature point: for a matched feature point i, the reliability coefficient is increased; for an unmatched feature point i, the coefficient is decreased according to formula (23). Here Inc and Dec are constant coefficients, and the coefficient is the important basis for deleting feature points; Inc may be set to 1 and Dec to 0.5. For an unmatched feature point, if the corresponding coefficient is too small, the feature point has not appeared in the video for a long time: the local image information represented by this feature point may no longer appear in the video image owing to occlusion, non-planar rotation or similar factors, and there is almost no "evidence" that the image local information described by this feature point will appear again. When the coefficient value falls below 0, the feature point is deleted from the feature point set.
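The Inc/Dec bookkeeping can be sketched as below; the dictionary representation and the additive form of the update are assumptions standing in for formula (23):

```python
def update_reliability(weights, matched_ids, inc=1.0, dec=0.5):
    """Raise the reliability coefficient of matched points by Inc, lower
    unmatched ones by Dec, and delete points whose coefficient drops
    below 0 from the feature point set."""
    survivors = {}
    for fid, w in weights.items():
        w = w + inc if fid in matched_ids else w - dec
        if w >= 0:
            survivors[fid] = w
    return survivors
```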
When the feature point set PgD_t detected in the current frame is matched against the feature point set Pg_{t-1}, some feature points Pg_new_t fail to match. These are the newly added feature points; according to whether their positions lie in the target region or the background region, and in combination with the current tracking state — normal tracking, suspected partial occlusion, or tracking loss (complete occlusion) — they are added to the background feature point set or the target feature point set respectively.
(a) Classification of newly added feature points under the normal tracking condition: given the currently tracked target location and range, if a feature point lies within the target range it is added to the target feature point set; otherwise it is classified into the background feature point set.
(b) Classification of newly added feature points under the partial occlusion condition: as shown in Fig. 6, under partial occlusion some of the matched background feature points appear inside the target range of the current frame. The feature points within the target range of the current frame that can be matched with the previous frame therefore include both target feature points and background feature points, so a point cannot simply be added to the target feature point set merely because it lies within the target range. In this case a nearest-neighbour algorithm may be used to classify the feature points that newly appear in the target range and are not matched: such a feature point i is classified into the class whose feature point set is nearest to it in spatial distance.
Here the function G_dis(i, Pg) denotes the minimum distance, in image-space position, from feature point i to the feature points in the set Pg. Feature points that newly appear in the background area are all classified into the background feature point set.
(c) Classification of newly added feature points under the tracking loss (complete occlusion) condition: in this case the set of feature points detected in the current frame that can be matched with the target feature points of the previous frame is empty; all feature points matched with the previous frame belong entirely to the background, and all newly appearing feature points are likewise classified into the background feature point set.
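The nearest-neighbour rule of case (b), with G_dis as the minimum image-space distance from a point to a set, can be sketched as:

```python
import math

def g_dis(pt, point_set):
    """G_dis(i, Pg): minimum image-space distance from pt to the set."""
    return min(math.dist(pt, q) for q in point_set)

def classify_new_point(pt, target_pts, background_pts):
    """Assign an unmatched new point inside the target range to the class
    (target or background) whose feature points are spatially nearest."""
    if g_dis(pt, target_pts) <= g_dis(pt, background_pts):
        return "target"
    return "background"
```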
Each feature point newly appearing in the current frame is assigned a corresponding initial reliability value Initial_M, which may be set to 1.
The coordinate positions of the previous frame's feature point set Pg_{t-1} are updated to the coordinates in the current frame. As stated above, the set can be divided into the target and background feature points that were matched and those that could not be matched with the current frame. The positions of the feature points that could be matched are set to the positions of the matched feature points in the current frame; the reliability coefficients of the feature points that failed to match are decreased according to formula (23), and some feature points are eliminated because their coefficient values fall below the prescribed threshold after being decreased. However, some feature points that failed to match are still not eliminated; the coordinate positions of these points in the new frame are updated according to the motion equation estimated by formula (15). The set Pg_{t-1}, with the eliminated points removed and the coordinate positions updated, is merged with the current frame's unmatched feature point sets to obtain the new feature point set Pg_t.
The step of updating the appearance features of the target object and the neighborhood background includes: updating the mean and variance of the Gaussian component of each feature point according to the feature descriptor vectors, scale factor feature information, and colour, texture and edge vectors of the feature points in the successfully matched subset.
As stated above, the embodiment of the present invention uses a Gaussian distribution model to describe the process by which the appearance of a feature point changes over time; the model is described by a mean μ and a variance σ. Model initialization consists of assigning initial values to the mean and variance corresponding to the feature vector, and model updating consists of updating the corresponding mean and variance of the appearance model according to the matching situation of the feature points. The inter-frame variation of appearance features is caused by factors such as additive noise and illumination changes. In practical applications, within a small target image range, experimental analysis suggests that the variation of additive noise at different locations is consistent within a period of time, i.e., the noise variances at different image positions can be approximately considered identical or nearly identical. We therefore approximately consider that the variations of the feature vectors corresponding to all the feature points detected within the target and its neighborhood in the video frame obey Gaussian distributions with the same variance. The initialization and update strategies of the Gaussian model below are based on this assumption: the feature vectors of feature points within the target range have the same variance value, and likewise the feature vectors of feature points within the neighborhood background range have the same variance value.

When the first frame arrives, or when a new feature point is detected, the mean of the newly detected feature point model is initialized, as in formula (4), to the feature vector of the detected feature point. At the first frame, the variance of each feature vector of the appearance model can be initialized to a larger initial value, such as 0.9. For a feature point newly detected during tracking, since the feature vectors of different feature points have the same variance, the mean of the corresponding appearance model is initialized to the detected feature vector of that feature point, and its variance is initialized to the variance value of the corresponding feature vectors of the current target or background feature points.
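The initialization rule may be sketched as follows; representing a per-point Gaussian component as a dictionary is an assumption made for illustration:

```python
import numpy as np

FIRST_FRAME_VAR = 0.9  # larger initial variance at the first frame

def init_gaussian_model(feature_vec, shared_var=None):
    """Mean = the detected feature vector (formula (4)); variance = the
    shared variance of the current target/background class, or the large
    first-frame value when no class variance exists yet."""
    return {"mu": np.asarray(feature_vec, dtype=float),
            "sigma2": FIRST_FRAME_VAR if shared_var is None else shared_var}
```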
After the appearance model is initialized, inter-frame matching and tracking state analysis are performed on the SURF feature points in each new image frame, and on this basis the Gaussian model of the feature vectors is updated. An online EM approximation method based on autoregressive filtering may be used to train the model. At time t, for feature vector j, the mean and variance of the Gaussian component corresponding to an unmatched feature point remain unchanged, while the mean and variance of a matched Gaussian component are updated according to the new observation. Here the parameter i denotes the index of a matched feature point and N denotes the total number of matched feature points; the computed variance is thus the average variance of the feature vectors of all matched feature points. The parameters η_μ and η_σ are the learning factors for the mean and variance updates, typically distributed between 0 and 1; they determine the speed at which the mean and variance of the Gaussian distribution change over time, so the update process of the mean and variance of the Gaussian distribution can be regarded as the result of causal low-pass filtering of the previous parameters. Usually, when the model is first being built, it should be established and converge as early as possible, so a larger learning factor is selected so that the model can be set up quickly. After this, the model should be more stable, ensuring that the previous image data has a certain influence on the model; the model so built can reflect the "feature vector" variation history within a certain period. At this stage a smaller learning factor should be selected, to improve the robustness of the model to noise.
The learning parameter η_μ of the model mean, and similarly the update parameter η_σ of the model variance, are therefore set inversely to the match counts: Ck_μ is the count of how many times each feature point has been matched, and Ck_σ is the count of image frames in which the feature point has been matched. In the model initialization stage, Ck_μ and Ck_σ are small and the model converges quickly. After the first match, the setting of parameter η_μ causes the model mean to be set to the current observation; after the second match, the setting of parameter η_σ causes the model variance to be set to the difference between the feature vectors at the first and second matches. As time passes, Ck_μ and Ck_σ grow larger and the contribution of the current observation to the model update gradually decreases; but if the learning factors were to approach zero, the model would become abnormally stable and unable to reflect the normal variation of the image information in time. Minimum values thrd_μ and thrd_σ are therefore provided for the update coefficients, and thrd_μ and thrd_σ may be set to 0.2.
In addition, if the variance of a Gaussian component is too small, the component becomes overly sensitive to noise during inter-frame feature point matching, and feature points that should match cannot be matched correctly. A lower limit, such as T_σ = 0.05, is therefore imposed on the variance of all Gaussian components to enhance the robustness of the system.
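The update of a matched component can be sketched as a causal low-pass filter with a floored learning factor and a floored variance. The 1/Ck decay schedule is an assumption reconstructed from the description of Ck_μ, Ck_σ, the floors thrd_μ = thrd_σ = 0.2, and T_σ = 0.05:

```python
import numpy as np

THRD = 0.2        # thrd_mu / thrd_sigma: floor for the learning factors
T_SIGMA = 0.05    # lower limit on any Gaussian component's variance

def update_appearance(model, observation, match_count):
    """Update the mean and variance of a matched Gaussian component from
    the new observation; unmatched components are simply left untouched."""
    eta = max(1.0 / match_count, THRD)
    obs = np.asarray(observation, dtype=float)
    model["mu"] = (1 - eta) * model["mu"] + eta * obs
    diff2 = float(np.mean((obs - model["mu"]) ** 2))
    model["sigma2"] = max((1 - eta) * model["sigma2"] + eta * diff2, T_SIGMA)
    return model
```

With match_count = 1 the learning factor is 1, so the first match pins the mean to the observation, as described above.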
The step of updating the inter-frame motion parameters of the target object and the neighborhood background includes: updating the mean and variance of the motion parameters according to the estimated value of the inter-frame motion transform parameters between the current frame and the previous frame.
To describe the motion of the target object within the recent period, the current inter-frame motion transform parameters Par_t = (ux_t, uy_t, ρ_t, θ_t), estimated from feature point matching in the minimum mean-square-error sense alone, are not enough; a corresponding motion model needs to be established for the motion of the target and its neighborhood. This inter-frame motion process can likewise be described with a Gaussian distribution: since the inter-frame deformation of the target is assumed to be very small, the motion of the feature points is highly consistent with the motion of the target, and the inter-frame motion of each feature point can be approximately considered to obey identical motion parameters. To reduce computational complexity, the motion models of the individual feature points in the target and background feature point sets can be simplified using Gaussian motion models of the target and the neighborhood background respectively, i.e., motion transform models are established separately for the target and for its neighborhood background region.
The same online EM approximation method is used to update this model. At time t, according to the estimated value of the current inter-frame motion transform parameters Par_t = {m_t}, m ∈ (ux, uy, ρ, θ), the mean and variance of each motion parameter m are updated, with the learning factor η_1 of the model update set similarly; here Ck_m is likewise the count of image frames in which feature points have been matched. The model mean parameters are initialized to (0, 0, 1, 0), i.e., the target and its neighborhood background are initially considered static, without any change of spatial position. After the first frame arrives, the mean of the model is initialized to the motion parameters Par_t = (ux_t, uy_t, ρ_t, θ_t) detected for the current frame; after the second frame arrives, the variance of the model is initialized to the difference between the transform parameters detected in the first and second frames. In the initial stage Ck_m is small, so the model converges as early as possible; thereafter η_1 remains constant, so thrd_m can be set to 0.1, allowing the model to be updated at a stable speed. Likewise, if the inter-frame motion is highly uniform within a period of time, the variance update formula can make the variance of the Gaussian component very small; in that case, once the inter-frame motion varies even slightly, feature points that should match during inter-frame matching cannot be matched correctly. A lower limit T_σ = {1, 1, 0.01, 0.01} is therefore likewise imposed on the variance to enhance the robustness of the system.
During tracking, in addition to normal tracking, different tracking states such as drift, loss and occlusion inevitably occur; for different tracking states, correspondingly different tracking strategies should be adopted to ensure the robustness and stability of the algorithm. The target model, including the appearance model and the motion model, and the estimation of the range in which the target is likely to appear in the next frame, are important tracking strategies and key factors affecting the robustness and stability of the tracking algorithm. In the case of normal tracking, the target appearance model and motion model do not mutate, so the model update and the next-frame target range estimation proceed according to the model update methods already described above. Under drift, loss or occlusion conditions, however, the target position and range cannot be determined accurately, or the target cannot be observed accurately; under such abnormal tracking states, the tracking strategies, such as the target appearance model and motion model, should be adjusted in time.
Therefore, the step of updating the tracking strategy of the target object may specifically be accomplished in the following manner:
(1) Target occlusion handling. When the target is partially or completely occluded, the observation of the target appearance feature information is affected. In the feature point set, the parameters of the feature models corresponding to feature points that can be matched are updated according to the matched observations in the current frame by formulas (26) and (27); the model parameters (mean and variance) corresponding to feature points that cannot be matched remain unchanged, and the corresponding importance parameters also remain unchanged. Under complete occlusion, the appearance feature model parameters of the target are unchanged; in particular, the importance parameters corresponding to the target feature points remain unchanged. Under partial occlusion, the position and range of the target can still be located through the matched local feature points. Under complete occlusion or tracking loss, no feature point on the target can be matched, the target cannot be observed, and the motion model transform parameters Par_t = (ux_t, uy_t, ρ_t, θ_t) naturally cannot be observed either, so the target cannot be accurately located. In this case the tracker estimates the position and range of the target in the current frame from the prior knowledge of the target's motion in the video frames, assuming that the model keeps moving at a constant speed, with the mean parameters of the motion model remaining unchanged. Under partial occlusion, the next-frame target detection range is determined by formulas (5) and (6) from the target position and range located through the matched local feature points. Under complete occlusion or tracking loss, a larger thrdU value is assigned in formulas (5) and (6), so that SURF feature points can be detected in a larger range in order to recover the target.
(2) Target drift handling. When tracking drift occurs, the position and range of the target determined at that moment are not very accurate. If the appearance model and motion model were updated entirely according to the current tracking result, a large error might be introduced into the model, causing errors to accumulate gradually, affecting subsequent tracking results and making the drift worse and worse; this is also the reason why most tracking drift gradually develops into tracking failure. Therefore, when it is judged that tracking drift has occurred, the updating of the appearance and motion model parameters is usually stopped, and the state of the target in the current frame is computed from the historical experience represented by its motion model. The feature point detection range of the next frame is still determined according to formulas (5) and (6), but the parameter thrdU should take a larger value. When target tracking is judged to be correct, feature point detection can be carried out in a relatively small range in the next frame; otherwise, feature point detection is carried out in a larger range.
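The state-dependent choice of the next-frame detection range can be sketched as below. The multiplicative window expansion stands in for formulas (5) and (6), whose exact form is not reproduced here, and the thrdU values are illustrative:

```python
def next_search_range(box, state, thrd_u_normal=1.5, thrd_u_abnormal=3.0):
    """box = (cx, cy, w, h) of the last known target rectangle; a larger
    thrdU under drift / loss / full occlusion widens the SURF detection
    window so the target can be re-acquired."""
    cx, cy, w, h = box
    thrd_u = thrd_u_normal if state == "normal" else thrd_u_abnormal
    return (cx, cy, w * thrd_u, h * thrd_u)
```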
In the video target tracking method provided by the embodiment of the present invention, after the tracking parameters are initialized, the feature point set detected in the current frame within the set image range is screened according to preset screening conditions; the screened feature point set is then matched respectively against the feature point sets of the target object and the neighborhood background corresponding to the previous frame; motion estimation is then performed on the target object according to the screened feature points, and tracking state analysis is performed on the target object in the current frame according to the distances between the screened feature points and the center of the target object and the appearance features of the target object; finally, according to the matching result, the motion estimation result and the tracking state analysis result, the feature point sets of the target object and the neighborhood background, the appearance features of the target object and the neighborhood background, and the inter-frame motion parameters of the target object and the neighborhood background are updated, so as to update the tracking strategy of the target object. In this approach, the tracking result can not only reflect the position of the target object in time, but can also accurately reflect the range and rotation angle of the target object, so that the tracking of the target object across video frames has good robustness and stability while the computational complexity is relatively low, achieving a balance between tracking robustness and computing speed.
Target tracking is a key core technology of intelligent video applications such as video behaviour analysis and human-computer interaction. Local features, as one kind of image feature, have natural robustness to partial occlusion of the target, and stable local features can serve as the basis for robust tracking of the target. SURF feature points are an improvement on SIFT features aimed at fast computation: through optimization they greatly increase computing speed while retaining the advantages of SIFT features, namely accurate localization, insensitivity to illumination changes, and rotation invariance. Stable local extremum points in the image are obtained by SURF feature point detection and serve as the basis for accurately locating the target, thereby realizing efficient video target tracking.
On this basis, the embodiment of the present invention further provides another video target tracking method, as shown in Fig. 10. This method may be called a video target tracking method based on local feature point matching, and consists of the following steps: 1. an initialization phase, in which the models of the target and its neighborhood background are established; 2. locating the target in a new frame: the state of the target in the current frame (position, range and rotation angle) is obtained by inter-frame feature point matching, yielding the tracking result; 3. updating the models according to the tracking result. The method is thus divided into an initialization phase and a target tracking and model updating phase.
In the initialization phase, the initial state of the target is initialized, i.e., the position, range and angle of the target in the current frame; a rectangular frame is used to represent the location and range of the target, and the range of its neighborhood background region is further initialized. Then, on this basis, the SURF feature points of the target and its neighborhood are detected, and the models of the target and its neighborhood background are initialized and established respectively according to the detected feature points, yielding the initial models of the target and its neighborhood background region. It may be considered that the inter-frame motion of the target can be described by translation, rotation around the target's geometric center, and scaling; the inter-frame motion parameters of the target and its neighborhood background are initialized accordingly.
In the target tracking and localization stage, after a new frame arrives, SURF feature points are detected in a certain region of the new frame image according to the historical knowledge of the target's motion, and are matched against the established target model and neighborhood background model to find correctly matched feature point pairs; from these, the inter-frame motion parameters of the target and its neighborhood background are computed, thereby determining the position, range and rotation angle of the target in the new frame. On this basis the currently obtained target state is analyzed to judge whether tracking loss, drift or similar situations have occurred, yielding the final tracking result. In the model update stage, the models of the target and its neighborhood background are updated with different strategies according to the tracking result and the analysis of the tracking state (whether tracking is accurate, drifting, lost, or occluded).
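The three phases can be arranged into a single loop. Here `tracker` is a hypothetical object bundling the initialization, localization and state-dependent model-update steps described above:

```python
def track_video(frames, init_box, tracker):
    """Phase 1: initialize the models on the first frame; then, per frame,
    phase 2 (locate + tracking state analysis) and phase 3 (model update
    with a strategy chosen according to the tracking state)."""
    tracker.initialize(frames[0], init_box)
    results = []
    for frame in frames[1:]:
        state = tracker.locate(frame)   # SURF matching, motion estimation, state analysis
        tracker.update_models(state)    # update strategy depends on the tracking state
        results.append(state)
    return results
```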
The above video target tracking method improves the robustness and stability of tracking, and has strong resistance to occlusion, noise and cluttered backgrounds. The tracking result can not only reflect the position of the target in time, but can also reflect the target's imaging range and rotation changes. Using the feature point matching method for tracking avoids searching for the best likelihood of the target model, and reduces computational complexity.
Corresponding to the above method embodiment, Figure 11 shows a structural schematic diagram of a video target tracking device. The device includes: an initialization module 110 for initializing tracking parameters, the tracking parameters including at least the position and range of the target object, the inter-frame motion parameters of the target object and the neighborhood background, the feature point sets of the target object and the neighborhood background, and several of the appearance features of the target object and the neighborhood background; a screening module 111 for detecting the feature point set in the current frame within a set image range and screening the feature point set according to preset screening conditions, the feature point set including feature points and feature vectors corresponding to the feature points; a feature point matching module 112 for matching the screened feature point set against the feature point sets of the target object and the neighborhood background corresponding to the previous frame, respectively; a motion estimation module 113 for performing motion estimation on the target object according to the screened feature points; a tracking state analysis module 114 for performing tracking state analysis on the target object in the current frame according to the distances between the screened feature points and the center position of the target object and the appearance features of the target object; and an update module 115 for updating the feature point sets of the target object and the neighborhood background, the appearance features of the target object and the neighborhood background, and the inter-frame motion parameters of the target object and the neighborhood background according to the matching result, the motion estimation result and the tracking state analysis result, so as to update the tracking strategy of the target object.
The initialization module described above is further configured to: extract the appearance features of the target object and the neighborhood background in the current frame, the appearance features including at least several of a feature descriptor vector, scale factor feature information, color features, texture features and edge features; determine the center position of the target object and the length and width of the target rectangular box; initialize the inter-frame motion parameters of the target object and the neighborhood background to the difference of the corresponding transformation parameters between the current frame and the previous frame; initialize the feature point set of the target object to the feature point set detected within the rectangular box of the target object; initialize the feature point set of the neighborhood background to the feature point set of the neighborhood background detected in the adjacent region within a preset range outside the target object; and initialize the appearance features of the target object and the neighborhood background to the feature vectors of the extracted appearance features.
This embodiment further provides a video target tracking implementation device corresponding to the above method embodiment. Figure 12 is a structural schematic diagram of the video target tracking implementation device. The device includes a memory 100 and a processor 101, where the memory 100 is used to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the above video target tracking method, which may include one or more of the methods described above.
Further, the video target tracking implementation device shown in Figure 12 also includes a bus 102 and a communication interface 103; the processor 101, the communication interface 103 and the memory 100 are connected by the bus 102. The memory 100 may include high-speed random access memory (RAM) and may also include non-volatile memory, for example at least one disk memory. The communication connection between this system network element and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), using the Internet, a wide area network, a local network, a metropolitan area network, etc. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, etc., and may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in Figure 12, but this does not mean that there is only one bus or only one type of bus.
The processor 101 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 101 or by instructions in the form of software. The processor 101 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present disclosure. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present disclosure may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in this field, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and completes the steps of the methods of the foregoing embodiments in combination with its hardware.
An embodiment of the present invention further provides a machine-readable storage medium storing machine-executable instructions. When called and executed by a processor, the machine-executable instructions cause the processor to implement the above video target tracking method; for specific implementation, refer to the method embodiments, which are not repeated here.
The video target tracking method, device and implementation device provided by the embodiments of the present invention propose a target tracking and localization system based on SURF inter-frame matching, including multi-feature information extraction and adaptive fusion technology and feature information update technology, where the feature information update technology includes feature point set update, appearance model update, motion model update and tracking strategy adjustment. This has the following advantages: (1) Under the SURF feature point detection and inter-frame matching framework, multi-feature fusion, target and neighborhood background modeling, target tracking and localization, model update, tracking state detection and other key links are thoroughly considered and organically combined into a complete tracking system, realizing robust and continuous tracking of a specified target in video. (2) The tracker designed in the present invention accurately estimates the motion parameters of the target in the current frame according to the SURF feature-point inter-frame matching, accurately estimating the displacement of the target as well as its range and rotation angle, while avoiding the complex search process of traditional tracking algorithms and reducing computational complexity. (3) Through the combination of multi-feature fusion, feature point classification, the design of a cascaded per-class feature point matching process, tracking state analysis, model update and other links, the robustness of the tracker is improved, so that the tracker can achieve robust and stable tracking in complex scenes such as occlusion, cluttered background and low signal-to-noise ratio.
The computer program product of the video target tracking method, device and system provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the methods described in the foregoing method embodiments. For specific implementation, refer to the method embodiments, which are not repeated here.
If the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Finally, it should be noted that the embodiments described above are only specific implementations of the present invention, used to illustrate the technical solution of the present invention rather than to limit it, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that, within the technical scope disclosed by the present invention, any person skilled in the art can still modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or make equivalent replacements of some of the technical features; and these modifications, variations or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A video target tracking method, characterized by comprising:
initializing tracking parameters, the tracking parameters including at least the position and range of a target object, the inter-frame motion parameters of the target object and a neighborhood background, the feature point sets of the target object and the neighborhood background, and several of the appearance features of the target object and the neighborhood background;
detecting the feature point set in the current frame within a set image range, and screening the feature point set according to preset screening conditions, the feature point set including feature points and feature vectors corresponding to the feature points;
matching the screened feature point set against the feature point sets of the target object and the neighborhood background corresponding to the previous frame, respectively;
performing motion estimation on the target object according to the screened feature points;
performing tracking state analysis on the target object in the current frame according to the distances between the screened feature points and the center position of the target object and the appearance features of the target object;
updating the feature point sets of the target object and the neighborhood background, the appearance features of the target object and the neighborhood background, and the inter-frame motion parameters of the target object and the neighborhood background according to the matching result, the motion estimation result and the tracking state analysis result, so as to update the tracking strategy of the target object.
2. The method according to claim 1, characterized in that the step of initializing the tracking parameters includes:
extracting, in the current frame, the appearance features of the target object and the neighborhood background, the appearance features including at least several of a feature descriptor vector, scale factor feature information, color features, texture features and edge features;
determining the center position of the target object and the length and width of the target rectangular box;
initializing the inter-frame motion parameters of the target object and the neighborhood background to the difference of the corresponding transformation parameters between the current frame and the previous frame;
initializing the feature point set of the target object to the feature point set detected within the rectangular box of the target object; initializing the feature point set of the neighborhood background to the feature point set of the neighborhood background detected in the adjacent region within a preset range outside the target object;
initializing the appearance features of the target object and the neighborhood background to the feature vectors of the extracted appearance features.
3. The method according to claim 1, characterized in that the step of detecting the feature point set in the current frame within the set image range and screening the feature point set according to the preset screening conditions includes:
determining the upper-left and lower-right corner coordinates of the image rectangular box of the image range to be detected;
performing feature point detection within the image rectangular box to obtain the coordinates of the feature points;
calculating the trace of the Hessian matrix of each feature point and the feature vector corresponding to the feature point, the feature vector including a feature descriptor vector, scale factor feature information, and color, texture and edge vectors;
screening the feature points in the feature point set according to the following screening conditions:
the trace of the Hessian matrix of the feature point has the same sign as that of the corresponding feature point in the previous video frame;
the distance between the feature point and the corresponding feature point in the previous video frame is less than a preset distance threshold;
the Euclidean distance between the feature vector of the feature point and the corresponding feature vector of the feature point in the previous video frame meets a preset feature vector threshold;
the displacement distance, displacement direction and relative position relation between the feature point and the corresponding feature point in the previous video frame meet a preset displacement consistency threshold;
when multiple feature points form a many-to-one matching relation with a feature point of the previous video frame, the feature point with the minimum Euclidean distance is selected from the multiple feature points.
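As an illustration of the screening conditions above (field names and threshold values are assumptions, and the displacement-consistency check is omitted for brevity), the per-point screening could be sketched as:

```python
import numpy as np

def screen_candidates(cands, prev_pt, dist_th=20.0, desc_th=0.5):
    """Screen candidate feature points of the current frame against one
    feature point of the previous frame. Each point is a dict with 'xy'
    (2-vector), 'desc' (descriptor vector) and 'sign' (sign of the
    Hessian trace)."""
    kept = []
    for cand in cands:
        if cand['sign'] != prev_pt['sign']:
            continue  # Hessian trace signs must agree (bright vs. dark blob)
        if np.linalg.norm(cand['xy'] - prev_pt['xy']) >= dist_th:
            continue  # spatial distance threshold
        if np.linalg.norm(cand['desc'] - prev_pt['desc']) >= desc_th:
            continue  # descriptor Euclidean distance threshold
        kept.append(cand)
    if len(kept) > 1:
        # many-to-one matches: keep only the closest descriptor
        kept = [min(kept, key=lambda c: np.linalg.norm(c['desc'] - prev_pt['desc']))]
    return kept
```

Checking the Hessian trace sign first is cheap and discards bright-on-dark versus dark-on-bright mismatches before any descriptor distance is computed.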
4. The method according to claim 1, characterized in that the step of performing tracking state analysis on the target object in the current frame according to the distances between the screened feature points and the center position of the target object and the appearance features of the target object includes:
detecting and rejecting wrongly classified feature points according to the distances between the feature points and the center position of the target object, to generate a first feature point set;
analyzing, according to the appearance features of each feature point in the first feature point set, whether tracking drift has occurred for the target object in the current video frame.
5. The method according to claim 1, characterized in that the step of updating the feature point sets of the target object and the neighborhood background includes:
classifying the feature points in the feature point set according to the matching result to obtain multiple feature point subsets, where the subsets include a subset of successfully matched feature points and a subset of feature points that failed to match; the subset of successfully matched feature points further includes feature points on the target object and feature points in the neighborhood background, and the subset of feature points that failed to match likewise includes feature points on the target object and feature points in the neighborhood background;
deleting, from the feature point set corresponding to the previous frame, those feature points of the failed-match subset whose number of unsuccessful matches in the recent frames is higher than a set threshold, where the recent frames are a set number of consecutive video frames before the previous frame;
adding, according to the tracking state of the current frame, feature points of the feature point set of the current frame into the feature point set corresponding to the previous frame;
updating the position coordinates of the feature points in the feature point set corresponding to the previous frame to the position coordinates of the corresponding feature points in the current frame.
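The pruning and position-refresh bookkeeping described in this claim could be sketched as follows (field names, identifiers and the failure threshold are illustrative assumptions):

```python
def update_feature_set(prev_set, matches, fail_thresh=6):
    """Update the previous frame's feature point set: matched points take
    the coordinates of their current-frame counterparts and reset their
    failure count; unmatched points accumulate failures and are dropped
    once the count exceeds fail_thresh.
    prev_set: list of dicts with 'xy' and 'fails' (recent failure count);
    matches: dict mapping point index -> new (x, y) for successful matches."""
    updated = []
    for pid, pt in enumerate(prev_set):
        if pid in matches:
            pt = dict(pt, xy=matches[pid], fails=0)  # refresh position
        else:
            pt = dict(pt, fails=pt['fails'] + 1)     # one more failed frame
            if pt['fails'] > fail_thresh:
                continue                             # prune stale point
        updated.append(pt)
    return updated
```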
6. The method according to claim 5, characterized in that the step of updating the appearance features of the target object and the neighborhood background includes:
updating the mean and variance of the Gaussian component of each feature point according to the feature descriptor vector, scale factor feature information, and color, texture and edge vectors of the feature points in the subset of successfully matched feature points.
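A common way to realize such a mean/variance update is an exponentially weighted running update of the Gaussian component from each newly matched feature vector. The learning rate `rho` below is an illustrative choice; the claim only states that mean and variance are updated:

```python
import numpy as np

def update_gaussian(mean, var, x, rho=0.05):
    """Running update of a Gaussian appearance component from a newly
    matched feature vector x (element-wise over the vector)."""
    new_mean = (1 - rho) * mean + rho * x
    new_var = (1 - rho) * var + rho * (x - new_mean) ** 2
    return new_mean, new_var
```

A small `rho` makes the appearance model adapt slowly, which resists contamination during brief occlusions at the cost of slower adaptation to genuine appearance change.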
7. The method according to claim 1, characterized in that the step of updating the inter-frame motion parameters of the target object and the neighborhood background includes:
updating the mean and variance of the motion parameters according to the estimated values of the inter-frame motion transformation parameters of the current frame and the previous frame.
8. A video target tracking device, characterized by comprising:
an initialization module for initializing tracking parameters, the tracking parameters including at least the position and range of a target object, the inter-frame motion parameters of the target object and a neighborhood background, the feature point sets of the target object and the neighborhood background, and several of the appearance features of the target object and the neighborhood background;
a screening module for detecting the feature point set in the current frame within a set image range and screening the feature point set according to preset screening conditions, the feature point set including feature points and feature vectors corresponding to the feature points;
a feature point matching module for matching the screened feature point set against the feature point sets of the target object and the neighborhood background corresponding to the previous frame, respectively;
a motion estimation module for performing motion estimation on the target object according to the screened feature points;
a tracking state analysis module for performing tracking state analysis on the target object in the current frame according to the distances between the screened feature points and the center position of the target object and the appearance features of the target object;
an update module for updating the feature point sets of the target object and the neighborhood background, the appearance features of the target object and the neighborhood background, and the inter-frame motion parameters of the target object and the neighborhood background according to the matching result, the motion estimation result and the tracking state analysis result, so as to update the tracking strategy of the target object.
9. The device according to claim 8, characterized in that the initialization module is further configured to:
extract, in the current frame, the appearance features of the target object and the neighborhood background, the appearance features including at least several of a feature descriptor vector, scale factor feature information, color features, texture features and edge features;
determine the center position of the target object and the length and width of the target rectangular box;
initialize the inter-frame motion parameters of the target object and the neighborhood background to the difference of the corresponding transformation parameters between the current frame and the previous frame;
initialize the feature point set of the target object to the feature point set detected within the rectangular box of the target object; initialize the feature point set of the neighborhood background to the feature point set of the neighborhood background detected in the adjacent region within a preset range outside the target object;
initialize the appearance features of the target object and the neighborhood background to the feature vectors of the extracted appearance features.
10. A video target tracking implementation device, characterized by comprising a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810249416.5A CN108470354B (en) | 2018-03-23 | 2018-03-23 | Video target tracking method and device and implementation device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108470354A true CN108470354A (en) | 2018-08-31 |
CN108470354B CN108470354B (en) | 2021-04-27 |
Family
ID=63264696
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810249416.5A Expired - Fee Related CN108470354B (en) | 2018-03-23 | 2018-03-23 | Video target tracking method and device and implementation device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108470354B (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108898615A (en) * | 2018-06-15 | 2018-11-27 | 阿依瓦(北京)技术有限公司 | Block matching method for high-frequency information image |
CN109255337A (en) * | 2018-09-29 | 2019-01-22 | 北京字节跳动网络技术有限公司 | Face critical point detection method and apparatus |
CN109323697A (en) * | 2018-11-13 | 2019-02-12 | 大连理工大学 | A method of particle fast convergence when starting for Indoor Robot arbitrary point |
CN109827578A (en) * | 2019-02-25 | 2019-05-31 | 中国人民解放军军事科学院国防科技创新研究院 | Satellite relative attitude estimation method based on profile similitude |
CN110111361A (en) * | 2019-04-22 | 2019-08-09 | 湖北工业大学 | A kind of moving target detecting method based on multi-threshold self-optimizing background modeling |
CN110415275A (en) * | 2019-04-29 | 2019-11-05 | 北京佳讯飞鸿电气股份有限公司 | Point-to-point-based moving target detection and tracking method |
CN111144483A (en) * | 2019-12-26 | 2020-05-12 | 歌尔股份有限公司 | Image feature point filtering method and terminal |
CN111160266A (en) * | 2019-12-30 | 2020-05-15 | 三一重工股份有限公司 | Object tracking method and device |
CN111382309A (en) * | 2020-03-10 | 2020-07-07 | 深圳大学 | Short video recommendation method based on graph model, intelligent terminal and storage medium |
CN111652263A (en) * | 2020-03-30 | 2020-09-11 | 西北工业大学 | Self-adaptive target tracking method based on multi-filter information fusion |
CN111882583A (en) * | 2020-07-29 | 2020-11-03 | 成都英飞睿技术有限公司 | Moving target detection method, device, equipment and medium |
CN112053381A (en) * | 2020-07-13 | 2020-12-08 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN112085025A (en) * | 2019-06-14 | 2020-12-15 | 阿里巴巴集团控股有限公司 | Object segmentation method, device and equipment |
CN112184769A (en) * | 2020-09-27 | 2021-01-05 | 上海高德威智能交通系统有限公司 | Tracking abnormity identification method, device and equipment |
CN112184766A (en) * | 2020-09-21 | 2021-01-05 | 广州视源电子科技股份有限公司 | Object tracking method and device, computer equipment and storage medium |
CN112200126A (en) * | 2020-10-26 | 2021-01-08 | 上海盛奕数字科技有限公司 | Method for identifying limb shielding gesture based on artificial intelligence running |
CN112215205A (en) * | 2020-11-06 | 2021-01-12 | 腾讯科技(深圳)有限公司 | Target identification method and device, computer equipment and storage medium |
EP3798975A1 (en) * | 2019-09-29 | 2021-03-31 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for detecting subject, electronic device, and computer readable storage medium |
CN113012216A (en) * | 2019-12-20 | 2021-06-22 | 舜宇光学(浙江)研究院有限公司 | Feature classification optimization method, SLAM positioning method, system thereof and electronic equipment |
CN113450578A (en) * | 2021-06-25 | 2021-09-28 | 北京市商汤科技开发有限公司 | Traffic violation event evidence obtaining method, device, equipment and system |
CN113822279A (en) * | 2021-11-22 | 2021-12-21 | 中国空气动力研究与发展中心计算空气动力研究所 | Infrared target detection method, device, equipment and medium based on multi-feature fusion |
US11538177B2 (en) * | 2018-12-28 | 2022-12-27 | Tsinghua University | Video stitching method and device |
US11915431B2 (en) * | 2015-12-30 | 2024-02-27 | Texas Instruments Incorporated | Feature point identification in sparse optical flow based tracking in a computer vision system |
CN118015501A (en) * | 2024-04-08 | 2024-05-10 | 中国人民解放军陆军步兵学院 | Medium-low altitude low-speed target identification method based on computer vision |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050114103A1 (en) * | 2003-09-09 | 2005-05-26 | Bohyung Han | System and method for sequential kernel density approximation through mode propagation |
US20110262054A1 (en) * | 2006-06-26 | 2011-10-27 | General Electric Company | System and method for iterative image reconstruction |
CN102999920A (en) * | 2012-10-25 | 2013-03-27 | 西安电子科技大学 | Target tracking method based on nearest neighbor classifier and mean shift |
CN103400395A (en) * | 2013-07-24 | 2013-11-20 | 佳都新太科技股份有限公司 | Light stream tracking method based on HAAR feature detection |
CN103870839A (en) * | 2014-03-06 | 2014-06-18 | 江南大学 | Online video target multi-feature tracking method |
CN103886611A (en) * | 2014-04-08 | 2014-06-25 | 西安煤航信息产业有限公司 | Image matching method suitable for automatically detecting flight quality of aerial photography |
CN103985136A (en) * | 2014-03-21 | 2014-08-13 | 南京大学 | Target tracking method based on local feature point feature flow pattern |
CN105046717A (en) * | 2015-05-25 | 2015-11-11 | 浙江师范大学 | Robust video object tracking method |
2018-03-23: Application CN201810249416.5A filed in China; granted as CN108470354B; current status: Expired - Fee Related.
Non-Patent Citations (2)
Title |
---|
HUIBIN WANG et al.: "Multiple Feature Fusion for Tracking of Moving Objects in Video Surveillance", 2008 International Conference on Computational Intelligence and Security * |
QI Yuanchen: "Research on Online Visual Object Tracking Algorithms in Complex Dynamic Scenes", China Doctoral Dissertations Full-text Database * |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11915431B2 (en) * | 2015-12-30 | 2024-02-27 | Texas Instruments Incorporated | Feature point identification in sparse optical flow based tracking in a computer vision system |
CN108898615A (en) * | 2018-06-15 | 2018-11-27 | 阿依瓦(北京)技术有限公司 | Block matching method for high-frequency information image |
CN109255337A (en) * | 2018-09-29 | 2019-01-22 | 北京字节跳动网络技术有限公司 | Face critical point detection method and apparatus |
CN109323697A (en) * | 2018-11-13 | 2019-02-12 | 大连理工大学 | A method of particle fast convergence when starting for Indoor Robot arbitrary point |
CN109323697B (en) * | 2018-11-13 | 2022-02-15 | 大连理工大学 | Method for rapidly converging particles during starting of indoor robot at any point |
US11538177B2 (en) * | 2018-12-28 | 2022-12-27 | Tsinghua University | Video stitching method and device |
CN109827578A (en) * | 2019-02-25 | 2019-05-31 | 中国人民解放军军事科学院国防科技创新研究院 | Satellite relative attitude estimation method based on profile similitude |
CN109827578B (en) * | 2019-02-25 | 2019-11-22 | 中国人民解放军军事科学院国防科技创新研究院 | Satellite relative attitude estimation method based on profile similitude |
CN110111361A (en) * | 2019-04-22 | 2019-08-09 | 湖北工业大学 | A kind of moving target detecting method based on multi-threshold self-optimizing background modeling |
CN110111361B (en) * | 2019-04-22 | 2021-05-18 | 湖北工业大学 | Moving object detection method based on multi-threshold self-optimization background modeling |
CN110415275A (en) * | 2019-04-29 | 2019-11-05 | 北京佳讯飞鸿电气股份有限公司 | Point-to-point-based moving target detection and tracking method |
CN110415275B (en) * | 2019-04-29 | 2022-05-13 | 北京佳讯飞鸿电气股份有限公司 | Point-to-point-based moving target detection and tracking method |
CN112085025A (en) * | 2019-06-14 | 2020-12-15 | 阿里巴巴集团控股有限公司 | Object segmentation method, device and equipment |
CN112085025B (en) * | 2019-06-14 | 2024-01-16 | 阿里巴巴集团控股有限公司 | Object segmentation method, device and equipment |
US11538175B2 (en) | 2019-09-29 | 2022-12-27 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for detecting subject, electronic device, and computer readable storage medium |
EP3798975A1 (en) * | 2019-09-29 | 2021-03-31 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for detecting subject, electronic device, and computer readable storage medium |
CN113012216B (en) * | 2019-12-20 | 2023-07-07 | 舜宇光学(浙江)研究院有限公司 | Feature classification optimization method, SLAM positioning method, system and electronic equipment |
CN113012216A (en) * | 2019-12-20 | 2021-06-22 | 舜宇光学(浙江)研究院有限公司 | Feature classification optimization method, SLAM positioning method, system thereof and electronic equipment |
CN111144483B (en) * | 2019-12-26 | 2023-10-17 | 歌尔股份有限公司 | Image feature point filtering method and terminal |
CN111144483A (en) * | 2019-12-26 | 2020-05-12 | 歌尔股份有限公司 | Image feature point filtering method and terminal |
CN111160266B (en) * | 2019-12-30 | 2023-04-18 | 三一重工股份有限公司 | Object tracking method and device |
CN111160266A (en) * | 2019-12-30 | 2020-05-15 | 三一重工股份有限公司 | Object tracking method and device |
CN111382309A (en) * | 2020-03-10 | 2020-07-07 | 深圳大学 | Short video recommendation method based on graph model, intelligent terminal and storage medium |
CN111382309B (en) * | 2020-03-10 | 2023-04-18 | 深圳大学 | Short video recommendation method based on graph model, intelligent terminal and storage medium |
CN111652263B (en) * | 2020-03-30 | 2021-12-28 | 西北工业大学 | Self-adaptive target tracking method based on multi-filter information fusion |
CN111652263A (en) * | 2020-03-30 | 2020-09-11 | 西北工业大学 | Self-adaptive target tracking method based on multi-filter information fusion |
CN112053381A (en) * | 2020-07-13 | 2020-12-08 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111882583B (en) * | 2020-07-29 | 2023-11-14 | 成都英飞睿技术有限公司 | Moving object detection method, device, equipment and medium |
CN111882583A (en) * | 2020-07-29 | 2020-11-03 | 成都英飞睿技术有限公司 | Moving target detection method, device, equipment and medium |
CN112184766B (en) * | 2020-09-21 | 2023-11-17 | 广州视源电子科技股份有限公司 | Object tracking method and device, computer equipment and storage medium |
CN112184766A (en) * | 2020-09-21 | 2021-01-05 | 广州视源电子科技股份有限公司 | Object tracking method and device, computer equipment and storage medium |
CN112184769A (en) * | 2020-09-27 | 2021-01-05 | 上海高德威智能交通系统有限公司 | Tracking abnormity identification method, device and equipment |
CN112184769B (en) * | 2020-09-27 | 2023-05-02 | 上海高德威智能交通系统有限公司 | Method, device and equipment for identifying tracking abnormality |
CN112200126A (en) * | 2020-10-26 | 2021-01-08 | 上海盛奕数字科技有限公司 | Method for identifying limb-occlusion gestures in artificial-intelligence-based running |
CN112215205B (en) * | 2020-11-06 | 2022-10-18 | 腾讯科技(深圳)有限公司 | Target identification method and device, computer equipment and storage medium |
CN112215205A (en) * | 2020-11-06 | 2021-01-12 | 腾讯科技(深圳)有限公司 | Target identification method and device, computer equipment and storage medium |
CN113450578B (en) * | 2021-06-25 | 2022-08-12 | 北京市商汤科技开发有限公司 | Traffic violation event evidence obtaining method, device, equipment and system |
CN113450578A (en) * | 2021-06-25 | 2021-09-28 | 北京市商汤科技开发有限公司 | Traffic violation event evidence obtaining method, device, equipment and system |
CN113822279A (en) * | 2021-11-22 | 2021-12-21 | 中国空气动力研究与发展中心计算空气动力研究所 | Infrared target detection method, device, equipment and medium based on multi-feature fusion |
CN118015501A (en) * | 2024-04-08 | 2024-05-10 | 中国人民解放军陆军步兵学院 | Medium-low altitude low-speed target identification method based on computer vision |
CN118015501B (en) * | 2024-04-08 | 2024-06-11 | 中国人民解放军陆军步兵学院 | Medium-low altitude low-speed target identification method based on computer vision |
Also Published As
Publication number | Publication date |
---|---|
CN108470354B (en) | 2021-04-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108470354A (en) | Video target tracking method, device and realization device | |
JP6855098B2 (en) | Face detection training method, apparatus and electronic device | |
Kristan et al. | The visual object tracking VOT2015 challenge results | |
CN104424634B (en) | Object tracking method and device | |
Simo-Serra et al. | Single image 3D human pose estimation from noisy observations | |
Maggio et al. | Adaptive multifeature tracking in a particle filtering framework | |
CN106897670A (en) | Express delivery violent-sorting recognition method based on computer vision | |
CN107784663A (en) | Correlation filtering tracking and device based on depth information | |
CN107633226B (en) | Human body motion tracking feature processing method | |
CN104881671B (en) | High-resolution remote sensing image local feature extraction method based on 2D Gabor | |
CN106778474A (en) | 3D human body recognition methods and equipment | |
CN113592911B (en) | Apparent enhanced depth target tracking method | |
CN110263712A (en) | Coarse-to-fine pedestrian detection method based on region candidates | |
CN108648211A (en) | Small target detection method, device, equipment and medium based on deep learning | |
Pound et al. | A patch-based approach to 3D plant shoot phenotyping | |
Barman et al. | Shape: A novel graph theoretic algorithm for making consensus-based decisions in person re-identification systems | |
CN108399627A (en) | Video interframe target motion estimation method, device and realization device | |
CN106611158A (en) | Method and equipment for obtaining human body 3D characteristic information | |
Gündoğdu et al. | The visual object tracking VOT2016 challenge results | |
CN113128518B (en) | Sift mismatch detection method based on twin convolution network and feature mixing | |
Bode et al. | Bounded: Neural boundary and edge detection in 3d point clouds via local neighborhood statistics | |
CN107665495B (en) | Object tracking method and object tracking device | |
CN116884045A (en) | Identity recognition method, identity recognition device, computer equipment and storage medium | |
Ye et al. | Tiny face detection based on deep learning | |
CN108492328A (en) | Video interframe target matching method, device and realization device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20210427 |