CN107346538A - Object tracking method and device - Google Patents
Object tracking method and device
- Publication number
- CN107346538A CN107346538A CN201610295085.XA CN201610295085A CN107346538A CN 107346538 A CN107346538 A CN 107346538A CN 201610295085 A CN201610295085 A CN 201610295085A CN 107346538 A CN107346538 A CN 107346538A
- Authority
- CN
- China
- Prior art keywords
- current frame
- blocked
- frame image
- velocity vector
- movement velocity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
An object tracking method and device are provided. The method includes: predicting, from the historical motion information of an object, the position of the object in the current frame image and the motion velocity vector at that position; performing kernelized correlation filtering along the motion velocity vector to detect the position of the object in the current frame image; judging whether the object is occluded in the current frame; if it is not occluded, taking the detected position as the position of the object in the current frame image; otherwise, performing Bayesian verification on the predicted position and the result of the kernelized correlation filtering, and taking the Bayesian decision result as the position of the object in the current frame. When the object is occluded, the method and device predict its position from its historical motion information and correct the prediction through Bayesian verification, so that a reliable object position can be obtained even under occlusion. This improves tracking accuracy and enables long-term tracking of the object.
Description
Technical field
The present disclosure relates generally to image processing, and in particular to an object tracking method and device.
Background technology
Object tracking is a basic function of image processing and has important applications in many research directions such as video surveillance and human-computer interaction. In recent years, with the development of technology, object tracking has made significant progress. In practical applications, however, object tracking still faces many challenges, such as illumination changes in the scene, occlusion of the object, and changes in the appearance of the object itself.
KCF (Kernelized Correlation Filter) is a relatively new single-object tracking method with fast computation and good tracking performance. In practical applications, however, KCF likewise faces challenges such as changes in the size of the target object, occlusion of the target object, and extremely complex tracking scenes. For example, Fig. 1 shows an example of tracking with KCF in a real scene. In the leftmost image of Fig. 1, the target object to be tracked is outlined with a rectangular box; in the rightmost image of Fig. 1, it can be seen that the tracking result is wrong and the target object has been lost. Thus, in a crowded environment, when the target object is occluded and there are other objects near it with a similar appearance, KCF tracking can fail.
Summary of the invention
According to an embodiment of one aspect of the present disclosure, an object tracking method is provided, including: predicting, from the historical motion information of an object, the position of the object in the current frame image and the motion velocity vector at that position; performing kernelized correlation filtering along the motion velocity vector to detect the position of the object in the current frame image; judging whether the object is occluded in the current frame; if the object is not occluded in the current frame, taking the detected position as the position of the object in the current frame image; otherwise, performing Bayesian verification on the predicted position and the result of the kernelized correlation filtering, and taking the Bayesian decision result as the position of the object in the current frame.
According to an embodiment of another aspect of the present disclosure, an object tracking device is provided, including: a prediction unit configured to predict, from the historical motion information of an object, the position of the object in the current frame image and the motion velocity vector at that position; a detection unit configured to perform kernelized correlation filtering along the motion velocity vector to detect the position of the object in the current frame image; a judging unit configured to judge whether the object is occluded in the current frame; and a determining unit configured to, if the object is not occluded in the current frame, take the detected position as the position of the object in the current frame image, and otherwise to perform Bayesian verification on the predicted position and the result of the kernelized correlation filtering and take the Bayesian decision result as the position of the object in the current frame.
According to an embodiment of yet another aspect of the present disclosure, an object tracking device is provided, including: a processor; a memory; and computer program instructions stored in the memory. When run by the processor, the computer program instructions perform the following steps: predicting, from the historical motion information of an object, the position of the object in the current frame image and the motion velocity vector at that position; performing kernelized correlation filtering along the motion velocity vector to detect the position of the object in the current frame image; judging whether the object is occluded in the current frame; if the object is not occluded in the current frame, taking the detected position as the position of the object in the current frame image; otherwise, performing Bayesian verification on the predicted position and the result of the kernelized correlation filtering, and taking the Bayesian decision result as the position of the object in the current frame.
The above object tracking method and device perform object tracking with KCF and, when the object is occluded, predict its position from its historical motion information and correct the prediction through Bayesian verification, so that a reliable object position can be obtained even under occlusion. This improves tracking accuracy and enables long-term tracking of the object.
Brief description of the drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description of embodiments of the present disclosure taken in conjunction with the accompanying drawings. The accompanying drawings are provided to give further understanding of the embodiments of the present disclosure; they constitute part of the specification and serve, together with the embodiments, to explain the present disclosure without limiting it. Throughout the drawings, the same reference numerals generally denote the same components or steps.
Fig. 1 shows an example of tracking with KCF in a real scene.
Fig. 2 shows a flowchart of an object tracking method according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of performing KCF detection along the predicted motion velocity vector in the object tracking method according to an embodiment of the present disclosure.
Fig. 4 shows a confidence map representing an example detection result obtained by KCF detection.
Fig. 5 shows a flowchart of the process of performing Bayesian verification on the predicted position and the result of the kernelized correlation filtering in the object tracking method according to an embodiment of the present disclosure.
Fig. 6 shows a schematic prior probability.
Fig. 7 shows a functional block diagram of an object tracking device according to an embodiment of the present disclosure.
Fig. 8 shows a block diagram of a computing device for implementing an example object tracking device according to an embodiment of the present disclosure.
Embodiment
The technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
As noted above, in a crowded environment, when the target object is occluded and there are other objects near it with a similar appearance, KCF detection may go wrong and cause tracking to fail. Therefore, in the present disclosure, when the target object is occluded, the position detected by KCF is not used as the tracking result. Instead, Bayesian verification is performed on the position predicted from the historical motion information of the target object and on the KCF detection result, and the Bayesian decision result is used as the tracking result, so that a reliable object position can be obtained even when the target object is occluded.
An object tracking method according to an embodiment of the present disclosure is described below with reference to Fig. 2, which shows a flowchart of the method.
As shown in Fig. 2, in step S210, the position of the object in the current frame image and the motion velocity vector at that position are predicted from the historical motion information of the object.
The prediction in this step can be performed by any appropriate method in the art, such as Kalman filtering or particle filtering. For convenience of description, prediction with a Kalman filter is taken as an example below.
The motion model of the object is modeled with a Kalman filter. As is well known, the operation of a Kalman filter can be expressed by the following equations:
X′t|t-1 = A*Xt-1|t-1 (1)
Pt|t-1 = A*Pt-1|t-1*A' + Q (2)
Kgt = Pt|t-1*H' / (H*Pt|t-1*H' + R) (3)
Xt = X′t|t-1 + Kgt*(Yt - H*X′t|t-1) (4)
Pt = (I - Kgt*H)*Pt|t-1 (5)
where X denotes the system state, X = [x y vx vy]', in which [x, y]' is the position of the object in the image frame and [vx vy]' is the motion velocity vector of the object at [x, y]'; Y denotes the measurement; Kgt is the Kalman gain; and the subscripts t and t-1 denote time instants (in this disclosure, time t corresponds to the current frame image and t-1 to the previous frame image). A is the state-transition matrix, P is the state covariance matrix, H is the observation model, and Q and R are the variances of the process noise and the measurement noise, respectively. Their initial values can be set empirically according to the specific application scenario.
Equation (1) describes the prediction step of the Kalman filter, where X′t|t-1 denotes the predicted system state at time t; equations (2)-(5) describe the update step, where Xt denotes the estimated state at time t.
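The prediction and update steps above can be sketched as follows for a constant-velocity motion model. This is a minimal illustration only: the concrete values chosen for A, H, Q, and R below are assumptions for the example, not the settings used by the patent.

```python
import numpy as np

dt = 1.0  # one frame per step
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state-transition matrix
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only the position is observed
Q = np.eye(4) * 1e-2                         # process-noise covariance
R = np.eye(2) * 1e-1                         # measurement-noise covariance

def predict(X, P):
    """Equations (1)-(2): predict state and covariance for the current frame."""
    X_pred = A @ X
    P_pred = A @ P @ A.T + Q
    return X_pred, P_pred

def update(X_pred, P_pred, Y):
    """Equations (3)-(5): correct the prediction with the measured position Y."""
    S = H @ P_pred @ H.T + R
    Kg = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    X = X_pred + Kg @ (Y - H @ X_pred)
    P = (np.eye(4) - Kg @ H) @ P_pred
    return X, P

# Example: object at (10, 20) moving one pixel right and down per frame.
X = np.array([10.0, 20.0, 1.0, 1.0])
P = np.eye(4)
X_pred, P_pred = predict(X, P)               # predicted position: (11, 21)
X_est, P = update(X_pred, P_pred, np.array([11.2, 20.9]))
```

The predicted state X_pred carries both the position and the velocity vector, which is exactly what step S210 needs for the directional sampling and the directional prior described later.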
It should be noted that the current frame image in this step is any image frame, other than the first frame, in the image frame sequence containing the tracked object. In the first frame of the sequence, the initial position of the object can be determined by any object detection method or specified manually. In addition, for ease of description, in the present embodiment the object is represented by its bounding rectangle in the image frame, and the coordinates of the center point of the rectangle are used as the position coordinates of the object. For example, the rectangle shown in the leftmost image of Fig. 1 can be taken as the initial position of the object determined in the first frame of the image sequence.
In addition, the image frame sequence containing the tracked object may be captured by a camera or obtained by various sensors.
In step S220, kernelized correlation filtering is performed along the motion velocity vector to detect the position of the object in the current frame image.
As mentioned above, the kernelized correlation filter (KCF) is a single-object tracking method; for a detailed description, see the article "High-Speed Tracking with Kernelized Correlation Filters" by João F. Henriques, Rui Caseiro, Pedro Martins, and Jorge Batista, IEEE Transactions on Pattern Analysis and Machine Intelligence, November 5, 2014, which is incorporated herein by reference. To aid understanding, the KCF method is briefly described below. Overall, the operation of KCF can be expressed as:
f(yt) = F-1(k̂zy ⊙ α̂) (6)
α̂ = b̂ / (k̂zz + λ) (7)
where ⊙ denotes the element-wise product, yt denotes a sample in the image frame, ^ denotes the discrete Fourier transform, and F-1 denotes the inverse discrete Fourier transform; the KCF detection result consists of the detected candidate positions and the confidence of each candidate position. Here z is the training sample, b is the regression target of the ridge regression, and λ is a regularization constant; k is the kernel correlation, whose elements for a Gaussian kernel are computed as
kNN' = exp(-(1/σ²)(‖N‖² + ‖N'‖² - 2F-1(N̂* ⊙ N̂'))) (8)
where N and N' denote any two vectors and N̂* denotes the complex conjugate of N̂.
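Under the assumption of a Gaussian kernel and FFT-based computation as in the cited paper, expressions (6)-(8) can be sketched as follows; the patch size, sigma, and lambda values are illustrative, and single-channel grayscale patches are assumed.

```python
import numpy as np

def gaussian_correlation(x1, x2, sigma=0.5):
    """Kernel correlation of expression (8), computed with FFTs."""
    # cross-correlation of all cyclic shifts via the Fourier domain
    c = np.fft.ifft2(np.conj(np.fft.fft2(x1)) * np.fft.fft2(x2)).real
    d = (np.sum(x1 ** 2) + np.sum(x2 ** 2) - 2.0 * c) / x1.size
    return np.exp(-np.maximum(d, 0) / (sigma ** 2))

def train_alpha(z, b, lam=1e-4, sigma=0.5):
    """Expression (7): alpha_hat = b_hat / (k_hat^zz + lambda)."""
    k = gaussian_correlation(z, z, sigma)
    return np.fft.fft2(b) / (np.fft.fft2(k) + lam)

def detect(alpha_hat, z, y, sigma=0.5):
    """Expression (6): response map F^-1(k_hat^zy ⊙ alpha_hat)."""
    k = gaussian_correlation(z, y, sigma)
    return np.fft.ifft2(np.fft.fft2(k) * alpha_hat).real

# Training on a patch with a target peaked at the origin, then detecting on
# the same patch, should place the response peak back at the origin.
rng = np.random.default_rng(0)
z = rng.standard_normal((16, 16))
b = np.zeros((16, 16)); b[0, 0] = 1.0
alpha_hat = train_alpha(z, b)
response = detect(alpha_hat, z, z)
```

The response map produced by `detect` is the per-shift confidence used as the KCF detection result in the steps that follow.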
In step S220, KCF detection is performed along the motion velocity vector predicted in step S210. An example procedure for this step is described below with reference to Fig. 3.
As shown in Fig. 3, in step S2201, at least one sample yt is extracted along the motion velocity vector at predetermined intervals.
The motion velocity vector indicates the direction of motion of the object. In this step, samples (rectangles representing the object) are extracted along that direction, with points on the motion velocity vector as the center points of the rectangles. The predetermined interval can be set arbitrarily; as an example, it can be set to 1/2 of the width of the rectangle representing the object.
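The sampling rule above can be sketched as follows. The number of samples and the fallback for a stationary prediction are assumptions for the example; the half-width spacing follows the text.

```python
import numpy as np

def sample_centers(pos, velocity, box_w, n_samples=5):
    """Place candidate rectangle centers along the predicted motion direction."""
    v = np.asarray(velocity, dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0:                        # object predicted stationary:
        return [tuple(np.asarray(pos, dtype=float))]  # sample only at pos
    direction = v / norm                 # unit vector of the motion direction
    step = box_w / 2.0                   # predetermined interval: half box width
    return [tuple(np.asarray(pos, dtype=float) + i * step * direction)
            for i in range(n_samples)]

centers = sample_centers(pos=(100.0, 50.0), velocity=(3.0, 4.0), box_w=40.0)
# The first center is the predicted position itself; each further center
# advances 20 px along the (0.6, 0.8) direction.
```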
In step S2202, kernelized correlation filtering (KCF) is performed for the samples yt to determine multiple candidate positions of the object in the current frame image and the confidence of each candidate position.
In this step, KCF is performed for each sample yt as shown in expressions (6)-(8), yielding the KCF detection result, i.e., multiple candidate positions of the object in the current frame image and the confidence of each candidate position.
Fig. 4 shows a confidence map representing an example detection result obtained by KCF detection. As shown in Fig. 4, the position of each pixel in the dashed box represents a detected candidate position, and the value of the pixel represents the confidence at that candidate position: the darker the color, the higher the confidence of the candidate position, and the lighter the color, the lower the confidence.
In step S2203, the candidate position with the highest confidence among the multiple candidate positions is selected as the detected position of the object in the current frame image.
The higher the confidence, the more likely the candidate position is the position of the object in the current frame image. Therefore, in this step, the candidate position with the highest confidence is selected as the detected position of the object in the current frame image.
The processing of step S220 has been described above with reference to Fig. 4. Optionally, the highest-confidence candidate position selected in step S2203 can be processed further, and the result of this further processing used as the position of the object in the current frame image. Specifically, the highest-confidence candidate position can be taken as the measurement Y of the current frame, the Kalman filter updated as shown in equation (4), and the position [x, y]' in the resulting system state vector X of the current frame used as the position of the object in the current frame image. This processing makes the tracking result smoother.
In step S230, it is judged whether the object is occluded in the current frame.
Various appropriate methods can be used in this step to judge whether the object is occluded. For example, as a basic method, occlusion can be detected by checking whether the foreground in the current frame has shrunk. In the present embodiment, as an example, the peak-to-sidelobe ratio (PSR) is used to judge whether the object is occluded.
Specifically, from the multiple candidate positions of the object in the current frame image detected in step S220 and the confidence of each candidate position, the peak-to-sidelobe ratio of the confidence map representing the detection result is computed. If the peak-to-sidelobe ratio is below a predetermined threshold, it is determined that the object is occluded in the current frame; otherwise, it is determined that the object is not occluded in the current frame.
The peak-to-sidelobe ratio is a common image processing measure and can be computed, for example, as in equation (10):
PSR = (gmax - usl) / σsl (10)
where gmax is the maximum value in the confidence map, and usl and σsl are the mean and the standard deviation of the sidelobe part of the confidence map, respectively. The sidelobe part is the remainder of the confidence map after removing the peak part; the peak part is the region, centered on the maximum, whose total energy accounts for a predetermined fraction (e.g., 80%) of the overall energy.
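The PSR check can be sketched as follows. As a simplification, the "peak part" is approximated here by a fixed window around the maximum rather than the 80%-of-energy region described in the text; the window size is an assumption.

```python
import numpy as np

def psr(confidence_map, win=5):
    """Peak-to-sidelobe ratio of a 2-D confidence map, equation (10)."""
    g = np.asarray(confidence_map, dtype=float)
    gmax = g.max()
    r, c = np.unravel_index(g.argmax(), g.shape)
    mask = np.ones_like(g, dtype=bool)
    mask[max(0, r - win):r + win + 1, max(0, c - win):c + win + 1] = False
    sidelobe = g[mask]                  # everything outside the peak window
    return (gmax - sidelobe.mean()) / (sidelobe.std() + 1e-12)

# A sharp, isolated peak gives a high PSR (confident detection); a flat map,
# as typically produced under occlusion, gives a PSR near zero.
sharp = np.zeros((64, 64)); sharp[32, 32] = 1.0
flat = np.full((64, 64), 0.5)
```

Comparing `psr(sharp)` with `psr(flat)` shows why a threshold on the PSR separates a confident detection from an occluded one.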
In step S240, if the object is not occluded in the current frame, the detected position is taken as the position of the object in the current frame image; otherwise, Bayesian verification is performed on the predicted position and the result of the kernelized correlation filtering, and the Bayesian decision result is taken as the position of the object in the current frame.
In this step, processing depends on the judgment result of step S230. If step S230 judges that the object is not occluded in the current frame, the KCF detection result is considered trustworthy, and the KCF-based position detected in step S220 is taken as the position of the object in the current frame image.
It should be noted that when the object is not occluded in the current frame, besides directly taking the KCF-detected position as the position of the object in the current frame image, the Kalman filter and the kernelized correlation filter used for KCF can be further updated.
Specifically, on the one hand, the Kalman filter can be updated as shown in equations (2)-(5), yielding updated model parameters and the system state of the current frame, so that accurate tracking results can be obtained when tracking subsequent frames.
On the other hand, a sample can be extracted at the position [x, y]' in the system state vector X of the current frame obtained by updating the Kalman filter through equation (4), a new kernelized correlation filter trained from this sample using equations (7)-(8), and the kernelized correlation filter then updated as follows:
α̂t = (1 - η)*α̂t-1 + η*α̂new (11)
x̂t = (1 - η)*x̂t-1 + η*x̂new (12)
where η is the learning rate; in this example, η = 0.1.
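The learning-rate update above can be sketched as a linear interpolation between the old and newly trained model. The use of two model components (filter coefficients and appearance template) follows common KCF implementations and is an assumption here, as are the array shapes.

```python
import numpy as np

ETA = 0.1  # learning rate from the example in the text

def update_model(alpha_old, tmpl_old, alpha_new, tmpl_new, eta=ETA):
    """Blend old and newly trained KCF model components."""
    alpha = (1.0 - eta) * alpha_old + eta * alpha_new   # filter coefficients
    tmpl = (1.0 - eta) * tmpl_old + eta * tmpl_new      # appearance template
    return alpha, tmpl

alpha_old = np.zeros((32, 32)); tmpl_old = np.zeros((32, 32))
alpha_new = np.ones((32, 32)); tmpl_new = np.ones((32, 32))
alpha, tmpl = update_model(alpha_old, tmpl_old, alpha_new, tmpl_new)
# With eta = 0.1, every entry moves 10% of the way toward the new model,
# so the tracker adapts to appearance change without forgetting the past.
```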
If step S230 judges that the object is occluded in the current frame, KCF tracking may go wrong, as described above, so in this case the KCF-detected position is not used as the tracking result. In addition, although the position of the object can be predicted with, e.g., a Kalman filter while the target object is occluded, in real scenes the motion of an object is variable and does not strictly follow a linear model, so prediction with a linear model such as a Kalman filter carries errors; moreover, the Kalman filter itself has noise, so the predicted value also contains noise. Simply taking the Kalman prediction as the tracking result is therefore prone to error. Accordingly, in the present embodiment, if the object is judged to be occluded in the current frame, Bayesian verification is performed on the position predicted in step S210 and the KCF detection result, and the Bayesian decision result is taken as the tracking result.
Bayesian verification is a common image processing technique; the Bayesian verification process in step S240 is described below with reference to Fig. 5, which shows an example flowchart of performing Bayesian verification on the predicted position and the result of the kernelized correlation filtering.
As shown in Fig. 5, in step S2401, the prior probability P(xt) for the Bayesian verification is set to a Gaussian function of the predicted position.
In this step, the widely used Gaussian function is employed, with the predicted position as its mean:
P(xt) ~ N(x′t|t-1, σ) (13)
where x′t|t-1 is the prediction result at time t (i.e., for the current frame image). It should be noted that when a Kalman filter is used for prediction, the prediction result includes both the position of the object in the current frame image and the motion velocity vector at that position, so N(x′t|t-1, σ) is a directional two-dimensional Gaussian whose orientation is consistent with the predicted motion velocity vector. For example, Fig. 6 shows a schematic prior probability, where the arrowed line represents the motion velocity vector of the object.
In step S2402, the result of the kernelized correlation filtering is taken as the likelihood for the Bayesian verification.
In this step, the KCF detection result (i.e., the multiple candidate positions of the object in the current frame image and the confidence of each candidate position) is taken as the likelihood, as shown below:
P(yt|xt) = KCF detection result (14)
where the KCF detection result is given by expression (6).
In step S2403, the posterior probability is computed; it is proportional to the product of the prior probability and the likelihood:
P(xt|yt) ∝ P(xt)*P(yt|xt) (15)
In step S2404, the position with the maximum posterior probability is selected as the Bayesian decision result:
xb = arg max P(xt|yt) (16)
where xb, the position with the maximum posterior probability, is the Bayesian decision result, i.e., the position of the object in the current frame.
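Steps S2401-S2404 can be sketched as follows. As a simplification, an isotropic Gaussian prior is used instead of the directional two-dimensional Gaussian described above, and the value of sigma is an assumption.

```python
import numpy as np

def bayes_verify(kcf_conf, predicted_pos, sigma=8.0):
    """Return the maximum-a-posteriori position, equations (13)-(16)."""
    h, w = kcf_conf.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - predicted_pos[0]) ** 2 + (ys - predicted_pos[1]) ** 2
    prior = np.exp(-d2 / (2.0 * sigma ** 2))     # Gaussian prior, eq. (13)
    posterior = prior * kcf_conf                  # likelihood x prior, (14)-(15)
    r, c = np.unravel_index(posterior.argmax(), posterior.shape)
    return (c, r)                                 # (x, y), the argmax of (16)

# Two equally confident KCF peaks: a distractor at (5, 5) and a peak near
# the Kalman prediction at (40, 40). The prior suppresses the distractor.
conf = np.zeros((64, 64)); conf[5, 5] = 1.0; conf[40, 40] = 1.0
pos = bayes_verify(conf, predicted_pos=(38.0, 38.0))
```

Here the candidate consistent with the predicted motion wins, which is exactly how the verification recovers a reliable position when occlusion makes the raw KCF response ambiguous.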
An object tracking method according to an embodiment of the present disclosure has been described above with reference to the accompanying drawings. The method performs object tracking with KCF and, when the object is occluded, predicts its position from the object's historical motion information and corrects the prediction through Bayesian verification, so that a reliable object position can be obtained even under occlusion. This improves tracking accuracy and enables long-term tracking of the object.
An object tracking device 700 according to an embodiment of the present disclosure is described below with reference to Fig. 7, which shows a functional block diagram of the device. As shown in Fig. 7, the object tracking device 700 may include: a prediction unit 710, a detection unit 720, a judging unit 730, and a determining unit 740. The specific functions and operations of these units are essentially the same as described above with reference to Figs. 2-6; to avoid repetition, only a brief description of the device is given below, and detailed descriptions of the same details are omitted.
The prediction unit 710 is configured to predict, from the historical motion information of the object, the position of the object in the current frame image and the motion velocity vector at that position. The prediction unit 710 can perform the prediction by any appropriate method in the art, such as Kalman filtering or particle filtering. For convenience of description, prediction with a Kalman filter is taken as an example below. The principle of modeling the motion of the object with a Kalman filter and making the prediction is as shown in equations (1)-(5) above and is not repeated here.
It should be noted that the current frame image here is any image frame, other than the first frame, in the image frame sequence containing the tracked object. In the first frame of the sequence, the initial position of the object can be determined by any object detection method or specified manually. In addition, for ease of description, in the present embodiment the object is represented by its bounding rectangle in the image frame, and the coordinates of the center point of the rectangle are used as the position coordinates of the object.
The detection unit 720 is configured to perform kernelized correlation filtering along the motion velocity vector to detect the position of the object in the current frame image.
As mentioned above, the kernelized correlation filter (KCF) is a single-object tracking method whose operation can be expressed by expressions (6)-(8) above; it is not repeated here.
The detection unit 720 performs KCF detection along the motion velocity vector predicted by the prediction unit 710. Specifically, the detection unit can include a sampling subunit, a KCF detection subunit, and a selection subunit.
The sampling subunit is configured to extract at least one sample along the motion velocity vector predicted by the prediction unit 710 at predetermined intervals. The motion velocity vector indicates the direction of motion of the object; here, the sampling subunit extracts samples (rectangles representing the object) along that direction, with points on the motion velocity vector as the center points of the rectangles. The predetermined interval can be set arbitrarily; as an example, it can be set to 1/2 of the width of the rectangle representing the object.
The KCF detection subunit is configured to perform KCF for the samples to determine multiple candidate positions of the object in the current frame image and the confidence of each candidate position. Specifically, the KCF detection subunit performs KCF for each sample as shown in expressions (6)-(8), yielding the KCF detection result, i.e., multiple candidate positions of the object in the current frame image and the confidence of each candidate position.
The selection subunit is configured to select the candidate position with the highest confidence among the multiple candidate positions as the detected position of the object in the current frame image. The higher the confidence, the more likely the candidate position is the position of the object in the current frame image; accordingly, the selection subunit selects the highest-confidence candidate position as the detected position of the object in the current frame image.
Optionally, the selection subunit can further process the highest-confidence candidate position it selects and use the result of this further processing as the position of the object in the current frame image. Specifically, the highest-confidence candidate position can be taken as the measurement Y of the current frame, the Kalman filter updated as shown in equation (4), and the position [x, y]' in the resulting system state vector X of the current frame used as the position of the object in the current frame image. This processing makes the tracking result smoother.
The judging unit 730 is configured to judge whether the object is occluded in the current frame. The judging unit 730 can use various appropriate methods for this. For example, as a basic method, occlusion can be detected by checking whether the foreground in the current frame has shrunk. In the present embodiment, as an example, the judging unit 730 uses the peak-to-sidelobe ratio (PSR) to judge whether the object is occluded.
Specifically, from the multiple candidate positions of the object in the current frame image detected by the detection unit 720 and the confidence of each candidate position, the judging unit 730 computes the peak-to-sidelobe ratio of the confidence map representing the detection result. If the peak-to-sidelobe ratio is below a predetermined threshold, it determines that the object is occluded in the current frame; otherwise, it determines that the object is not occluded in the current frame. The peak-to-sidelobe ratio is a common image processing measure and has been described above; it is not repeated here.
The determining unit 740 is configured to, if the object is not occluded in the current frame, take the detected position as the position of the object in the current frame image, and otherwise to perform Bayesian verification on the predicted position and the result of the kernelized correlation filtering and take the Bayesian decision result as the position of the object in the current frame.
The determining unit 740 processes according to the judgment result of the judging unit 730. Specifically, if the judging unit 730 judges that the object is not occluded in the current frame, the KCF-detected position is considered trustworthy, and the position detected by the detection unit 720 with KCF is taken as the position of the object in the current frame image.
It should be noted that, in the case where the object is not occluded in the current frame, in addition to directly taking the position detected by KCF as the position of the object in the current frame image, the determining unit 740 may further update the Kalman filter and the kernel correlation filter used for KCF.
Specifically, on the one hand, the determining unit 740 may update the system state value of the Kalman filter as shown in equations (2)-(5), thereby updating the model parameters with the current frame so that an accurate tracking result can be obtained when tracking in subsequent frames. On the other hand, the determining unit 740 may extract a sample at the position [x, y]' in the system state vector X of the current frame obtained by updating the Kalman filter through equation (4), train a new kernel correlation filter with this sample and equations (7)-(8), and then update the kernel correlation filter as shown in equations (11)-(12).
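Since equations (2)-(5) are not reproduced in this excerpt, the following sketch assumes a conventional constant-velocity Kalman filter over a state X = [x, y, vx, vy]'; the matrices F, H, Q, R and their parameters are illustrative assumptions, not the patent's equations:

```python
import numpy as np

class KalmanCV:
    """Constant-velocity Kalman filter sketch for state [x, y, vx, vy]'."""

    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.X = np.array([x, y, 0.0, 0.0])     # state estimate
        self.P = np.eye(4)                       # state covariance
        self.F = np.array([[1, 0, dt, 0],        # constant-velocity model
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],         # position-only measurement
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)                   # process noise (assumed)
        self.R = r * np.eye(2)                   # measurement noise (assumed)

    def predict(self):
        # Predicted position and velocity vector for the current frame.
        self.X = self.F @ self.X
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.X[:2], self.X[2:]

    def update(self, z):
        # Correct the state with the detected (unoccluded) KCF position z.
        y = np.asarray(z, dtype=float) - self.H @ self.X
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.X = self.X + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.X[:2]
```

On a constant-velocity trajectory, the estimated position tracks the measurements and the velocity component converges to the true motion, which is what allows prediction to continue during occlusion.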
On the other hand, if the judging unit 730 judges that the object is occluded in the current frame, KCF tracking may make errors as described above, so in this case the position detected by KCF is not used as the tracking result. Moreover, although the position of the object can be predicted by, for example, the Kalman filter while the target object is occluded, simply taking the Kalman prediction as the tracking result is prone to errors. Therefore, in the present embodiment, if it is judged that the object is occluded in the current frame, the determining unit 740 performs Bayesian verification on the predicted position and the KCF detection result, and takes the Bayesian decision result as the tracking result.
Bayesian verification is an image processing measure commonly used in the art. The determining unit 740 may further include a prior subunit, a condition subunit, a posterior subunit, and a decision subunit, and performs Bayesian verification as follows.
The prior subunit makes the prior probability P(x_t) of the Bayesian verification obey a Gaussian function of the predicted position. As an example, the prior subunit uses a directional two-dimensional Gaussian function, wherein the mean of the two-dimensional Gaussian function is the predicted position information and its direction is consistent with the predicted movement velocity vector.
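A directional two-dimensional Gaussian prior of this kind can be sketched as follows; the standard deviations along and across the motion direction (`sigma_along`, `sigma_across`) are assumed parameters not specified in this excerpt:

```python
import numpy as np

def oriented_gaussian_prior(shape, mean, velocity,
                            sigma_along=8.0, sigma_across=4.0):
    """Directional 2-D Gaussian prior centred on the predicted position.

    The major axis is aligned with the predicted movement velocity vector,
    so positions along the direction of motion receive a higher prior.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx = xs - mean[0]
    dy = ys - mean[1]
    vnorm = np.hypot(velocity[0], velocity[1])
    if vnorm < 1e-9:
        ux, uy = 1.0, 0.0   # no motion: fall back to an axis-aligned Gaussian
    else:
        ux, uy = velocity[0] / vnorm, velocity[1] / vnorm
    # Project pixel offsets onto the motion direction and its normal.
    along = dx * ux + dy * uy
    across = -dx * uy + dy * ux
    prior = np.exp(-0.5 * ((along / sigma_along) ** 2
                           + (across / sigma_across) ** 2))
    return prior / prior.sum()   # normalise to a probability map
```

With the major axis along the motion, a candidate displaced along the velocity vector keeps a higher prior than an equally distant candidate displaced sideways.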
The condition subunit takes the detection result of the KCF as the conditional probability of the Bayesian verification. Specifically, the condition subunit takes the KCF detection result, i.e., the multiple candidate positions of the object in the current frame image and the confidence of each candidate position, as the conditional probability of the Bayesian verification.
The posterior subunit calculates the posterior probability, which is proportional to the product of the prior probability and the conditional probability.
The decision subunit selects the position with the maximum posterior probability among the posterior probabilities calculated by the posterior subunit as the Bayesian decision result, i.e., the position of the object in the current frame.
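Putting the subunits together, the posterior computation and maximum a posteriori decision can be sketched with a hypothetical helper; the prior map and the KCF confidence map are assumed to be defined over the same pixel grid:

```python
import numpy as np

def bayes_verify(prior, likelihood):
    """Posterior ∝ prior × likelihood; return the MAP position (x, y).

    `prior` is the Gaussian map around the predicted position and
    `likelihood` is the KCF confidence map over candidate positions;
    both are HxW arrays over the same image coordinates.
    """
    posterior = prior * likelihood
    total = posterior.sum()
    if total <= 0:            # degenerate case: fall back to the prior alone
        posterior = prior
        total = prior.sum()
    posterior = posterior / total
    r, c = np.unravel_index(np.argmax(posterior), posterior.shape)
    return (c, r), posterior  # (x, y) and the normalised posterior map
```

The returned MAP coordinate is the Bayesian decision result described above: a candidate that is both plausible under the motion prior and well supported by the KCF confidence.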
The object tracking device 700 according to the embodiment of the present disclosure has been described above with reference to Fig. 7. When the object is occluded, the object tracking device 700 predicts the position of the object from the historical movement information of the object and corrects the predicted position through Bayesian verification, so that a reliable object position can be obtained even when the object is occluded. This improves the accuracy of tracking and enables the object to be tracked over a long period of time.
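The per-frame decision logic of the units described above can be summarized in the following sketch; `track_step` and its arguments are illustrative names, and the occlusion flag is assumed to have been computed by the judging unit (e.g. the PSR test) beforehand:

```python
import numpy as np

def map_position(score):
    """Return the (x, y) coordinates of the maximum of a 2-D score map."""
    r, c = np.unravel_index(np.argmax(score), score.shape)
    return (c, r)

def track_step(response, prior, occluded):
    """Per-frame decision of the tracking device (illustrative sketch).

    response: KCF confidence map over candidate positions (detection unit).
    prior:    Gaussian map around the Kalman-predicted position.
    occluded: occlusion flag produced by the judging unit.
    """
    if not occluded:
        # Unoccluded: trust the KCF detection directly.
        return map_position(response)
    # Occluded: Bayesian verification — posterior ∝ prior × likelihood.
    return map_position(prior * response)
```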
Below, a block diagram of a computing device that can be used to implement the example object tracking device of the embodiment of the present disclosure is described with reference to Fig. 8.
As shown in Fig. 8, the computing device 800 includes one or more processors 802, a storage device 804, a camera 806, and an output device 808, these components being interconnected through a bus system 810 and/or a connection mechanism of another form (not shown). It should be noted that the components and structure of the computing device 800 shown in Fig. 8 are merely exemplary and not restrictive; as needed, the computing device 800 may also have other components and structures.
The processor 802 may be a central processing unit (CPU) or a processing unit of another form having data processing capability and/or instruction execution capability, and may control the other components in the computing device 800 to perform desired functions.
The storage device 804 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 802 may run the program instructions to realize the functions of the embodiments of the disclosure described above and/or other desired functions. Various application programs and various data may also be stored in the computer-readable storage medium, such as the historical movement information of the object, the predicted position and the movement velocity vector at that position, the extracted samples, the KCF detection result, the peak-to-sidelobe ratio of the confidence map, the prior probability, the posterior probability, the conditional probability, each predetermined threshold, and so on.
The camera 806 is used to capture an image frame sequence containing the target object, and each captured frame image is stored in the storage device 804 for use by the other components. Of course, the image frame sequence may also be captured by another external device, with each captured frame image sent to the computing device 800; in that case, the camera 806 may be omitted.
Output device 808 can export various information to outside, such as destination object is in current frame image
The tracking result such as position, and the various display devices such as display, projecting apparatus, TV can be included.
The general principles of the disclosure have been described above in connection with specific embodiments. However, it should be noted that the advantages, merits, effects, and the like mentioned in the disclosure are merely exemplary and not restrictive, and they cannot be regarded as prerequisites for each embodiment of the disclosure. In addition, the specific details disclosed above are merely for the purpose of example and ease of understanding; they are not restrictive, and the above details do not limit the disclosure to being realized using those specific details.
The block diagrams of the devices, apparatuses, equipment, and systems involved in the disclosure are merely illustrative examples and are not intended to require or imply that connection, arrangement, or configuration must be performed in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "comprising", "including", and "having" are open-ended terms that mean "including but not limited to" and may be used interchangeably therewith. The words "or" and "and" used herein mean "and/or" and may be used interchangeably therewith, unless the context clearly indicates otherwise. The words "such as" used herein mean the phrase "such as, but not limited to" and may be used interchangeably therewith.
The step flowcharts and the above method descriptions in the disclosure are merely illustrative examples and are not intended to require or imply that the steps of each embodiment must be carried out in the order given; some steps may be performed in parallel, independently of one another, or in other appropriate orders. In addition, words such as "thereafter", "then", and "next" are not intended to limit the order of the steps; these words are merely used to guide the reader through the descriptions of these methods.
It should also be noted that, in the apparatuses and methods of the disclosure, each component or each step may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalents of the disclosure.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Therefore, the disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. Although multiple exemplary aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.
Claims (10)
1. An object tracking method, comprising:
predicting, according to historical movement information of an object, the position of the object in a current frame image and the movement velocity vector at that position;
performing kernel correlation filtering along the movement velocity vector to detect the position of the object in the current frame image;
judging whether the object is occluded in the current frame; and
if the object is not occluded in the current frame, taking the detected position as the position of the object in the current frame image; otherwise, performing Bayesian verification on the predicted position and the detection result of the kernel correlation filtering, and taking the Bayesian decision result as the position of the object in the current frame.
2. The object tracking method as claimed in claim 1, wherein the current frame image is an image frame, other than the first frame, in an image frame sequence containing the object.
3. The object tracking method as claimed in claim 1, wherein performing kernel correlation filtering along the movement velocity vector to detect the position of the object in the current frame image further comprises:
extracting at least one sample along the movement velocity vector at a predetermined interval;
performing kernel correlation filtering on the extracted samples to determine multiple candidate positions of the object in the current frame image and the confidence of each candidate position; and
selecting the candidate position with the highest confidence among the multiple candidate positions as the detected position of the object in the current frame image.
4. The object tracking method as claimed in claim 3, wherein judging whether the object is occluded in the current frame further comprises:
calculating the peak-to-sidelobe ratio of a confidence map representing the confidence of each candidate position; and
if the peak-to-sidelobe ratio is greater than a predetermined threshold, determining that the object is occluded in the current frame, and otherwise determining that the object is not occluded in the current frame.
5. The object tracking method as claimed in claim 1, wherein performing Bayesian verification on the predicted position and the detection result of the kernel correlation filtering, and taking the Bayesian decision result as the position of the object in the current frame, further comprises:
making the prior probability of the Bayesian verification obey a Gaussian function of the predicted position;
taking the detection result of the kernel correlation filtering as the conditional probability of the Bayesian verification;
calculating a posterior probability proportional to the product of the prior probability and the conditional probability; and
selecting the position with the maximum posterior probability as the Bayesian decision result.
6. The object tracking method as claimed in claim 1, wherein the position of the object in the current frame image and the movement velocity vector at that position are predicted by Kalman filtering, and the object tracking method further comprises:
if the object is not occluded in the current frame, updating the Kalman filter.
7. The object tracking method as claimed in claim 6, further comprising:
if the object is not occluded in the current frame, extracting a sample at the estimated position obtained by updating the Kalman filter, and updating the kernel correlation filter used for the kernel correlation filtering with the sample.
8. The object tracking method as claimed in claim 6, further comprising:
detecting or specifying the initial position of the object in the first frame image of the image frame sequence containing the object.
9. An object tracking device, comprising:
a prediction unit configured to predict, according to historical movement information of an object, the position of the object in a current frame image and the movement velocity vector at that position;
a detection unit configured to perform kernel correlation filtering along the movement velocity vector to detect the position of the object in the current frame image;
a judging unit configured to judge whether the object is occluded in the current frame; and
a determining unit configured to, if the object is not occluded in the current frame, take the detected position as the position of the object in the current frame image, and otherwise perform Bayesian verification on the predicted position and the detection result of the kernel correlation filtering and take the Bayesian decision result as the position of the object in the current frame.
10. An object tracking device, comprising:
a processor;
a memory; and
computer program instructions stored in the memory, the computer program instructions, when run by the processor, performing the following steps:
predicting, according to historical movement information of an object, the position of the object in a current frame image and the movement velocity vector at that position;
performing kernel correlation filtering along the movement velocity vector to detect the position of the object in the current frame image;
judging whether the object is occluded in the current frame; and
if the object is not occluded in the current frame, taking the detected position as the position of the object in the current frame image; otherwise, performing Bayesian verification on the predicted position and the detection result of the kernel correlation filtering, and taking the Bayesian decision result as the position of the object in the current frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610295085.XA CN107346538A (en) | 2016-05-06 | 2016-05-06 | Method for tracing object and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107346538A true CN107346538A (en) | 2017-11-14 |
Family
ID=60254291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610295085.XA Pending CN107346538A (en) | 2016-05-06 | 2016-05-06 | Method for tracing object and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107346538A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101339655A (en) * | 2008-08-11 | 2009-01-07 | 浙江大学 | Visual sense tracking method based on target characteristic and bayesian filtering |
CN102063625A (en) * | 2010-12-10 | 2011-05-18 | 浙江大学 | Improved particle filtering method for multi-target tracking under multiple viewing angles |
CN105469430A (en) * | 2015-12-10 | 2016-04-06 | 中国石油大学(华东) | Anti-shielding tracking method of small target in large-scale scene |
Non-Patent Citations (1)
Title |
---|
Liyang Yu et al.: "A Visual Tracker Based on Improved Kernel Correlation Filter", Proceedings of the 7th International Conference on Internet Multimedia Computing and Service * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537820A (en) * | 2018-04-18 | 2018-09-14 | 清华大学 | Dynamic prediction method, system and the equipment being applicable in |
CN108805901A (en) * | 2018-05-04 | 2018-11-13 | 北京航空航天大学 | A kind of quick detecting and tracking parallel computation of sensation target based on multi-core DSP and fusion method |
CN108805901B (en) * | 2018-05-04 | 2022-02-22 | 北京航空航天大学 | Visual target rapid detection tracking parallel computing and fusion method based on multi-core DSP |
CN110458861B (en) * | 2018-05-04 | 2024-01-26 | 佳能株式会社 | Object detection and tracking method and device |
CN110458861A (en) * | 2018-05-04 | 2019-11-15 | 佳能株式会社 | Object detection and tracking and equipment |
CN108986138A (en) * | 2018-05-24 | 2018-12-11 | 北京飞搜科技有限公司 | Method for tracking target and equipment |
CN110751671A (en) * | 2018-07-23 | 2020-02-04 | 中国科学院长春光学精密机械与物理研究所 | Target tracking method based on kernel correlation filtering and motion estimation |
CN112840645B (en) * | 2018-10-10 | 2023-12-12 | 寰发股份有限公司 | Method and apparatus for combining multiple predictors for block prediction in a video coding system |
CN112840645A (en) * | 2018-10-10 | 2021-05-25 | 联发科技股份有限公司 | Method and apparatus for combining multiple predictors for block prediction in a video coding system |
KR20200100792A (en) * | 2019-01-17 | 2020-08-26 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Target tracking method and device, storage medium |
KR102444769B1 (en) | 2019-01-17 | 2022-09-19 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Target tracking method and device, storage medium |
TWI715320B (en) * | 2019-01-17 | 2021-01-01 | 大陸商北京市商湯科技開發有限公司 | Method, device and storage medium for target tracking |
WO2020147348A1 (en) * | 2019-01-17 | 2020-07-23 | 北京市商汤科技开发有限公司 | Target tracking method and device, and storage medium |
CN109902588A (en) * | 2019-01-29 | 2019-06-18 | 北京奇艺世纪科技有限公司 | A kind of gesture identification method, device and computer readable storage medium |
CN111832343A (en) * | 2019-04-17 | 2020-10-27 | 北京京东尚科信息技术有限公司 | Eye tracking method and device and storage medium |
CN111832343B (en) * | 2019-04-17 | 2024-04-09 | 北京京东乾石科技有限公司 | Tracking method and device, and storage medium |
CN110147750A (en) * | 2019-05-13 | 2019-08-20 | 深圳先进技术研究院 | A kind of image search method based on acceleration of motion, system and electronic equipment |
CN111428642A (en) * | 2020-03-24 | 2020-07-17 | 厦门市美亚柏科信息股份有限公司 | Multi-target tracking algorithm, electronic device and computer readable storage medium |
CN116228817A (en) * | 2023-03-10 | 2023-06-06 | 东南大学 | Real-time anti-occlusion anti-jitter single target tracking method based on correlation filtering |
CN116228817B (en) * | 2023-03-10 | 2023-10-03 | 东南大学 | Real-time anti-occlusion anti-jitter single target tracking method based on correlation filtering |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107346538A (en) | Method for tracing object and equipment | |
CN108470332B (en) | Multi-target tracking method and device | |
Hall et al. | Probabilistic object detection: Definition and evaluation | |
Zhou et al. | Visual tracking and recognition using appearance-adaptive models in particle filters | |
WO2015161776A1 (en) | Hand motion identification method and apparatus | |
JP2005235222A (en) | Tracking method of object and its device | |
Ye et al. | Faster voxelpose: Real-time 3d human pose estimation by orthographic projection | |
US9111172B2 (en) | Information processing device, information processing method, and program | |
CN103310188A (en) | Method and apparatus for pose recognition | |
Chen et al. | R-CNN-based satellite components detection in optical images | |
Soleimanitaleb et al. | Single object tracking: A survey of methods, datasets, and evaluation metrics | |
Hassan et al. | An adaptive sample count particle filter | |
CN114222986A (en) | Random trajectory prediction using social graph networks | |
Gemerek et al. | Video-guided camera control for target tracking and following | |
CN112036457A (en) | Method and device for training target detection model and target detection method and device | |
Qing et al. | A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation | |
Makris et al. | A hierarchical feature fusion framework for adaptive visual tracking | |
Badrloo et al. | A novel region-based expansion rate obstacle detection method for MAVs using a fisheye camera | |
Vafadar et al. | A vision based system for communicating in virtual reality environments by recognizing human hand gestures | |
Pei et al. | Improved Camshift object tracking algorithm in occluded scenes based on AKAZE and Kalman | |
Mörwald et al. | Advances in real-time object tracking: Extensions for robust object tracking with a Monte Carlo particle filter | |
US11176680B2 (en) | Method of tracking object and apparatuses performing the same | |
De-Maeztu et al. | A temporally consistent grid-based visual odometry framework for multi-core architectures | |
Dong et al. | Ellipse regression with predicted uncertainties for accurate multi-view 3d object estimation | |
Wang et al. | Degree of nonlinearity (DoN) measure for target tracking in videos |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20171114 |