CN103106667A - Motion target tracing method towards shielding and scene change - Google Patents
- Publication number
- CN103106667A (publication); CN2013100397543A / CN201310039754A (application)
- Authority
- CN
- China
- Prior art keywords
- tracing
- kalman filter
- target
- feature
- surf
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a moving-object tracking method oriented toward occlusion and scene change, comprising the following steps: a. perform foreground motion detection on the input video sequence and extract the moving objects; b. if features of the tracked object have already been saved, go to step d; if not, complete template initialization of the target object, SURF feature extraction, and Kalman filter initialization from the region selected by the user; c. perform prediction-based tracking of the moving target with the Kalman filter until the video content ends, then go to step e; when occlusion occurs during tracking, go to step d; d. determine the tracked object with a matching method based on SURF features; when the feature matching stabilizes and the occlusion is judged to have ended, reinitialize the Kalman filter and return to step c; e. output and save the feature information of the target object. The method is a complete video moving-object tracking scheme for a fixed-background monocular camera; it can be packaged as software and is convenient to apply.
Description
Technical field
The invention belongs to the field of image processing and moving-object tracking. It specifically relates to a method that combines Kalman filtering with SURF features to achieve fast and accurate tracking of moving objects under occlusion and scene change.
Background art
Current methods for tracking moving objects in video sequences fall into the following categories:
The first is region-based tracking: the video objects of each frame are segmented first, and correspondences are then established between the segmented objects so as to track them. This method places very high demands on object segmentation; once segmentation fails in even one or a few frames of a video segment, tracking of the whole video object fails.
The second is the Graph Cuts method (also known as the Min-Cut/Max-Flow method), a classical image segmentation technique from which many current segmentation methods are derived. Because moving-object tracking generally starts from extraction of the moving foreground objects, this kind of region-based tracking is widely used. However, Graph Cuts cannot properly segment objects that occlude each other, so it performs poorly in scenes where occlusion occurs frequently.
The third is model-based tracking, which at present falls mainly into two classes: model-based human tracking and model-based vehicle tracking. This method first builds a model of the tracked object and then matches the model against the content of the video image to realize tracking; once the correspondence between the object's 2D image coordinates and its 3D coordinates has been obtained, the object can still be tracked through large angle changes by means of its 3D model. The method, however, requires sufficient prior knowledge of the tracked object in order to build an effective object model.
The content of the invention
The task of the present invention is to provide a moving-object tracking method oriented toward occlusion and scene change that can quickly and accurately track a specified moving object in video.
Its technical solution is:
A moving-object tracking method oriented toward occlusion and scene change comprises the following steps:
a. Perform foreground motion detection on the input video sequence and extract the moving objects; then enter step b.
b. If features of the tracked object have been saved, enter step d; if not, complete the template initialization of the target object, SURF feature extraction, and Kalman filter initialization from the region selected by the user; then enter step c.
c. Perform prediction-based tracking of the moving target with the Kalman filter until the video content ends, then enter step e; when occlusion occurs during tracking, enter step d.
d. Determine the tracked object with the matching method based on SURF features; when the feature matching stabilizes and the occlusion is judged to have ended, reinitialize the Kalman filter and enter step c.
e. Output and save the feature information of the target object.
In step a above, two reference frames I_bg(x,y) and I_up(x,y) are established: I_bg(x,y) is the background frame of the current scene, and I_up(x,y) is a reference frame continuously updated over time. The current frame I(x,y) is differenced and binarized against I_bg(x,y) and I_up(x,y) respectively; the results are denoted F_bg(x,y) and F_up(x,y), and legacy (abandoned) objects and moving objects in the scene are distinguished from the values of the two.
In step c above, the Kalman filter is initialized first, and prediction-based tracking then proceeds from the observed state of the target object. During tracking, the template image is adaptively updated according to changes in the contour of the target object, and representative feature information is saved. During tracking, whether occlusion occurs is modeled, analyzed, and judged with a criterion based on contour intersection.
In step d above, the video content is searched automatically for the foreground blob with the most feature-point matches to the tracked object. To handle the false matches caused by measurement error and noise, the RANSAC algorithm is applied after the SURF match pairs are obtained; it refines the matching and yields the homography matrix of the transformation between the images, thereby locating the target object in the video. Whether the occlusion has ended is judged with the same model used to judge whether occlusion occurs, and the Kalman filter is reinitialized by the same method as its original initialization.
The present invention can have the following advantageous effects:
The method combines the Kalman filter with SURF feature matching. On the one hand, when there is no occlusion or scene change, the Kalman filter completes prediction-based tracking quickly; on the other hand, when occlusion or scene change occurs, properties such as the scale invariance of SURF features effectively solve the target-tracking problem in situations where the Kalman filter fails. The method is therefore both fast and accurate, and because the target template is adaptively updated as the object's contour changes, its robustness is also good. By integrating legacy-object detection, the Kalman filter, SURF features, and occlusion judgment, the invention provides a complete video moving-object tracking scheme for a fixed-background monocular camera that can be packaged as software and is convenient to apply.
Brief description of the drawings
The present invention is further described below with reference to the accompanying drawings and an embodiment:
Fig. 1 is a schematic diagram of the foreground motion detection process in the present invention.
Fig. 2 is a schematic workflow diagram of the basic Kalman filter used in the present invention.
Fig. 3 is a schematic comparison of the scale spaces built by the SURF and SIFT algorithms used in the present invention.
Fig. 4 is a schematic diagram of the box filters of the SURF algorithm in three directions, as used in the present invention.
Fig. 5 is a flow block diagram of the invention.
Embodiment
To aid understanding and implementation of the present invention, the technical background it relies on is described first:
First, moving-object detection algorithms.
1. Temporal difference method
Temporal differencing subtracts adjacent frames of a video sequence and extracts the moving objects from the resulting pixel differences. The method is simple and convenient and suits extraction under a dynamic background, but the object contour it yields may be incomplete. For example, when a moving object moves very slowly and itself contains a large smooth region, differencing adjacent frames contributes nothing in the overlapping part, and the extracted contour shows "holes".
A current improvement replaces the two-frame difference with a three-frame difference, which better detects the contour of the moving target in the middle frame. Let three adjacent frames of the video sequence be I_{t-1}(x,y), I_t(x,y), I_{t+1}(x,y), and compute the pixel differences of the two adjacent frame pairs:
d_{(t,t-1)}(x,y) = |I_t(x,y) - I_{t-1}(x,y)|,  d_{(t+1,t)}(x,y) = |I_{t+1}(x,y) - I_t(x,y)|
The difference images are then binarized with a threshold T to obtain the binary images:
b_{(t,t-1)}(x,y) = 1 if d_{(t,t-1)}(x,y) > T, else 0 (and likewise b_{(t+1,t)}(x,y))
A logical AND of b_{(t,t-1)}(x,y) and b_{(t+1,t)}(x,y) yields the binary image B_t(x,y):
B_t(x,y) = b_{(t,t-1)}(x,y) AND b_{(t+1,t)}(x,y)
Finally, erosion, dilation, and similar processing are applied to the resulting binary image to remove noise and "holes".
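By way of illustration, a minimal Python/OpenCV sketch of the three-frame difference is given below; the threshold T and the 3×3 structuring element are assumed values, not ones fixed by the method.

```python
import cv2
import numpy as np

def three_frame_difference(prev, cur, nxt, T=25):
    """Moving-object mask of the middle frame from three consecutive
    grayscale frames: difference, binarize, AND, then clean up."""
    d1 = cv2.absdiff(cur, prev)                     # |I_t - I_{t-1}|
    d2 = cv2.absdiff(nxt, cur)                      # |I_{t+1} - I_t|
    _, b1 = cv2.threshold(d1, T, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, T, 255, cv2.THRESH_BINARY)
    B = cv2.bitwise_and(b1, b2)                     # logical AND of the binary maps
    kernel = np.ones((3, 3), np.uint8)              # assumed structuring element
    B = cv2.morphologyEx(B, cv2.MORPH_OPEN, kernel)   # erosion+dilation: remove noise
    B = cv2.morphologyEx(B, cv2.MORPH_CLOSE, kernel)  # dilation+erosion: fill holes
    return B
```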
2. Optical flow method
The basic principle of moving-object detection by optical flow is to assign a velocity vector to every pixel in the image. If the image contains no moving object, the optical-flow vectors vary continuously over the whole image region; if a moving object is present, there is relative motion between the object and the background, their velocity vectors differ, and the moving object can thereby be detected. Because optical-flow computation is quite complex and computationally heavy, it is rarely used in real-time systems.
3. Background modeling
Background modeling resembles the temporal difference method in that both difference two frames; the difference is that background modeling differences the current frame against a reference frame (the background frame). Background modeling is widely used in motion detection with static cameras. The choice of the background frame is the key to the whole algorithm: ideally it should contain no moving objects, and it should be updated under some strategy to adapt to dynamic changes of the scene such as illumination variation, swaying leaves in the background, ripples, or falling rain and snow. Existing background modeling methods fall mainly into six classes: incremental Gaussian averaging, temporal median filtering, Gaussian mixture models, kernel density estimation, sequential kernel density approximation, and eigenbackground models.
The motion detection used in the present invention is a simple and fast method based on the idea of background modeling: legacy-object (abandoned-object) detection, with reference to Fig. 1. Two reference frames I_bg(x,y) and I_up(x,y) are established: I_bg(x,y) is the background frame of the current scene, and I_up(x,y) is a reference frame continuously updated over time. The current frame I(x,y) is differenced and binarized against I_bg(x,y) and I_up(x,y) respectively; the results are denoted F_bg(x,y) and F_up(x,y), and legacy objects and moving objects in the scene are distinguished from the values of the two.
Second, the tracking algorithm based on the Kalman filter, with reference to Fig. 2.
1. Discrete Kalman filter
This is the moving-object tracking method the present invention uses when there is no occlusion and no scene change. The basic Kalman filter is a method for solving linear filtering and prediction problems under the minimum mean-square-error criterion; it is simple and fast.
Kalman filter prediction step:
X̂_k⁻ = F·X̂_{k-1} + B·u_{k-1},  P_k⁻ = F·P_{k-1}·Fᵀ + Q
Kalman filter update step:
K_k = P_k⁻·Hᵀ·(H·P_k⁻·Hᵀ + R)⁻¹,  X̂_k = X̂_k⁻ + K_k·(Z_k − H·X̂_k⁻),  P_k = (I − K_k·H)·P_k⁻
Here F is the state-transition matrix, H the measurement matrix, Z the observation, B the input-transformation matrix, and u the input value (some systems need no new input, so B and u can be omitted); Q and R are the covariances of the noise vectors of the state-transition and observation processes respectively. X̂_k⁻ denotes the best prediction of the state X_k made from time k−1 to time k; X̂_k denotes the state update of X_k made from the observation Z_k at time k together with the prediction X̂_k⁻ for this moment from the previous one. P is the covariance, with the same sub- and superscripts as X, and K is the Kalman gain.
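For concreteness, a minimal NumPy sketch of the two phases follows, omitting B and u as the text permits:

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Prediction: propagate the state estimate and its covariance."""
    x_pred = F @ x                          # X̂_k⁻ = F X̂_{k-1}
    P_pred = F @ P @ F.T + Q                # P_k⁻ = F P_{k-1} Fᵀ + Q
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z, H, R):
    """Update: correct the prediction with the observation z."""
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain K_k
    x = x_pred + K @ (z - H @ x_pred)       # X̂_k
    P = (np.eye(len(x)) - K @ H) @ P_pred   # P_k
    return x, P
```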
2. Extended Kalman filter
For suboptimal estimation with nonlinear state-space models, the most widely used class of methods is the extended Kalman filter (EKF). The basic idea of the EKF is to linearize the nonlinear system first and then proceed as with the linear Kalman filter. Concretely, the Taylor expansion of the nonlinear function is truncated, thereby linearizing it. According to whether the truncation is first- or second-order, the EKF divides mainly into the first-order EKF and the second-order EKF.
Although the EKF performs well on nonlinear models, it has clear disadvantages in practice: first, linearizing the nonlinear model may make the filtering unstable; second, computing the derivatives of the Jacobian matrix is complex to implement; third, the model functions encountered in practice may be non-differentiable, which can make the EKF fail. Hence, when the model is strongly nonlinear and the system noise is non-Gaussian, the estimation accuracy of the EKF drops markedly and may ultimately fail.
Third, the SIFT and SURF feature matching methods, with reference to Fig. 3. The left part of the figure shows the image pyramid built by the conventional method, where each layer is a down-sampling of the previous layer; the right part shows how SURF builds its scale space: the image stays unchanged, and only the size of the filter template changes.
SURF is an improved version of SIFT. It is far faster than SIFT at feature matching, so it extends further to real-time image matching scenarios. The present invention uses SURF for feature-point matching.
The SURF algorithm divides into four parts: scale-space construction, feature-point detection, feature-descriptor generation, and feature-point matching.
1. Scale-space construction
Scale-space construction in the SIFT algorithm
The traditional scale space is described as a pyramid, and the Gaussian convolution kernel is the only linear kernel that realizes scale change. Given an image I(x,y), its scale space is defined as:
L(x,y,δ) = G(x,y,δ) * I(x,y)   (1)
where G(x,y,δ) is the variable-scale Gaussian function,
G(x,y,δ) = (1 / (2πδ²)) · e^(−(x² + y²) / (2δ²))   (2)
(x,y) are the spatial coordinates and δ is the scale coordinate; the size of δ determines the smoothness of the image. With the above formulas, the number of octaves of the pyramid scale space, and the number of image layers in each octave, are finally determined by the size of the image. The first layer of the first octave is the original image, and each layer above is obtained by Gaussian convolution of the layer below with gradually increasing δ; intuitively, the higher the image, the blurrier it is.
To detect stable keypoints effectively in the scale space, Lowe et al. proposed the difference-of-Gaussian scale space (DoG scale space):
D(x,y,δ) = (G(x,y,kδ) − G(x,y,δ)) * I(x,y) = L(x,y,kδ) − L(x,y,δ)   (3)
Each layer of the DoG pyramid is obtained by subtracting two adjacent layers of the Gaussian pyramid, so the DoG pyramid has the same number of octaves as the Gaussian pyramid but one fewer layer per octave.
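To make the construction concrete, a minimal Python/OpenCV sketch of one DoG octave is given below; the layer count and the scale step k are assumed values:

```python
import cv2
import numpy as np

def dog_octave(img, delta0=1.6, k=2 ** 0.5, layers=5):
    """One octave of Gaussian images L(x,y,δ) with δ growing by factor k,
    and their adjacent differences D(x,y,δ) = L(x,y,kδ) - L(x,y,δ)."""
    gauss = [cv2.GaussianBlur(img, (0, 0), delta0 * k ** i).astype(np.float32)
             for i in range(layers)]
    # subtracting adjacent layers leaves one fewer DoG layer per octave
    return [gauss[i + 1] - gauss[i] for i in range(layers - 1)]
```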
Scale-space construction in the SURF algorithm
The way the SIFT algorithm builds its scale space has the shortcoming that every layer depends on the previous layer and the size of the image must be reset, so the amount of computation is large.
What SURF changes when building its image pyramid is not the image size but the size of the filter template. SURF can process in parallel when building the scale space and needs no sub-sampling of the image, which improves the computation speed. The difference between the scale spaces constructed by SURF and SIFT is shown in Fig. 3.
2. Feature-point detection
Given an image I(x,y), its integral image is
I_Σ(x,y) = Σ_{i≤x} Σ_{j≤y} I(i,j)   (4)
SURF feature-point detection uses the determinant of the Hessian matrix to judge whether a point of the image is an extremum. If f(x,y) is a twice-differentiable function, its Hessian matrix is
H = [ ∂²f/∂x²  ∂²f/∂x∂y ; ∂²f/∂x∂y  ∂²f/∂y² ]   (5)
The determinant of the matrix H,
det H = (∂²f/∂x²)·(∂²f/∂y²) − (∂²f/∂x∂y)²   (6)
is the product of the eigenvalues of H: if det H < 0, the point (x,y) is not a local extremum; if det H > 0, the point (x,y) is a local extremum. The Hessian matrix of the image I(x,y) at scale δ is then
H(x,y,δ) = [ L_xx(x,y,δ)  L_xy(x,y,δ) ; L_xy(x,y,δ)  L_yy(x,y,δ) ]   (7)
where L_xx(x,y,δ) is the convolution of the image at point (x,y) with the second-order Gaussian partial derivative ∂²G(δ)/∂x²; L_xy(x,y,δ) and L_yy(x,y,δ) are defined likewise.
Bay et al. discretize the convolution kernels sensibly, crop them, and replace them with box-filter templates, using the integral image to accelerate the convolution and reduce the amount of computation. The three cropped kernels are D_xx, D_yy, and D_xy, simplified versions of L_xx, L_yy, and L_xy. The 9×9 box-filter templates are shown in Fig. 4; the corresponding second-order Gaussian filter has scale factor δ = 1.2.
Because box filtering is only an approximation of second-order Gaussian filtering, the error it introduces in computing the Hessian determinant is compensated, giving
det H = D_xx·D_yy − (0.9·D_xy)²   (8)
The kernel with side length 9 is the smallest-scale kernel; as the scale grows, the size of the kernel (the filter template) grows proportionally. A filter template of size N×N corresponds to scale δ = N × 1.2 / 9.
After the extrema at each scale have been obtained from the Hessian determinant, each extremum is compared with its 8 neighbours at the same scale and the 9 points each at the scales above and below it; only when its value is the maximum or minimum of these 26 values is the extremum taken as a candidate feature point.
Finally, interpolation by the method of M. Brown yields the feature-point positions and their scale values at sub-pixel precision. Low-contrast feature points and unstable edge-response points are removed at the same time (because the DoG construction produces strong edge responses), to improve noise resistance and enhance matching stability.
Because the feature points are chosen as locally stable points in image scale space, they meet the requirements of feature matching under scale change.
SURF runs more efficiently than SIFT because it accelerates feature-point detection with the integral image and the Hessian matrix.
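The speed-up from the integral image can be seen in a few lines: once it is computed, the sum over any box follows from four lookups regardless of box size, which is what makes box filtering at growing scales cheap. A minimal sketch (the image contents here are random stand-in data):

```python
import cv2
import numpy as np

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in image
ii = cv2.integral(img)  # (H+1, W+1) integral image; ii[y, x] = sum of img[:y, :x]

def box_sum(ii, x, y, w, h):
    """Sum of the w×h box with top-left corner (x, y): four lookups,
    independent of the box size."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

assert box_sum(ii, 10, 20, 9, 9) == int(img[20:29, 10:19].sum())
```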
3. SURF feature-descriptor generation
Generating a SURF feature descriptor takes two steps: determining the dominant orientation and building the descriptor.
Determining the dominant orientation. To ensure the rotation invariance of the feature point, pixels are sampled with step δ in a neighbourhood of radius 6δ around the feature point (δ is the scale at the feature point), and the responses to Haar wavelet convolution kernels of side 4δ in the x and y directions are computed at the sample points. So that responses near the feature point contribute more and responses far from it contribute less, the wavelet responses are Gauss-weighted with σ = 2δ, and each is then represented as a point in a two-dimensional coordinate system, finally giving the distribution of all sample responses on the plane. A sliding window with a 60° central angle is then rotated in fixed steps; the responses inside each 60° range are summed to form a new vector, and the direction of the longest vector is chosen as the dominant orientation of this feature point.
Building the descriptor. Centred on the feature point, the coordinate axes are rotated to the dominant orientation. A square region of side 20δ is chosen and divided into 4×4 subregions. In each subregion the Haar responses of 25 sample points are computed, denoted d_x and d_y, and weighted by a Gaussian with σ = 3.3δ over each subwindow to obtain the accumulated values Σd_x, Σd_y, Σ|d_x|, Σ|d_y|. Each subregion thus forms a new four-dimensional vector, and since the square region contains 16 subwindows, each feature point forms a descriptor of 16 × 4 = 64 dimensions.
Through alignment with the dominant orientation, SURF features have rotation invariance and can be used for feature matching under rotation.
4. SURF feature-point matching
Feature points are matched by using the Euclidean distance between feature vectors as the similarity measure between keypoints in the two images. The Euclidean distance is:
d = sqrt( Σ_i (x_{i1} − x_{i2})² )   (9)
where x_{i1} denotes the i-th dimension of a point in the first image and x_{i2} the i-th dimension of a point in the second image.
The concrete practice is: take a keypoint in the first image and find the two keypoints nearest to it in Euclidean distance in the second image. If the ratio of the nearest distance to the second-nearest distance is below some threshold, the pair is accepted as a match. Raising this threshold increases the number of matches but lowers the accuracy; lowering it reduces the number of matches but raises the accuracy.
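As a sketch in Python/OpenCV: SURF lives in the non-free xfeatures2d module (opencv-contrib-python built with OPENCV_ENABLE_NONFREE, SIFT can be substituted if it is unavailable), and the file names and the 0.7 ratio threshold below are assumed values:

```python
import cv2

img1 = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical template image
img2 = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)     # hypothetical current frame

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Ratio test described above: accept a pair only when the nearest distance
# is clearly smaller than the second-nearest distance.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]               # 0.7 is an assumed threshold
```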
5. Deficiencies of the SURF algorithm in moving-object tracking
Although SIFT/SURF features are fairly stable and invariant to scaling, rotation, translation, and brightness changes of the image, their application to moving-object tracking still has shortcomings, mainly the following:
1) Because the scale space is built with an image-pyramid method, the layering may be imprecise, so scale matching may carry some error. When the original image itself is small, building the scale space has little effect on feature-point extraction.
2) Low-contrast points and unstable edge-response points are removed when SURF filters its feature points. If the image content contains a large smooth region, the feature information of that region is filtered out, and edge information that may be an important feature can likewise be lost.
3) SURF features are local features of the image content; they ignore the global information of the image itself.
4) The search strategy SURF uses in feature-point matching is inefficient and cannot fully exploit the positional relations between neighbouring feature points, which may cause false matches.
5) The SIFT/SURF algorithms use only the grayscale characteristics of the image and do not consider its inherent colour information.
6) Compared with Kalman-filter tracking, SURF requires far more computation; using SURF alone for tracking moving objects in video can hardly meet real-time requirements.
Fourth, the homography matrix.
The positioning of the target object uses the concept of the homography matrix. For two images A and B of the same space, if there is a one-to-one mapping from image A to image B, that mapping expressed as a matrix is a homography matrix. If the coordinates of a corresponding point in the two images are I(x, y, 1) and I′(x′, y′, 1) and the homography matrix is H, the projection relation is:
k·I′ = H·I   (1)
where k is a scale factor and H is in general a transformation matrix with 8 degrees of freedom. Setting h_9 = 1, (1) yields:
h_1·x + h_2·y + h_3 − h_7·x·x′ − h_8·y·x′ − x′ = 0   (2)
h_4·x + h_5·y + h_6 − h_7·x·y′ − h_8·y·y′ − y′ = 0   (3)
Thus the coordinates of only 4 pairs of matching points are needed to compute the homography matrix H.
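For illustration, a minimal NumPy sketch that stacks equations (2) and (3) for four point pairs into an 8×8 linear system and solves for H with h_9 = 1 (the example coordinates are made up):

```python
import numpy as np

def homography_from_4_pairs(src, dst):
    """Solve for H (h9 = 1) from 4 point pairs: two linear equations per
    pair, following (2) and (3) above."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Usage: recover the mapping from 4 corresponding corners.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 20), (110, 25), (115, 130), (8, 125)]
H = homography_from_4_pairs(src, dst)
```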
With reference to the above and to Fig. 1 and Fig. 5, the technical scheme of the invention is explained in detail below:
a. Perform foreground motion detection on the input video sequence and extract the moving objects.
In the application scenarios addressed by the present invention, a moving object staying in the same position for a long time, or an object from the background suddenly starting to move, are both situations that may occur. The present invention extracts moving objects with a legacy-object (abandoned-object) detection method.
Legacy detection differs from motion detection: it must not only detect objects absent from the original scene but also judge whether such an object has come to rest in the scene.
The concrete method is: establish two reference frames I_bg(x,y) and I_up(x,y). I_bg(x,y) is the background frame of the current scene (it contains no moving objects, consistent with the background frame of ordinary background modeling); I_up(x,y) is a reference frame continuously updated over time. With the current image frame I(x,y), the update rule is:
I_up(x,y) = (1 − α)·I_up(x,y) + α·I(x,y)   (1)
α is the update-speed weight: the larger α, the faster the update; the smaller α, the slower. Thus if a foreign object comes to rest in the scene, after some time it is absorbed into I_up(x,y) and becomes part of the "background". I(x,y) is differenced and binarized against I_bg(x,y) and I_up(x,y) respectively, and the results are denoted F_bg(x,y) and F_up(x,y). If, at a point (x,y), F_bg(x,y) = 1 and F_up(x,y) = 0, the point is judged to belong to a legacy object; legacy objects and moving objects in the scene can be distinguished precisely by the rules defined in Table 1.
Table 1: Decision rules of the legacy-object detection method

Case | F_bg(x,y) | F_up(x,y) | Judged type
---|---|---|---
I | 1 | 1 | Moving object
II | 1 | 0 | Temporarily stationary object (legacy)
III | 0 | 1 | Random noise
IV | 0 | 0 | Stationary object in the scene
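A minimal NumPy sketch of the update rule (1) and the Table 1 decision; the weight α and the binarization threshold T are assumed values, and I_bg, I_up are held as float32 arrays:

```python
import numpy as np

ALPHA, T = 0.02, 30   # update weight α and binarization threshold (assumed values)

def update_reference(I_up, I):
    """Eq. (1): I_up = (1-α)·I_up + α·I. An object that stops moving is
    gradually absorbed into this reference frame and becomes 'background'."""
    return (1.0 - ALPHA) * I_up + ALPHA * I

def classify_pixels(I, I_bg, I_up):
    """Per-pixel decision of Table 1 from the two binarized difference maps."""
    I = I.astype(np.float32)
    F_bg = np.abs(I - I_bg) > T
    F_up = np.abs(I - I_up) > T
    return {
        "moving":  F_bg & F_up,     # case I
        "legacy":  F_bg & ~F_up,    # case II: temporarily stationary object
        "noise":  ~F_bg & F_up,     # case III
        "static": ~F_bg & ~F_up,    # case IV
    }
```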
b. If features of the tracked object have been saved, enter step d directly; if not, complete the template initialization of the target object, SURF feature extraction, and Kalman filter initialization from the region selected by the user.
If features of the tracked object have been saved under the designated directory of the hard disk, a scene change has occurred. The template information and feature information of the target object saved under the designated directory are read in first, and a global search match is then carried out over the video frames. If the number of points of some moving object in a frame that match the template exceeds a defined threshold, the target object is considered to appear in this scene, and the Kalman filter must then be initialized for tracking.
If no features of the tracked object are saved under the designated directory of the hard disk, the user must select the target object by drawing with the mouse on the monitoring picture. The algorithm performs template initialization and SURF feature extraction on the selected target object.
The above-mentioned Kalman filter is initialized as follows:
1) Owing to the constraints of the practical application scenes of the invention, the speed of the target object in the scene generally does not change greatly, so the invention analyses and processes the object with the equations of uniform motion:
x_t = x_{t−1} + v_x   (2)
y_t = y_{t−1} + v_y   (3)
Formulas (2) and (3) are the object's equations of motion along the x- and y-axes respectively, with v_x and v_y the object's velocities in the two directions. At time t, the state of a moving object is represented as X_t = (x_t, y_t, v_x, v_y)ᵀ and the observation as Z_t = (x_t, y_t)ᵀ.
2) Comparing the Kalman filter model against the equations of motion, the state-transition matrix F and the measurement matrix H of the system are:
F = [1 0 1 0; 0 1 0 1; 0 0 1 0; 0 0 0 1],  H = [1 0 0 0; 0 1 0 0]   (4)
3) The covariance matrices Q and R of the noise vectors in the state-transition and observation processes of the Kalman filter used in the invention are assigned according to experiments in the practical application scene and the relevant references.
4) The covariance matrix P_0 of the initial system state vector is defined in the same way.
5) When the position of the camera is adjusted, the definitions of the above matrices Q, R, and P_0 can be adjusted accordingly to the actual conditions. For the definition of the initial state X̂_0, the algorithm allows user-defined selection: the user draws with the mouse, at some moment, the target object to be tracked, and the system then initializes X̂_0 from the observed coordinates of the object's top-left corner and its velocity.
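A minimal sketch of this initialization: F and H follow from the uniform-motion model above, while the concrete Q, R, and P_0 values below are illustrative placeholders, since the patent sets them from scene-specific experiments. These matrices plug directly into the kalman_predict/kalman_update sketch given earlier.

```python
import numpy as np

# State X = (x, y, vx, vy)ᵀ, observation Z = (x, y)ᵀ, unit time step.
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # constant-velocity transition, eq. (4)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # only the position is observed

Q = np.eye(4) * 1e-2                  # process-noise covariance (assumed)
R = np.eye(2) * 1.0                   # measurement-noise covariance (assumed)
P0 = np.eye(4) * 10.0                 # initial state covariance (assumed)

def initial_state(x0, y0, vx0=0.0, vy0=0.0):
    """X̂_0 from the user-selected target's top-left corner and velocity."""
    return np.array([x0, y0, vx0, vy0], float)
```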
c. Perform prediction-based tracking of the moving target with the Kalman filter until the video content ends, then enter step e; when occlusion occurs during tracking, enter step d.
Once the Kalman filter has been initialized, the system can track the position of the target object by prediction. During tracking, the template image is adaptively updated according to changes in the contour of the target object, and representative feature information is saved.
The present invention decides in real time whether to update the target template and the SURF feature descriptors according to the contour change of the tracked object. Contour change here chiefly means the change in the total pixel area occupied by the target in the image, caused by changes of the target's position and orientation while the camera focal length remains fixed. That is, when
|A_m − A_n| > H   (7)
the target's contour is judged to have changed, where A_m and A_n are the pixel areas occupied by the target at times m and n respectively, and H is a threshold that can be adjusted to the concrete scene.
After judging that the pixel area occupied by the object has changed, the system goes on to check whether the aspect ratio of the target object has changed: if the current aspect ratio R_t of the target differs markedly from the aspect ratio R_m of the previously saved target template, the object is considered to have changed orientation and the template information of the target object must be updated, i.e. when
|R_t − R_m| > H_R,  R_t = W_t / H_t   (8)
where H_R is the threshold for judging that the target object's aspect ratio has changed markedly, and W_t and H_t are the width and height of the target object at time t.
Judging the onset of occlusion. The present invention uses a judgment method based on contour intersection, which is fast and effective; two kinds of occlusion must be handled: mutual occlusion between objects, and occlusion of the target object by the background.
When the target object is occluded, the pixel area enclosed by its contour increases or decreases sharply: a sharp increase indicates occlusion between objects, while a sharp decrease indicates the background occluding the object. Therefore, once the binary image of the moving-object contours has been obtained by motion detection, the following test reveals the moment at which occlusion begins:
|S_t − S_{t−1}| > T   (9)
where S_t and S_{t−1} are the pixel areas occupied by the target object at the current and previous moments respectively, and T is a set threshold that must be adapted to the camera focal length in the concrete scene.
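Tests (7)-(9) are plain threshold comparisons on contour statistics; a minimal sketch, with every threshold an assumed value rather than one fixed by the patent, might be:

```python
def contour_changed(A_m, A_n, H=500):
    """Eq. (7): |A_m - A_n| > H  ->  the target's pixel area has changed."""
    return abs(A_m - A_n) > H

def aspect_changed(W_t, H_t, R_m, H_R=0.3):
    """Eq. (8): |R_t - R_m| > H_R with R_t = W_t / H_t  ->  update the template."""
    return abs(W_t / H_t - R_m) > H_R

def occlusion_boundary(S_t, S_prev, T=800):
    """Eq. (9): a sharp jump in the target blob's area marks the start
    (or, later, the end) of an occlusion."""
    return abs(S_t - S_prev) > T
```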
d. Determine the tracked object with the matching method based on SURF features; when the feature matching stabilizes and the occlusion is judged to have ended, reinitialize the Kalman filter and enter step c.
The feature-point pairs matched between two images by the SURF algorithm are not perfectly reliable: among them are false matches caused by measurement error and noise. Therefore, after obtaining the SURF match pairs, the present invention refines the matching with the RANSAC algorithm and obtains the homography matrix of the transformation between the images, so as to locate the target object in the video.
RANSAC (random sample consensus) is an iterative method for estimating the parameters of a mathematical model. Its basic idea is to find, by random sampling and verification, the model parameters that the majority of the samples (here, the mutually matched feature-point pairs) can satisfy.
The concrete RANSAC procedure used in the present invention is:
1) Randomly select four pairs of SURF match points as the initial inlier set, and compute the transformation matrix H from these four pairs.
2) Judge whether points outside the inlier set can join it: compute the distance between I′ and H·I, and if the value is below the set threshold, add the point to the inlier set.
3) Repeat steps 1) and 2) N times, and choose the largest inlier set obtained as the final set of matching points. Finally, refit the transformation matrix H to this set by least squares.
In the above procedure, suppose the finally accepted matches form a proportion p of all initial SURF match pairs. Then the probability that four randomly selected pairs are not all correct matches is 1 − p⁴, the probability that the four initial pairs fail to be all correct in every one of N iterations is (1 − p⁴)^N, and the probability P of obtaining a correct transformation matrix H is:
P = 1 − (1 − p⁴)^N   (10)
In practice, to keep the iteration count N of the algorithm small while still obtaining the transformation matrix H with high probability, N is typically taken between 10 and 20.
Once the transformation matrix H has been obtained, mapping the four vertices of the target object in the template gives the approximate location of the target in the image; the computation is:
x_i′ = h_1·x_i + h_2·y_i + h_3   (11)
y_i′ = h_4·x_i + h_5·y_i + h_6   (12)
where (x_i′, y_i′) are the coordinates of the i-th vertex of the target object in the video image, and (x_i, y_i) are the coordinates of the i-th vertex of the target object in the template.
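A sketch of this step with OpenCV, reusing kp1, kp2, good, and img1 from the matching sketch above (assumed available): cv2.findHomography in RANSAC mode internally performs the sample-test-refit loop of steps 1)-3), and the 5.0-pixel reprojection threshold is an assumed value.

```python
import cv2
import numpy as np

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC homography: needs at least 4 pairs; mask flags the inliers.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Map the template's four corners into the frame -- equations (11)-(12)
# in homogeneous form, which is what perspectiveTransform computes.
h_t, w_t = img1.shape[:2]
corners = np.float32([[0, 0], [w_t, 0], [w_t, h_t], [0, h_t]]).reshape(-1, 1, 2)
target_quad = cv2.perspectiveTransform(corners, H)  # target's quadrilateral
```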
Judging the end of occlusion. This part uses the same method as judging whether occlusion occurs, i.e. testing whether the object contours intersect. When the occlusion ends, the pixel area occupied by the blob containing the observed target object in the binary image changes markedly, so formula (9) determines whether the occlusion has ended at a given moment.
When the occlusion ends, the system reinitializes the Kalman filter with the result of the preceding search matching, and the Kalman filter then continues the tracking by repeated prediction and updating.
e. Output and save the feature information of the target object.
According to the user's needs, the feature information of the target object is saved to the designated directory.
The significance of the present invention:
Tracking of video objects is a key technology of intelligent video processing and the premise of higher-level semantic operations such as behaviour recognition, event recognition, and identity recognition and processing. Fast and accurate tracking of video objects in complex scenes with occlusion and scene change is a research focus and difficulty at home and abroad. In recent years much research has addressed the tracking problem under occlusion; existing methods can solve parts of the problem, but no single method solves all of it well. For example, image-layering methods can solve occluded tracking, but their complexity is high and the real-time requirement is hard to meet; tracking based on colour or contour is hard to initialize and its object model is hard to update, so it is difficult to use in real systems; joint monitoring of the same scene with multiple cameras is a currently popular way to handle occlusion, but compared with single-camera tracking it is still immature, and both its implementation cost and its complexity are high. Compared with occlusion, tracking under scene change is more complex still; the literature in this field at home and abroad is sparse, and the main approach remains extending single-scene tracking algorithms to multiple scenes. The present invention needs no multiple cameras: a single camera suffices for fast tracking of moving objects in a fixed-background environment, and high accuracy and robustness are maintained under occlusion and scene change.
Special note: this work was supported by the National Natural Science Foundation of China (Grant 61170253) and the Innovative Research Team plan of the College of Information Science and Engineering, Shandong University of Science and Technology.
Technical content not addressed in the above can be realized by adopting or drawing on the prior art.
It should be noted that, under the teaching of this specification, those skilled in the art can also make various easy variations, such as equivalent or obvious modifications; all such variations fall within the protection scope of the present invention.
Claims (4)
1. A moving-object tracking method oriented toward occlusion and scene change, characterized by comprising the following steps:
a. Perform foreground motion detection on the input video sequence and extract the moving objects; then enter step b.
b. If features of the tracked object have been saved, enter step d; if not, complete the template initialization of the target object, SURF feature extraction, and Kalman filter initialization from the region selected by the user; then enter step c.
c. Perform prediction-based tracking of the moving target with the Kalman filter until the video content ends, then enter step e; when occlusion occurs during tracking, enter step d.
d. Determine the tracked object with the matching method based on SURF features; when the feature matching stabilizes and the occlusion is judged to have ended, reinitialize the Kalman filter and enter step c.
e. Output and save the feature information of the target object.
2. The moving-object tracking method oriented toward occlusion and scene change according to claim 1, characterized in that:
in step a above, two reference frames I_bg(x,y) and I_up(x,y) are established, where I_bg(x,y) is the background frame of the current scene and I_up(x,y) is a reference frame continuously updated over time; the current frame I(x,y) is differenced and binarized against I_bg(x,y) and I_up(x,y) respectively, the results are denoted F_bg(x,y) and F_up(x,y), and legacy objects and moving objects in the scene are distinguished from the values of the two.
3. The moving-object tracking method oriented toward occlusion and scene change according to claim 1 or 2, characterized in that:
in step c above, the Kalman filter is initialized first and prediction-based tracking then proceeds from the observed state of the target object; during tracking, the template image is adaptively updated according to changes in the contour of the target object, and representative feature information is saved; during tracking, whether occlusion occurs is modeled, analyzed, and judged with a criterion based on contour intersection.
4. The moving-object tracking method oriented toward occlusion and scene change according to claim 3, characterized in that:
in step d above, the video content is searched automatically for the foreground blob with the most feature-point matches to the tracked object; to handle the false matches caused by measurement error and noise, the RANSAC algorithm is applied after the SURF match pairs are obtained to refine the matching and obtain the homography matrix of the transformation between the images, thereby locating the target object in the video; whether the occlusion has ended is judged with the same model used to judge whether occlusion occurs; and the Kalman filter is reinitialized by the same method as its original initialization.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310039754.3A CN103106667B (en) | 2013-02-01 | 2013-02-01 | A kind of towards blocking the Moving Objects method for tracing with scene change |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103106667A true CN103106667A (en) | 2013-05-15 |
CN103106667B CN103106667B (en) | 2016-01-20 |
Family
ID=48314494
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310039754.3A Expired - Fee Related CN103106667B (en) | 2013-02-01 | 2013-02-01 | A kind of towards blocking the Moving Objects method for tracing with scene change |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103106667B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101170683A (en) * | 2006-10-27 | 2008-04-30 | 松下电工株式会社 | Target moving object tracking device |
CN102034114A (en) * | 2010-12-03 | 2011-04-27 | 天津工业大学 | Characteristic point detection-based template matching tracing method |
CN102881022A (en) * | 2012-07-20 | 2013-01-16 | 西安电子科技大学 | Concealed-target tracking method based on on-line learning |
Non-Patent Citations (2)
Title |
---|
Song Ning, "Research on Moving Object Detection and Tracking Based on SURF", China Master's Theses Full-text Database, Information Science and Technology, 15 December 2012 (2012-12-15) *
Li Jie, "Research on Moving Object Detection and Tracking Algorithms in Video Sequences", China Master's Theses Full-text Database, Information Science and Technology, 15 November 2010 (2010-11-15) *
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103268480A (en) * | 2013-05-30 | 2013-08-28 | 重庆大学 | System and method for visual tracking |
CN103428408B (en) * | 2013-07-18 | 2016-08-10 | 北京理工大学 | A kind of image digital image stabilization method being applicable to interframe |
CN103428408A (en) * | 2013-07-18 | 2013-12-04 | 北京理工大学 | Inter-frame image stabilizing method |
CN104573613A (en) * | 2013-10-16 | 2015-04-29 | 深圳市捷顺科技实业股份有限公司 | Video security anti-smashing method and device based on blob tracking |
CN104573613B (en) * | 2013-10-16 | 2018-05-01 | 深圳市捷顺科技实业股份有限公司 | A kind of Video security based on mass tracking is prevented pounding method and device |
CN103985138A (en) * | 2014-05-14 | 2014-08-13 | 苏州盛景空间信息技术有限公司 | Long-sequence image SIFT feature point tracking algorithm based on Kalman filter |
CN105469379A (en) * | 2014-09-04 | 2016-04-06 | 广东中星电子有限公司 | Video target area shielding method and device |
CN105469379B (en) * | 2014-09-04 | 2020-07-28 | 广东中星微电子有限公司 | Video target area shielding method and device |
CN105938622A (en) * | 2015-03-02 | 2016-09-14 | 佳能株式会社 | Method and apparatus for detecting object in moving image |
US10417773B2 (en) | 2015-03-02 | 2019-09-17 | Canon Kabushiki Kaisha | Method and apparatus for detecting object in moving image and storage medium storing program thereof |
CN107257980A (en) * | 2015-03-18 | 2017-10-17 | 英特尔公司 | Local change in video detects |
CN105139424A (en) * | 2015-08-25 | 2015-12-09 | 四川九洲电器集团有限责任公司 | Target tracking method based on signal filtering |
CN105139424B (en) * | 2015-08-25 | 2019-01-18 | 四川九洲电器集团有限责任公司 | Method for tracking target based on signal filtering |
CN106228577B (en) * | 2016-07-28 | 2019-02-19 | 西华大学 | A kind of dynamic background modeling method and device, foreground detection method and device |
CN106228577A (en) * | 2016-07-28 | 2016-12-14 | 西华大学 | A kind of dynamic background modeling method and device, foreground detection method and device |
CN107808393B (en) * | 2017-09-28 | 2021-07-23 | 中冶华天南京电气工程技术有限公司 | Target tracking method with anti-interference performance in intelligent video monitoring field |
CN107808393A (en) * | 2017-09-28 | 2018-03-16 | 中冶华天南京电气工程技术有限公司 | There is the method for tracking target of anti-interference in field of intelligent video surveillance |
CN107993255B (en) * | 2017-11-29 | 2021-11-19 | 哈尔滨工程大学 | Dense optical flow estimation method based on convolutional neural network |
CN107993255A (en) * | 2017-11-29 | 2018-05-04 | 哈尔滨工程大学 | A kind of dense optical flow method of estimation based on convolutional neural networks |
CN110349182A (en) * | 2018-04-07 | 2019-10-18 | 苏州竺星信息科技有限公司 | A kind of personage's method for tracing based on video and positioning device |
CN108734109B (en) * | 2018-04-24 | 2020-11-17 | 中南民族大学 | Visual target tracking method and system for image sequence |
CN108734109A (en) * | 2018-04-24 | 2018-11-02 | 中南民族大学 | A kind of visual target tracking method and system towards image sequence |
CN108921879A (en) * | 2018-05-16 | 2018-11-30 | 中国地质大学(武汉) | The motion target tracking method and system of CNN and Kalman filter based on regional choice |
CN111325217A (en) * | 2018-12-14 | 2020-06-23 | 北京海益同展信息科技有限公司 | Data processing method, device, system and medium |
CN111325217B (en) * | 2018-12-14 | 2024-02-06 | 京东科技信息技术有限公司 | Data processing method, device, system and medium |
CN110060276A (en) * | 2019-04-18 | 2019-07-26 | 腾讯科技(深圳)有限公司 | Object tracking method, tracking process method, corresponding device, electronic equipment |
CN110060276B (en) * | 2019-04-18 | 2023-05-16 | 腾讯科技(深圳)有限公司 | Object tracking method, tracking processing method, corresponding device and electronic equipment |
CN113468931A (en) * | 2020-03-31 | 2021-10-01 | 阿里巴巴集团控股有限公司 | Data processing method and device, electronic equipment and storage medium |
CN112085769A (en) * | 2020-09-09 | 2020-12-15 | 武汉融氢科技有限公司 | Object tracking method and device and electronic equipment |
CN112287867A (en) * | 2020-11-10 | 2021-01-29 | 上海依图网络科技有限公司 | Multi-camera human body action recognition method and device |
CN115082509A (en) * | 2022-08-22 | 2022-09-20 | 成都大公博创信息技术有限公司 | Method for tracking non-feature target |
CN115082509B (en) * | 2022-08-22 | 2022-11-04 | 成都大公博创信息技术有限公司 | Method for tracking non-feature target |
Also Published As
Publication number | Publication date |
---|---|
CN103106667B (en) | 2016-01-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103106667A (en) | Motion target tracing method towards shielding and scene change | |
US11763485B1 (en) | Deep learning based robot target recognition and motion detection method, storage medium and apparatus | |
CN105335986B (en) | Method for tracking target based on characteristic matching and MeanShift algorithm | |
Dame et al. | Dense reconstruction using 3D object shape priors | |
Park et al. | Comparative study of vision tracking methods for tracking of construction site resources | |
CN111210477B (en) | Method and system for positioning moving object | |
WO2019057179A1 (en) | Visual slam method and apparatus based on point and line characteristic | |
CN104851094A (en) | Improved method of RGB-D-based SLAM algorithm | |
CN103268616A (en) | Multi-feature multi-sensor method for mobile robot to track moving body | |
CN104036523A (en) | Improved mean shift target tracking method based on surf features | |
Qu et al. | Evaluation of SIFT and SURF for vision based localization | |
CN104794737A (en) | Depth-information-aided particle filter tracking method | |
CN112052802A (en) | Front vehicle behavior identification method based on machine vision | |
Iraei et al. | Object tracking with occlusion handling using mean shift, Kalman filter and edge histogram | |
CN102289822A (en) | Method for tracking moving target collaboratively by multiple cameras | |
Zhao et al. | A robust stereo feature-aided semi-direct SLAM system | |
CN111027586A (en) | Target tracking method based on novel response map fusion | |
CN110826575A (en) | Underwater target identification method based on machine learning | |
Prisacariu et al. | Robust 3D hand tracking for human computer interaction | |
CN103985141B (en) | Method for tracking target based on hsv color covariance feature | |
Min et al. | Coeb-slam: A robust vslam in dynamic environments combined object detection, epipolar geometry constraint, and blur filtering | |
CN102663773A (en) | Dual-core type adaptive fusion tracking method of video object | |
Fangfang et al. | Real-time lane detection for intelligent vehicles based on monocular vision | |
Han et al. | GardenMap: Static point cloud mapping for Garden environment | |
CN116429087A (en) | Visual SLAM method suitable for dynamic environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20160120; Termination date: 20190201 |