CN102184551A - Automatic target tracking method and system by combining multi-characteristic matching and particle filtering - Google Patents

Automatic target tracking method and system by combining multi-characteristic matching and particle filtering

Info

Publication number
CN102184551A
CN102184551A (publication); CN2011101189182A, CN201110118918A (application)
Authority
CN
China
Prior art keywords
target
particle
tracking
particle filter
image
Prior art date
Legal status
Pending
Application number
CN2011101189182A
Other languages
Chinese (zh)
Inventor
魏颖
吴迪
贾同
Current Assignee
Northeastern University China
Original Assignee
Northeastern University China
Priority date
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN2011101189182A priority Critical patent/CN102184551A/en
Publication of CN102184551A publication Critical patent/CN102184551A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an automatic target tracking method and system combining multi-feature matching and particle filtering. The tracking system comprises a video acquisition module, a tracking-algorithm computation module and an output control module. The video acquisition module initializes the capture card and acquires images in real time. The tracking-algorithm computation module provides three tracking modes: particle filter tracking based on gray-template region matching, particle filter tracking based on color probability distribution, and particle filter tracking based on SIFT (scale-invariant feature transform) feature matching, realizing target tracking in translation space and affine space. The output control module converts the center of the tracked target position into a control command sent to the pan-tilt head, so that the camera follows the moving target.

Description

Automatic target tracking method and system combining multi-feature matching and particle filtering
Technical field
The present invention relates to a target tracking method and system combining multi-feature matching and particle filtering, and in particular to target tracking in translation space and affine space using three methods: particle filter tracking based on gray-template region matching, particle filter tracking based on color probability distribution, and particle filter tracking based on SIFT feature matching.
Background technology
Target tracking results contain a large amount of spatio-temporal information about the moving elements in a scene, and are widely applied in military visual guidance, robot visual navigation, security monitoring, traffic control, medical diagnosis, virtual reality, battlefield warning, public safety surveillance, video compression, meteorological analysis and many other fields.
Since the 1980s, researchers in China and abroad have proposed many algorithms for target tracking in video. Visual tracking methods can be divided into the following four classes:
(1) Region-based tracking. Its advantage is that when the target is not occluded, tracking accuracy is high and tracking is very stable. Its drawbacks are, first, that it is time-consuming, especially when the search region is large; and second, that it requires the target deformation to be small and occlusion to be limited, otherwise the correlation accuracy drops and the target may be lost.
(2) Feature-based tracking. Its advantage is that even if part of the target is occluded, the tracking task can be completed as long as some features remain visible. The difficulty of this approach is determining a unique feature set for a given moving target, which is itself a pattern-recognition problem; if too many features are used, system efficiency drops and errors are easily produced.
(3) Deformable-template tracking. The most commonly used deformable template is the active contour model proposed by Kass in 1987, also called the Snake model. The Snake model is well suited to tracking deformable objects, and tracks better when combined with Kalman filtering, but it is mainly suited to single-target tracking.
(4) Model-based tracking. Its advantage is that the 3-D motion trajectory of the target can be evaluated accurately, and tracking remains reliable even when the object's pose changes. Its drawback is that the accuracy of the motion analysis depends on the accuracy of the geometric model, and obtaining precise geometric models of all moving targets in real life is very difficult. This limits the use of model-based tracking algorithms; in addition, 3-D-model-based tracking algorithms often require a large amount of computation time, making real-time motion tracking difficult to achieve.
The idea of Bayesian video target tracking is to convert the tracking problem into a Bayesian estimation problem: given the prior probability of the target state, the maximum a posteriori probability of the state is solved repeatedly as new measurements arrive. The particle filter is a practical algorithm for solving this Bayesian probability. It realizes recursive Bayesian filtering through non-parametric Monte Carlo simulation, applies to any nonlinear system that can be represented by a state-space model, including nonlinear systems that the traditional Kalman filter cannot represent, and its accuracy can approach the optimal estimate. The particle filter is flexible to use, easy to implement with a parallel structure, and quite practical; it has more practical value than traditional Bayesian filters (Kalman filter, grid filter, etc.). By analyzing the tracking problem, a more effective and faster particle filter algorithm can be constructed.
The present invention describes a target tracking method and system combining multi-feature matching and particle filtering, and builds a particle-filter-based tracking framework together with a software and hardware implementation. The core content covers target features and their extraction at three different levels, using gray-level features, color features and SIFT features to characterize the target, and implements particle filter tracking based on each of the three. A BL-E854CB network video camera, a DR68-series pan-tilt head, a Hengyi video capture card and a PC form a complete visual tracking system; VC++6.0 is used to implement particle filter tracking based on gray-template region matching, particle filter tracking based on color probability distribution, particle filter tracking based on SIFT feature matching, and CamShift tracking. The system can extract and track targets automatically or semi-automatically, and achieves stable tracking even when the moving target undergoes large changes of scale, rotation, affine deformation, brightness or contrast, or is partially occluded.
Summary of the invention
The invention provides a target tracking method and system combining multi-feature matching and particle filtering, building a particle-filter-based tracking framework and realizing it in software and hardware.
Particular content of the present invention is as follows:
(1) Proposal and design of particle filter target tracking methods based on three kinds of features
The core of the invention is the design of particle filter target tracking algorithms based on three kinds of features: particle filter tracking based on gray-template region matching, particle filter tracking based on color probability distribution, and particle filter tracking based on SIFT feature matching.
1. Particle filter tracking based on gray-template region matching
Particle filter tracking based on gray-template region matching uses the gray template of traditional region-matching tracking as the target description: the estimate of the target parameters is the weighted sum of the particles, and each particle's weight is proportional to its matching value. Combining region matching with particle filter tracking retains the intuitive, practical character of region-matching tracking while gaining the "multi-modal" advantage of the particle filter, greatly improving tracking robustness; it is also a practical way of searching for motion parameters in affine space.
2. Particle filter tracking based on color probability distribution
Color is one of the most basic perceptual features of a target. Like contour, corner and texture features, color is a low-level feature of the target; it matches human perception directly and needs no complicated semantic description.
Color is a strong descriptor that often simplifies distinguishing and extracting objects from a scene. People can distinguish thousands of shades of color and brightness, but by contrast only a few dozen gray levels, which is why computer vision tends to transform images from gray space into a color space for processing. For tracking non-rigid objects (such as human bodies), color features are especially suitable.
The present invention approaches this from another angle, combining the "multi-modal" advantage of the particle filter: first the color histogram of the target is computed, then the histogram is back-projected to obtain a color probability distribution image, on which particle filter prediction and observation are carried out. Each particle represents one possible target pose; its weight (the correlation between the measurement and the true pose) is computed, and the weighted sum of the particles represents the estimate of the target pose.
3. Particle filter tracking based on SIFT feature matching
The SIFT (scale-invariant feature transform) matching algorithm is currently one of the more successful algorithms in feature-matching research at home and abroad. Its matching ability is strong: it extracts stable features and can handle matching between two images under translation, rotation, affine transformation, viewpoint change and illumination change; even images taken from arbitrary angles retain fairly stable matching ability to some extent, so features can be matched between two images that differ greatly. SIFT matching is rarely used in tracking, mainly because its computational complexity is very high, which limits its application in tracking, where real-time performance is demanded.
The present invention proposes a particle filter tracking technique based on SIFT feature matching. Instead of computing over the entire image, as traditional SIFT matching does, it uses the prediction of the particle filter to compute SIFT features only in the neighborhood of the target. This removes much unnecessary computation and reduces system running time, while keeping a degree of tracking stability under brightness changes, viewpoint changes, affine transformation and noise.
(2) Hardware and software design of the target tracking system
The invention uses co-operating software and hardware to build the target tracking system. A BL-E854CB network video camera, a DR68-series pan-tilt head and a Hengyi video capture card form the tracking hardware; the connection diagram is shown in Fig. 1. The software comprises the video acquisition module, the tracking-algorithm computation module and the output control module, all implemented on a PC in VC++6.0; the software structure is shown in Fig. 2.
Description of drawings
Fig. 1 is the equipment connection block diagram
Fig. 2 is system software structure figure
Fig. 3 is the particle filter tracking algorithm flow chart based on gray scale template zone coupling
Fig. 4 is the particle filter algorithm process flow diagram based on the color probability distribution
Fig. 5 is the particle filter tracking algorithm flow chart based on the SIFT characteristic matching
Fig. 6 is the equipment pictorial diagram
Fig. 7 is control area figure
Fig. 8 is software initial interface figure
Fig. 9 is the main-thread flow chart
Figure 10 is the sub-thread 1 flow chart
Figure 11 shows the sub-thread 2 and sub-thread 3 flow charts, where (a) is the sub-thread 2 flow chart and (b) is the sub-thread 3 flow chart
Embodiment
The present invention proposes a target tracking method and system combining multi-feature matching and particle filtering, building a particle-filter-based tracking framework and realizing it in software and hardware.
Specific embodiments of the invention are as follows:
(1) The proposed particle filter target tracking methods based on three kinds of features
The core of the invention is the particle filter target tracking algorithms proposed for three kinds of features: particle filter tracking based on gray-template region matching, particle filter tracking based on color probability distribution, and particle filter tracking based on SIFT feature matching. The details are as follows.
1. Particle filter tracking algorithm based on gray-template region matching
Particle filter tracking based on gray-template region matching uses the gray template of traditional region-matching tracking as the target description: the estimate of the target parameters is the weighted sum of the particles, and each particle's weight is proportional to its matching value. Combining region matching with particle filter tracking retains the intuitive, practical character of region-matching tracking while gaining the "multi-modal" advantage of the particle filter, greatly improving tracking robustness; it is also an effective way of searching for motion parameters in affine space.
It should be noted that all the methods considered in the invention aim at solving the motion parameters of the target, i.e. its position, angle, scale and so on. To show the advantage of the particle filter over the traditional region-matching algorithm, the invention considers tracking in affine space. The affine model has six parameters; the tangential scale SXY is not considered here, leaving five parameters T = (TX, TY, SX, SY, θ), where TX and TY are the horizontal and vertical coordinates of the target center, SX and SY are the scales in the horizontal and vertical directions, and θ is the rotation angle of the target relative to the template. A particle thus represents one possible motion state of the target, i.e. one set of possible motion parameters T. From a particle's motion parameters, the corresponding deformation of the target template is obtained; the matching value between this deformed template and the real image is computed, the particle is given a weight proportional to the matching value, and the weighted particles represent the posterior probability of the target state.
(1) Target prior knowledge
The target prior comprises building the gray template of the target and initializing the particles.
In the initial frame, an initial description of the target can be obtained by automatic detection methods such as frame differencing, or by human-machine interaction. In the present invention, a rectangular box represents the target pose. The pose of a rectangular box has five parameters:

X = (cx, cy, h, w, θ)    (1)

where cx and cy are the center coordinates of the box, h and w are its height and width, and θ is the angle between the h direction of the box and the x axis, initialized to 90°.
Thus, at the start of tracking (time k_0), the target prior comprises an m × n image template f(a, b) (a = 1…m, b = 1…n) and the initial motion parameters of the target:

T_init = (TX_init, TY_init, θ_init, SX_init, SY_init)    (2)

Take Ns particles, each with initial weight ω^i = 1. Each particle represents one possible motion state of the target and has five parameters:

T^i = (TX^i, TY^i, θ^i, SX^i, SY^i),  i = 1, 2, …, Ns    (3)

The initial particle parameters are taken as:

TX^i = TX_init + b1·ξ,  TY^i = TY_init + b2·ξ,  θ^i = θ_init + b3·ξ,
SX^i = SX_init + b4·ξ,  SY^i = SY_init + b5·ξ,  i = 1…Ns    (4)

where ξ is a random number in [-1, 1] and b1, b2, b3, b4, b5 are constants.
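As an illustration (the patent's own implementation is in VC++6.0), the particle initialization of equations (3)-(4) can be sketched in Python with NumPy; the function name, particle count and spread constants below are illustrative assumptions:

```python
import numpy as np

def init_particles(init_state, spread, n_particles=100, rng=None):
    """Initialize Ns particles around the initial motion parameters, eqs. (3)-(4).

    init_state: (TX, TY, theta, SX, SY) of the selected target box.
    spread:     per-parameter constants b1..b5 scaling the uniform noise.
    Each particle i gets T^i = T_init + b * xi with xi ~ U[-1, 1],
    and an initial weight of 1 (all particles equally important).
    """
    rng = np.random.default_rng() if rng is None else rng
    init_state = np.asarray(init_state, dtype=float)
    spread = np.asarray(spread, dtype=float)
    xi = rng.uniform(-1.0, 1.0, size=(n_particles, init_state.size))
    particles = init_state + spread * xi   # (Ns, 5) array of candidate states
    weights = np.ones(n_particles)
    return particles, weights
```
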
(2) System state transfer
At each subsequent time k_t (t > 0), the state of each particle is predicted using the system state-transfer equation.
Take the first-order auto-regressive (AR) equation:

x_t = A·x_{t-1} + B·w_{t-1}    (5)

i.e. for particle i:

TX_t^i = A1·TX_{t-1}^i + B1·w_{t-1},  TY_t^i = A2·TY_{t-1}^i + B2·w_{t-1},  θ_t^i = A3·θ_{t-1}^i + B3·w_{t-1},
SX_t^i = A4·SX_{t-1}^i + B4·w_{t-1},  SY_t^i = A5·SY_{t-1}^i + B5·w_{t-1},  i = 1…Ns    (6)

where A1…A5 and B1…B5 are constants and w_{t-1} is a random number in [-1, 1].
In the simple case, take A1 = A2 = A3 = A4 = A5 = 1, and call B1…B5 the particle propagation radii. State transfer then simply superimposes a disturbance on each state quantity.
When the state propagation of the target involves velocity or acceleration, a second-order AR model is adopted, which can be expressed as:

x_t = A·x_{t-2} + B·x_{t-1} + C·w_{t-1}    (7)

where A, B and C are constants and w_{t-1} is a random number in [-1, 1].
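A minimal sketch of the two transfer models of equations (5)-(7), under the simple-case assumption A = 1 for the first-order form (function names and constants are illustrative):

```python
import numpy as np

def propagate_ar1(particles, radius, rng=None):
    """First-order AR state transfer, eq. (5)-(6) with A1..A5 = 1:
    x_t = x_{t-1} + B * w, where B holds the propagation radii B1..B5
    and w ~ U[-1, 1] is drawn per particle and per parameter."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.uniform(-1.0, 1.0, size=np.shape(particles))
    return np.asarray(particles, dtype=float) + np.asarray(radius, dtype=float) * w

def propagate_ar2(prev2, prev1, a, b, c, rng=None):
    """Second-order AR state transfer, eq. (7): x_t = A x_{t-2} + B x_{t-1} + C w.
    Choosing A = -1, B = 2 gives a constant-velocity prediction 2 x_{t-1} - x_{t-2}."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.uniform(-1.0, 1.0, size=np.shape(prev1))
    return a * np.asarray(prev2, dtype=float) + b * np.asarray(prev1, dtype=float) + c * w
```
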
(3) System observation
After each particle has propagated, it can be observed: the similarity between the possible target state the particle represents and the true target state is measured, particles close to the true state receive larger weights, and the others smaller weights.
The minimum mean absolute difference (MAD) function is taken as the similarity measure, giving a similarity value MAD^i for each particle, i = 1…Ns. The angle of the original template is 90°; for particle i, since its angle θ is a prediction, it can point in any direction and is generally not equal to 90°, so before computing MAD^i the region covered by particle i must first be rotated back to 90° and the similarity value then computed with the MAD function.
The observation probability density function is defined as:

p(z_k | x_k^i) = exp{ -MAD^i / (2σ²) }    (8)

where σ is a constant; the formula applies Gaussian modulation to the matching value.
The weight of each particle is then computed recursively:

w_k^i = w_{k-1}^i · p(z_k | x_k^i)    (9)

Finally, the particle weights are normalized.
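The observation step of equations (8)-(9) can be sketched as follows (the helper names and the default σ are illustrative assumptions; the rotation of the particle region back to 90° is assumed to have been done before the MAD is taken):

```python
import numpy as np

def mad(template, region):
    """Mean absolute difference between the gray template and a candidate region
    of the same size (the region already rotated back to the template angle)."""
    t = np.asarray(template, dtype=float)
    r = np.asarray(region, dtype=float)
    return float(np.mean(np.abs(t - r)))

def observation_weights(mads, prev_weights, sigma=10.0):
    """Gaussian modulation and recursive weight update, eqs. (8)-(9):
    p(z|x^i) = exp(-MAD^i / (2 sigma^2)); w_k^i = w_{k-1}^i * p, then normalize."""
    likelihood = np.exp(-np.asarray(mads, dtype=float) / (2.0 * sigma ** 2))
    w = np.asarray(prev_weights, dtype=float) * likelihood
    return w / w.sum()
```
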
(4) Posterior probability calculation
The posterior probability at time k_t, i.e. the target parameters sought in tracking,

T_t^opt = (TX_t^opt, TY_t^opt, θ_t^opt, SX_t^opt, SY_t^opt)

can be represented by the weighted sum of the particles, that is:

TX_t^opt = Σ_{i=1}^{Ns} ω_t^i·TX_t^i,  TY_t^opt = Σ_{i=1}^{Ns} ω_t^i·TY_t^i,  θ_t^opt = Σ_{i=1}^{Ns} ω_t^i·θ_t^i,
SX_t^opt = Σ_{i=1}^{Ns} ω_t^i·SX_t^i,  SY_t^opt = Σ_{i=1}^{Ns} ω_t^i·SY_t^i    (10)

where ω_t^i is the normalized weight of particle i and TX_t^i etc. are the state parameters of particle i.
One tracking step is now complete; the next tracking step starts again from the system state transfer.
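The weighted-sum output of equation (10) is a one-liner over the particle array (function name illustrative):

```python
import numpy as np

def posterior_estimate(particles, weights):
    """Posterior output of eq. (10): each target parameter is the weighted sum
    of the corresponding particle parameter, using normalized weights.
    particles: (Ns, d) array; weights: (Ns,) array; returns a (d,) state."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(particles, dtype=float)
```
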
(5) Particle resampling
The basic problem of the sequential importance sampling algorithm is particle degeneracy: after a few iteration steps, the weights of many particles become very small, only a few particles retain large weights, and a large amount of computation is wasted on the low-weight particles.
Particle resampling derives new particles from the high-weight particles to replace particles whose weights have become too small. Only a threshold needs to be defined: once the weight of a particle falls below this lower bound, the process is carried out, and the weight of each "offspring" particle is reset to 1. Resampling is independent of the other processes (state transfer, system observation, target description), and is not described again in the following sections.
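A sketch of the threshold rule described above (the threshold value, function name and the choice of drawing parents in proportion to the weights are illustrative assumptions; systematic resampling of all particles every frame is a common alternative):

```python
import numpy as np

def resample_low_weight(particles, weights, threshold=0.01, rng=None):
    """Replace degenerate particles: every particle whose normalized weight
    falls below `threshold` is replaced by a copy of a particle drawn in
    proportion to the weights, and the offspring's weight is reset to 1."""
    rng = np.random.default_rng() if rng is None else rng
    particles = np.asarray(particles, dtype=float).copy()
    w = np.asarray(weights, dtype=float)
    p = w / w.sum()
    low = p < threshold
    if low.any():
        parents = rng.choice(len(p), size=low.sum(), p=p)
        particles[low] = particles[parents]
    new_weights = np.where(low, 1.0, weights)   # offspring weights reset to 1
    return particles, new_weights
```
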
(6) Flow of the particle filter tracking algorithm based on gray-template region matching
The function of each module of the particle filter tracking algorithm based on gray-template region matching has been described in detail above. Here, the invention implements this flow in VC++6.0; the main structure of the program is shown in Fig. 3.
First the particle number is determined and the motion model selected. The choice of particle number depends on the actual tracking requirement: in general, more particles give more stable and more accurate tracking, but also more computation, so a compromise can be chosen for the actual application, or the number adjusted dynamically. The motion model is chosen from a translation model with two parameters and an affine model with five parameters, covering the horizontal and vertical displacement, the scales and the rotation angle. Once the motion model is chosen, the particles are consistent with it and carry parameters of the same dimension.
Next the system checks whether a target has been selected. Target selection is either manual or automatic: manual selection means choosing a region on the screen with the mouse as the tracking target; automatic selection obtains the target to track with the image-differencing method. Once the target region is determined, the target template is built, the particle parameters are initialized, and all particle weights are set to 1 (i.e. all particles are equally important).
From the frame after particle initialization, the iterative particle filter process begins. In each frame, each particle undergoes state transfer and system observation, the particle weights are computed, and all particles are weighted to output the estimate of the target state. Finally particle resampling is carried out and the next iteration begins.
The flow chart of the proposed particle filter tracking algorithm based on gray-template region matching is shown in Fig. 3.
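The per-frame iteration described above can be exercised end to end in a one-dimensional toy (an illustration only: the image-dependent observation is replaced by a synthetic match value equal to the particle's distance from the true position, the state is a single coordinate instead of five parameters, and all particles are regenerated every frame instead of only the low-weight ones):

```python
import numpy as np

def run_particle_filter(true_positions, n_particles=500, radius=3.0, sigma=0.5, seed=0):
    """Toy 1-D run of the loop: initialize, then per frame propagate, observe,
    weight, estimate and resample."""
    tp = np.asarray(true_positions, dtype=float)
    rng = np.random.default_rng(seed)
    particles = tp[0] + rng.uniform(-1.0, 1.0, n_particles)   # initialization
    weights = np.ones(n_particles)
    estimates = []
    for z in tp[1:]:
        # system state transfer: first-order AR with A = 1, propagation radius B
        particles = particles + radius * rng.uniform(-1.0, 1.0, n_particles)
        # system observation: Gaussian modulation of the match value, eqs. (8)-(9)
        mads = np.abs(particles - z)
        weights = weights * np.exp(-mads / (2.0 * sigma ** 2))
        weights = weights / weights.sum()
        # posterior output: weighted sum of particle states, eq. (10)
        estimates.append(float(weights @ particles))
        # resampling (simple variant: regenerate all particles every frame)
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.ones(n_particles)
    return estimates
```
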
2. Particle filter tracking algorithm based on color probability distribution
The image histogram is a key analysis tool in image processing: it describes the gray-level content of an image, and the histogram of any image contains rich information; it is mainly used in processes such as image segmentation and image gray-level transformation. Mathematically, the image histogram is a function of the statistical properties of each gray value of the image, counting the number of times (or the probability with which) each gray level occurs; graphically, it is a two-dimensional plot whose abscissa is the gray level of the pixels and whose ordinate is the count or probability of pixels at each gray level.
(1) Color histogram statistics
RGB and HSV are the two most commonly used color-space models. Most digital images are expressed in the RGB color space, but the structure of RGB space does not match the human subjective judgement of color similarity. HSV space is closer to the human subjective understanding of color; its three components represent hue (H), saturation (S) and value (V, brightness).
The invention first transforms the pixels of the target region from RGB space into HSV space, then computes the one-dimensional histogram of the H component and normalizes it (i.e. maps the H component values into the range 0-1). The simplest and most common quantization is adopted: the H component is divided evenly into small intervals, each interval becoming one histogram bin. The number of bins depends on the performance and efficiency requirements of the application: more bins give the histogram stronger color resolution, but a very large number of bins increases the computational burden. Considering the real-time requirement of visual tracking and the range of the H component, the number of H bins is set to 48 in the system.
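A minimal sketch of the 48-bin normalized hue histogram described above, using Python's standard `colorsys` conversion (the function name is an assumption; a real implementation would operate on whole image arrays rather than a pixel list):

```python
import colorsys
import numpy as np

def h_histogram(rgb_pixels, n_bins=48):
    """Normalized 1-D hue histogram of the target region with 48 bins.
    rgb_pixels: iterable of (r, g, b) floats in [0, 1]. Hue from
    colorsys.rgb_to_hsv lies in [0, 1) and is quantized evenly."""
    hist = np.zeros(n_bins)
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        hist[min(int(h * n_bins), n_bins - 1)] += 1
    total = hist.sum()
    return hist / total if total else hist
```
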
(2) Computing the color probability distribution map
The purpose of computing the color histogram is to obtain the color probability distribution image; on this basis, the original video image must further be transformed through the color histogram into a color probability distribution image. Each pixel of the original video image is replaced by the statistic of the corresponding pixel in the color histogram, and the result is re-quantized to yield the color probability distribution image.
A pixel in the original video image describes light intensity, whereas a pixel in the color probability distribution map measures a "possibility": the probability that the moving target appears at that pixel position. All subsequent tracking operates on this color probability distribution image.
The invention computes the color probability distribution map as follows: each subsequently captured frame is first transformed from RGB space into HSV space; then, using the normalized color histogram of the target, the H component of each pixel position yields the pixel value at the corresponding position of the color probability distribution image:

B(i, j) = H(i, j) × Hist(h)    (11)

where B(i, j) is the pixel value in the color probability distribution map, H(i, j) is the H component of the captured image at pixel (i, j), and Hist(h) is the histogram value of the bin containing H(i, j). When all pixels have been computed, a color probability distribution image is obtained.
When the S component is very small, the computed H component varies greatly and causes large errors, so the invention sets a threshold: when the S component is below this threshold, the corresponding H component is set to 0; by experiment the threshold is set to 30. The corrected color probability distribution map is then inverted pixel by pixel, and finally every zero-valued pixel of the color probability distribution image is replaced by a very small positive value, so that the particle filter adapts well to scale changes when computing weights. The result is an image usable for particle filter tracking.
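A sketch of the back-projection step with the saturation threshold of 30 and the small positive floor described above (function name, channel scalings and floor value are illustrative assumptions; the sketch uses the plain histogram look-up, whereas scaling by the H value as eq. (11) literally states would be a one-line change, and the pixel-wise inversion step is omitted here):

```python
import numpy as np

def backproject(h_channel, s_channel, hist, n_bins=48, s_min=30, floor=1e-3):
    """Color probability map: each pixel is replaced by the histogram value of
    its hue bin; where saturation is below the threshold the probability is
    zeroed (unreliable hue), and zero pixels are raised to a small positive
    floor. Assumes hue in [0, 360) and saturation in [0, 255]."""
    h = np.asarray(h_channel, dtype=float)
    bins = np.clip((h / 360.0 * n_bins).astype(int), 0, n_bins - 1)
    prob = np.asarray(hist, dtype=float)[bins]      # histogram look-up per pixel
    prob[np.asarray(s_channel) < s_min] = 0.0       # suppress low-saturation pixels
    prob[prob == 0.0] = floor                       # avoid exact zeros for the filter
    return prob
```
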
(3) Particle filter tracking algorithm based on color probability distribution
From the analysis of the color probability distribution map, pixels whose color is close to the target color have larger probability values, and the pixel values at the target location have the largest probability values. Thus, once the target prior is available, a cluster of particles is spread from the initial position and then observed: for each particle, the sum of the pixel values within its state region is computed, modulated with an exponential function and normalized to give the particle's weight. It can be seen that the closer a particle is to the target state, the larger its weight. Finally the posterior output is computed, completing one particle filter tracking step. In scenes without very large background interference, real-time tracking is possible, and affine deformations can also be tracked well. The invention has researched and implemented this algorithm.
Adopting the color probability distribution model is equivalent to mapping the original image into a probability gray image; therefore, as long as the target color does not change too abruptly, the method is insensitive to edge occlusion, target rotation, deformation and background motion. This model is thus well suited to tracking in affine space.
The present invention tracks with an affine model; the goal of tracking is to solve for the target motion state T = (TX, TY, θ, SX, SY, SXY), where TX and TY are the target center in the x and y directions respectively, θ is the target rotation angle, and SX, SY and SXY are the target scales in the x direction, y direction and diagonal direction (for simplicity of calculation, SXY is not considered here).
Particle filter tracking based on the color probability distribution uses Bayesian recursive filtering with Monte Carlo simulation: the posterior probability of the target state T is represented by a weighted set of particles, each particle representing one possible motion state of the target. From a particle's motion parameters, the corresponding deformation of the target template is obtained; evaluating the ratio this deformed template achieves in the real image assigns the particle a weight, and the weighted particles represent the posterior probability of the target state.
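The weighting step described above can be sketched as follows. This is a hedged illustration: the rectangle parameterization (center plus half-extents, rotation omitted), the exact exponential modulation, and all names are assumptions, not the patent's exact formulas.

```python
import math

def particle_weights(particles, prob_map, sigma=10.0):
    """Weight each particle by the summed probability inside its rectangle.

    Each particle is (tx, ty, sx, sy): center and half-extents in pixels.
    The summed map values are modulated by an exponential and normalized,
    as described above: fuller rectangles (closer to the target) get
    larger weights.
    """
    weights = []
    for tx, ty, sx, sy in particles:
        total = 0.0
        for y in range(max(0, ty - sy), min(len(prob_map), ty + sy + 1)):
            for x in range(max(0, tx - sx), min(len(prob_map[0]), tx + sx + 1)):
                total += prob_map[y][x]
        area = (2 * sx + 1) * (2 * sy + 1)
        weights.append(math.exp(-sigma * (1.0 - total / (255.0 * area))))
    s = sum(weights)
    return [w / s for w in weights]
```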
For reasons of space, the target prior, state transition, system observation, posterior-probability calculation, particle resampling and other processes of the particle filter tracking system based on color probability distribution are not elaborated one by one; see the analysis and description in the section "Particle filter tracking algorithm based on gray-template region matching" of this specification.
The flow chart of the particle filter tracking algorithm based on color probability distribution is shown in Figure 4.
In this algorithm, the number of particles and the motion model are first determined. In the present invention, a five-parameter affine motion model is selected, comprising horizontal and vertical displacement, scale, and rotation angle. Once the motion model is chosen, the particles are consistent with it and have parameters of the same dimension.
Next it is judged whether a target has been selected; the target may be selected manually or automatically. After the target area is determined, the color histogram of the target template is built, the particle parameters are initialized, and all particle weights are set to 1 (i.e., all particles are equally important).
After particle initialization, subsequent frames enter the iterative process of the particle filter algorithm. In each frame, the image is first transformed from RGB space to HSV space and the color probability distribution map is computed; each particle then undergoes system state transition and system observation on the color probability distribution map, the particle weights are calculated, and all particles are weighted to output the estimate of the target state. Finally the particle resampling process is performed, and the algorithm moves to the next iteration.
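The per-frame iteration above can be sketched as one function. This is an illustrative skeleton under stated assumptions: the random-walk state transition, the pluggable `weights_fn` observation, and multinomial resampling via `random.choices` are simplifications of the patent's procedure, with rotation and scale transition omitted.

```python
import random

def track_frame(particles, weights_fn, spread=(5, 5)):
    """One iteration of the per-frame loop described above:
    state transition (random walk), observation/weighting, weighted
    state estimate, and resampling.  Each particle is (tx, ty, sx, sy)."""
    rx, ry = spread
    # state transition: diffuse each particle around its previous state
    moved = [(tx + random.randint(-rx, rx), ty + random.randint(-ry, ry), sx, sy)
             for tx, ty, sx, sy in particles]
    w = weights_fn(moved)                       # system observation
    # weighted estimate of the target state
    est = tuple(sum(p[i] * wi for p, wi in zip(moved, w)) for i in range(4))
    resampled = random.choices(moved, weights=w, k=len(moved))  # resampling
    return est, resampled
```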
3. Particle filter tracking algorithm based on SIFT feature matching
SIFT (Scale Invariant Feature Transform) matching is currently one of the more successful methods in the feature matching research field at home and abroad. It has strong matching capability and extracts stable features, handling matching between two images under translation, rotation, affine transformation, viewpoint transformation and illumination change; even images shot from arbitrary angles retain fairly stable matching capability to a certain extent. It can therefore match features between two images that differ greatly.
SIFT matching is seldom used in tracking, mainly because the computational complexity of SIFT features is very high, limiting its application to tracking techniques with demanding real-time requirements. The present invention proposes a particle filter tracking method based on SIFT feature matching. Unlike traditional SIFT matching, the method does not compute features over the entire image; instead it uses the predictive capability of the particle filter and computes SIFT features only in the target neighborhood, which significantly reduces unnecessary computation and thus reduces system run time.
(1) Computation process of SIFT feature matching
The SIFT feature is a local feature of the image; it is invariant to rotation, scale and brightness changes, and also maintains a degree of stability under viewpoint change, affine transformation and noise.
The principal characteristics of the SIFT algorithm:
A) The SIFT feature is a local feature of the image, invariant to rotation, scale and brightness changes, and stable to a certain degree under viewpoint change, affine transformation and noise.
B) Distinctiveness is good and the information content is rich, making it suitable for fast and accurate matching in massive feature databases.
C) Quantity: even a small number of objects can produce a large number of SIFT feature vectors.
D) Extensibility: it can very easily be combined with other forms of feature vectors.
1) Generation of SIFT feature vectors
Before generating the SIFT feature vectors, the image is first normalized, then enlarged to twice its original size and pre-filtered to remove noise, yielding the bottom of the Gaussian pyramid, i.e., layer 1 of octave 1. The algorithm for generating the SIFT feature vectors of an image comprises the following four steps:
A) Scale-space extremum detection
Scale-space theory appeared earliest in the field of computer vision, its original purpose being to simulate the multi-scale characteristics of image data. Koenderink subsequently used the diffusion equation to describe scale-space filtering, and thereby proved that the Gaussian kernel is the unique transformation kernel for realizing scale changes. Lindeberg, Babaud and others further proved, through different derivations, that the Gaussian kernel is the unique linear kernel.
The two-dimensional Gaussian kernel is defined in formula (12), where σ is the standard deviation of the Gaussian normal distribution:
G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))    (12)
For a two-dimensional image I(x, y), its scale-space representation L(x, y, σ) at different scales is obtained by convolving I(x, y) with the Gaussian kernel G(x, y, σ), as in formula (13):
L(x, y, σ) = G(x, y, σ) ∗ I(x, y)    (13)
To obtain feature points invariant across scale space, the image is convolved with Gaussian kernels of different scale factors to build a Gaussian pyramid. The pyramid has o octaves (generally 4 are chosen), each octave containing s scale layers (generally 5). Layer 1 of octave 1 is the original image magnified 2×, so as to obtain more feature points. Within one octave, the scale factors of adjacent layers differ by a factor k; thus the scale factor of layer 2 of octave 1 is kσ, and so on for the remaining layers. Layer 1 of octave 2 is obtained by sub-sampling the middle layer of octave 1, with scale factor k²σ; layer 2 of octave 2 then has k times that, i.e., k³σ. Layer 1 of octave 3 is obtained by sub-sampling the middle layer of octave 2, and the remaining octaves are formed by analogy.
After the Gaussian pyramid is built, the DOG (Difference of Gaussian) pyramid is constructed. DOG is the difference of two adjacent scale-space functions, i.e., adjacent scale-space layers in the Gaussian pyramid are subtracted; it is denoted D(x, y, σ), as in formula (14):
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y)
           = L(x, y, kσ) − L(x, y, σ)    (14)
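Formulas (12) and (14) can be sketched directly. This is a didactic sketch, not an efficient implementation: real SIFT code uses separable filters and image pyramids, and the function names here are assumptions.

```python
import math

def gaussian_kernel(size, sigma):
    """Sample the 2-D Gaussian of formula (12) on a size x size grid
    centered at the origin, normalized so the discrete kernel sums to 1."""
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

def dog_layer(l_ksigma, l_sigma):
    """Formula (14): a DOG layer is the pixelwise difference of two
    adjacent Gaussian-blurred layers L(x, y, kσ) and L(x, y, σ)."""
    return [[a - b for a, b in zip(ra, rb)]
            for ra, rb in zip(l_ksigma, l_sigma)]
```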
In the DOG scale-space pyramid built above, to detect the maxima and minima of the DOG space, each pixel of a middle layer (excluding the bottom and top layers) must be compared with 26 neighboring pixels: the 8 adjacent pixels in the same layer plus the 9 pixels each in the layer above and the layer below. This guarantees that local extrema are detected in both scale space and the two-dimensional image space. If the pixel's DOG value is larger than those of all 26 adjacent pixels, or smaller than all of them, the point is taken as a local extremum and its position and corresponding scale are recorded.
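The 26-neighbor comparison can be written compactly. A minimal sketch, assuming three same-sized DOG layers stored as nested lists; border pixels are not tested, as in the description above.

```python
def is_extremum(dog, layer, y, x):
    """Check whether dog[layer][y][x] is a local extremum against its 26
    neighbours across three adjacent DOG layers (8 in-layer neighbours
    plus 9 each in the layers above and below)."""
    v = dog[layer][y][x]
    neighbours = [dog[layer + dl][y + dy][x + dx]
                  for dl in (-1, 0, 1)
                  for dy in (-1, 0, 1)
                  for dx in (-1, 0, 1)
                  if not (dl == 0 and dy == 0 and dx == 0)]
    return v > max(neighbours) or v < min(neighbours)
```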
B) Accurate localization of feature point positions
Because DOG values are sensitive to noise and edges, the local extrema detected above in DOG scale space must pass further checks before they can be accurately located as feature points. A three-dimensional quadratic function is fitted at each local extremum to determine the feature point's position and scale precisely. The Taylor expansion of the scale-space function D(x, y, σ) at the local extremum X₀ = (x₀, y₀, σ₀) is given by formula (15):
D(X) = D(X₀) + (∂D/∂X)ᵀ X + ½ Xᵀ (∂²D/∂X²) X    (15)
The first and second derivatives in formula (15) are approximated by differences over the neighboring region. Differentiating formula (15) with respect to X and setting the result to 0 gives the accurate extremum location X_max, as in formula (16):
X_max = −(∂²D/∂X²)⁻¹ (∂D/∂X)    (16)
Among the accurately determined feature points, low-contrast feature points and unstable edge response points must also be removed, to strengthen matching stability and improve noise resistance.
Removing low-contrast feature points: substituting formula (16) into formula (15) and keeping only the first two terms gives formula (17):
D(X_max) = D + ½ (∂D/∂X)ᵀ X_max    (17)
D(X_max) is computed by formula (17); if |D(X_max)| ≥ 0.03, the feature point is retained, otherwise it is discarded.
Removing unstable edge response points: the Hessian matrix is given by formula (18), where the partial derivatives are those at the feature point determined above, again estimated approximately by differences over the neighboring region.
H = [ D_xx  D_xy ]
    [ D_xy  D_yy ]    (18)
The principal curvatures are computed from the 2 × 2 Hessian matrix H. Because the principal curvatures of D are proportional to the eigenvalues of H, the eigenvalues themselves are not computed; only their ratio is needed. Let α be the larger eigenvalue and β the smaller, with α = rβ. Then the ratio is given by formula (19):
Tr(H) = D_xx + D_yy = α + β
Det(H) = D_xx D_yy − (D_xy)²= αβ    (19)
ratio = Tr(H)² / Det(H) = (α + β)² / (αβ) = (r + 1)² / r
The ratio is computed by formula (19) with the constant r = 10; if Tr(H)² / Det(H) < (r + 1)² / r, the feature point is retained, otherwise it is discarded.
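The edge-response test of formulas (18)-(19) reduces to a few lines. A minimal sketch; the function name is an assumption, and points with non-positive determinant (curvatures of opposite sign) are rejected outright, a standard convention.

```python
def passes_edge_test(dxx, dyy, dxy, r=10.0):
    """Edge-response check: keep the feature point when
    Tr(H)^2 / Det(H) < (r + 1)^2 / r, per formula (19)."""
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:
        return False         # curvatures of opposite sign: reject
    return tr * tr / det < (r + 1) ** 2 / r
```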
C) Determining the principal direction of feature points
The gradient direction distribution of the pixels in a feature point's neighborhood is used to assign a direction parameter to each feature point, giving the operator rotational invariance. The gradient magnitude m(x, y) and direction θ(x, y) are given by formula (20):
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]
θ(x, y) = tan⁻¹[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]    (20)
Sampling is performed in a neighborhood window centered on the feature point, and a gradient orientation histogram tallies the gradient directions of the neighborhood pixels. The histogram ranges over 0°–360°, with one bin per 10°, 36 bins in all. The peak of the gradient orientation histogram represents the principal direction of the neighborhood gradients at the feature point, and is taken as the direction of the feature point. When another peak exists with at least 80% of the energy of the main peak, that direction is treated as an auxiliary direction of the feature point. A feature point may thus be assigned several directions (one principal direction plus auxiliary directions), which strengthens the robustness of matching.
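The 36-bin histogram and the 80% auxiliary-direction rule can be sketched as below. Names and the magnitude-weighted voting are illustrative assumptions; real implementations also interpolate peak positions, which is omitted here.

```python
def orientation_histogram(gradients, bins=36):
    """Build the gradient orientation histogram described above and return
    the principal direction plus all directions whose bin reaches 80% of
    the main peak.  gradients is a list of (magnitude, direction_degrees)
    pairs from the feature point's neighbourhood."""
    hist = [0.0] * bins
    for mag, ang in gradients:
        hist[int(ang % 360) * bins // 360] += mag   # 10-degree bins
    peak = max(hist)
    directions = [b * (360 // bins) for b, v in enumerate(hist)
                  if v >= 0.8 * peak]               # principal + auxiliary
    return hist.index(peak) * (360 // bins), directions
```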
Through the three steps above, detection of the image's feature points is complete; each feature point has three pieces of information: position, corresponding scale and direction.
D) Generating the SIFT feature vector
First the coordinate axes are rotated to the direction of the feature point, to guarantee rotational invariance.
Next an 8 × 8 window centered on the feature point is taken (the row and column containing the feature point are excluded). On each 4 × 4 image sub-block, a gradient orientation histogram over 8 directions is then computed and the accumulated value of each gradient direction drawn, forming one seed point. Such a feature point is composed of 2 × 2 = 4 seed points, each carrying 8 direction-vector components, producing 2 × 2 × 8 = 32 values: a 32-dimensional SIFT feature vector, the feature point descriptor, requiring an 8 × 8 image data block. This pooling of neighborhood directional information strengthens the algorithm's noise resistance, and also provides good fault tolerance for feature matches containing localization error.
In actual computation, to strengthen matching robustness, each feature point is described with 4 × 4 = 16 seed points, each carrying 8 direction-vector components, so one feature point produces 4 × 4 × 8 = 128 values, forming the final 128-dimensional SIFT feature vector and requiring a 16 × 16 image data block. At this point the SIFT feature vector is free of geometric deformation factors such as scale change and rotation; normalizing the length of the feature vector further removes the influence of illumination change.
2) Matching of SIFT feature vectors
A) Similarity decision metric
After the feature vectors of the two images have been generated, the Euclidean distance between keypoint feature vectors is adopted as the similarity decision metric for keypoints in the two images. The formula is as follows:
d_L = |L_i − L_l| = √( Σ_{k=1..m} (L_{i,k} − L_{l,k})² )    (21)
where m is the vector dimension and d_L the Euclidean distance.
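Formula (21) in code form; a one-liner sketch over equal-length descriptors.

```python
import math

def sift_distance(a, b):
    """Formula (21): Euclidean distance between two keypoint descriptors
    of equal dimension m."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```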
B) Matching process
To reduce the false matches caused when one feature point has several similar candidate matches, the present invention uses the ratio of the distances to the nearest and second-nearest neighbor feature points to reduce mismatches. If the ratio of the nearest distance to the second-nearest distance is less than a threshold Td, the pair is taken as a match point pair; otherwise it is discarded. Reducing the threshold reduces the number of SIFT match points, but makes them more stable. How to find the nearest and second-nearest neighbors is the key issue of this algorithm. Exhaustive search is the simplest effective method, but when the number of feature points is especially large, the amount of computation grows dramatically.
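The nearest/second-nearest ratio test can be sketched as follows, using exhaustive search for clarity (the BBF variant discussed next only changes how the two neighbors are found). The function name and the 0.49 default are taken from the thresholds discussed in this section; the return convention is an assumption.

```python
import math

def ratio_match(desc, candidates, td=0.49):
    """Return the index of the best match among candidates, or None if the
    nearest / second-nearest distance ratio is not below the threshold Td."""
    dists = sorted((math.sqrt(sum((x - y) ** 2 for x, y in zip(desc, c))), i)
                   for i, c in enumerate(candidates))
    (d1, i1), (d2, _) = dists[0], dists[1]
    if d2 > 0 and d1 / d2 < td:
        return i1          # unambiguous nearest neighbour: accept
    return None            # ambiguous or degenerate: discard
```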
In view of this problem in the matching algorithm, BBF (Best Bin First), an improvement of the k-d tree search algorithm, can be adopted to find the nearest and second-nearest neighbors. In practice, the k-d tree search algorithm spends most of its time checking nodes, while only some of the nodes satisfy the nearest-neighbor condition; the present invention therefore adopts an approximate nearest-neighbor algorithm, shortening search time by limiting the number of leaf nodes examined in the k-d tree.
The present invention uses the SIFT matching algorithm for the generation and matching of SIFT feature vectors; the features are invariant to rotation, scale and brightness changes, and also maintain a degree of stability under viewpoint change, affine transformation and noise.
To find matching point pairs quickly, the present invention reduces the matching threshold to 0.20; fewer point pairs are matched, but they are more stable.
Increasing the matching threshold appropriately increases the number of matched points, but stability decreases; with a matching threshold of 0.49 the number of matched points increases markedly, but some of the points are mismatched. In practical applications an appropriate threshold should be chosen for stable matching.
(2) Particle filter tracking algorithm based on SIFT feature matching
The present invention proposes a particle filter tracking technique based on SIFT feature matching: the good "multi-modal" predictive capability of the particle filter is used to estimate the activity range of the target; SIFT features are then computed and matched within this range; finally the weighted result is output.
Here the present invention likewise adopts a rectangular frame to represent the target pose. The initial state of the target at time k₀, (TX_init, TY_init, θ_init, SX_init, SY_init), is the same as described earlier and is not repeated here.
For reasons of space, the target prior, state transition, system observation, posterior-probability calculation, particle resampling and other processes of the particle filter tracking system based on SIFT feature matching are not elaborated one by one; see the analysis and description in the section "Particle filter tracking algorithm based on gray-template region matching" of this specification.
The flow chart of the particle filter tracking algorithm based on SIFT feature matching is shown in Figure 5.
During tracking, the number of particles and the motion model are first determined. In the present invention, a five-parameter affine motion model is selected, comprising horizontal and vertical displacement, scale, and rotation angle. Once the motion model is chosen, the particles are consistent with it and have parameters of the same dimension.
Next it is judged whether a target has been selected; the target may likewise be selected manually or automatically. After the target area is determined, the SIFT feature vectors of the target template are built, the particle parameters are initialized, and all particle weights are set to 1 (i.e., all particles are equally important).
After particle initialization, subsequent frames enter the iterative process of the particle filter algorithm. In each frame, the particles are first scattered, that is, prediction is performed; within the predicted range, the SIFT features of the image in that region are computed; the SIFT features of the previously obtained template are then matched against the SIFT features in the prediction region, and the matched point pairs are saved. A match-point map is produced according to the system observation state; each particle undergoes system observation on this match-point map, the particle weights are computed, and all particles are weighted to output the estimate of the target state. Finally the particle resampling process is performed, and the algorithm moves to the next iteration.
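One plausible form of the system-observation step on the match-point map is to count the matched points falling inside each particle's rectangle. This counting scheme and all names are illustrative assumptions; the patent does not spell out the exact observation formula here.

```python
def sift_particle_weights(particles, match_points):
    """Weight each particle by the number of template/image SIFT match
    points that fall inside its rectangle, then normalize.  Each particle
    is (tx, ty, sx, sy): center and half-extents in pixels."""
    counts = []
    for tx, ty, sx, sy in particles:
        n = sum(1 for (x, y) in match_points
                if abs(x - tx) <= sx and abs(y - ty) <= sy)
        counts.append(n)
    total = sum(counts)
    if total == 0:                 # no matches anywhere: uniform weights
        return [1.0 / len(particles)] * len(particles)
    return [c / total for c in counts]
```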
(2) Hardware and software design and implementation of the target tracking system
The present invention uses a BL-E854CB network audio-video camera, a DR68 series pan-tilt head and a HighEasy video capture card to form a visual tracking system. On a PC, particle filter tracking based on gray-template region matching, particle filter tracking based on color probability distribution, and particle filter tracking based on SIFT feature matching are implemented with VC++6.0, realizing automatic tracking of moving targets.
1. System hardware design and composition
The system is divided into a video acquisition module, a tracking algorithm computation module and an output control module. The video acquisition module is composed of the BL-E854CB network audio-video camera and the HighEasy video capture card; the tracking computation module is composed of the four tracking algorithms; the output control module is composed of the DR68 series pan-tilt head, with control commands sent from the PC serial port.
(1) BL-E854CB network audio-video camera
The BL-E854CB day/night network audio-video camera adopts a 1/3" SONY SUPER HAD CCD and DSP digital signal processing, and can provide high-quality images with good performance. Resolution reaches 540 TVL (600 TVL when switched to black-and-white), and minimum illumination reaches 0.1 Lux/F1.2 (0.001 Lux/F1.2 when switched to black-and-white).
(2) HighEasy video capture card
The HighEasy series encoding card adopts high-performance audio/video codec technology and relies entirely on hardware to realize real-time encoding and precise synchronization of video and audio. It implements dynamic bit-rate control, constant bit rate, mixed rate control, frame-rate control, selectable frame modes, dynamic image-quality control, video-loss alarm, multi-channel analog video output, multiple alarm-signal output configurations and other functions. HighEasy series codec products provide an integrated system SDK, a network SDK and a decoding SDK for use in later application development.
(3) DR68 series pan-tilt head
The DR68 series pan-tilt head has a maximum horizontal rotation range of 0°–350°, moves 90° downward and 60° upward from the horizontal plane, and rotates at 6°/s horizontally and 3.5°/s vertically. It supports both the Pelco D and Pelco P protocols, is equipped with an RS-485 serial port, supports baud rates of 2400 bps and 9600 bps, and can be configured by DIP switches. This system uses a baud rate of 2400 bps and the Pelco P protocol.
A photograph of the system hardware is shown in Figure 6, and the hardware connection block diagram is shown in Figure 1.
2. Software implementation of the system
(1) Image acquisition
The SDK provided with the HighEasy capture card is used to acquire images. First some initialization work is done on the capture card, accomplished with the following functions.
Board initialization functions:
1. MP4Sys_SetDisplayMode(FALSE); — set the display mode to YUV.
2. MP4Sys_InitDSPs(); — initialize the board.
3. MP4Sys_ChannelOpen(0); — open the acquisition channel; channel 0 is used.
4. MP4Sys_EncSetOriginalImageSize(hChannelHandle, 352, 288); — set the picture size; this system sets the picture to 352 × 288.
Image acquisition functions:
1. MP4Sys_GetOriginalImageEx(hChannelHandle, ImageBuf, &Size, &dwWidth, &dwHeight); — acquire the original image, which is in YUV format.
2. MP4Sys_SaveYUVToBmp(rgb, ImageBuf, &rgbsize, dwWidth, dwHeight); — convert the YUV format to BMP format.
(2) Target extraction
This system adopts two target extraction modes: automatic extraction and manual extraction. Automatic extraction applies to moving objects in the field of view: when a moving object appears in the acquired field of view, it is taken as the extracted target and then tracked. Manual extraction uses the mouse to select a region, which is then tracked. Both are introduced below.
1) Automatic target extraction
Automatic target extraction means that when a moving object enters the field of view, it is extracted as the template for subsequent tracking.
First it must be judged whether there is a moving object in the field of view, which can be done using the motion detection functions provided by the HighEasy video capture card. The capture card provides the following callable functions to detect whether there is a moving target in the field of view.
1. int MP4Sys_SetupMotionDetection(HANDLE hChannelHandle, RECT* rectList, int numberOfAreas): set the detection area. So that moving targets can be extracted completely, the detection area in this system is not set to the entire image size, but to a rectangular area 300 long and 220 wide, centered at the image center. This guarantees that objects are not extracted on the image boundary, and thus that the extracted target is complete.
2. int MP4Sys_AdjustMotionDetectPrecision(HANDLE hChannelHandle, int iGrade, int iFastMotionDetectFps, int iSlowMotionDetectFps): adjust analysis sensitivity. To remove false judgments caused by interference, this system sets the sensitivity somewhat low, so that only large moving regions trigger a response.
3. int MP4Sys_StartMotionDetection(HANDLE hChannelHandle): start motion analysis.
4. int MP4Sys_MotionAnalyzer(HANDLE hChannelHandle, char* MotionData, int iThreshold, int* iResult): motion analysis; iResult is the return value, and when iResult is greater than zero there is a moving target in the analysis region.
5. int MP4Sys_StopMotionDetection(HANDLE hChannelHandle): stop motion analysis. When a moving target is found in the motion analysis region, motion analysis is stopped and target extraction is performed.
This system adopts a relatively mature background-building algorithm, namely Gaussian background modeling. The Gaussian background modeling module provided in the OpenCV toolkit is used to obtain a foreground binary image, containing the target area with little noise; contour extraction is then performed, the contour of maximum area is taken as the detected target and located, giving a rectangular region containing the target. The parameters of this region are then computed, i.e., the initialization of the selected rectangular region: R = (Cx, Cy, theta, Sx, Sy), where Cx, Cy are the center of the rectangular region, Sx, Sy are the width and height of the rectangular region, and theta is the initial angle of the rectangular region, set to 90°.
2) Manual target extraction
Manual target extraction selects a rectangular region in the image according to the target to be tracked. The parameters of this region must likewise be computed, i.e., the initialization of the selected rectangular region: R = (Cx, Cy, theta, Sx, Sy), where Cx, Cy are the center of the rectangular region, Sx, Sy are the width and height of the rectangular region, and theta is the initial angle of the rectangular region, set to 90°.
(3) Target tracking
For target tracking, the three particle filter tracking algorithms proposed by the present invention are adopted.
In the particle filter tracking method there are two main adjustable parameters: the particle propagation radius R and the number of particles N. The particle propagation radius is related to the target's movement speed: a fast-moving target requires a larger particle propagation radius, and at the same time more particles for stable tracking; for a slow-moving target the propagation radius can be set smaller and the number of particles reduced, lowering the computational load. Three cases exist between the particle propagation radius R and the target's movement speed S:
(1) R < S: the particle propagation speed is clearly less than the target's movement speed in the image; all particles lag behind the target, and tracking fails.
(2) R >> S: the particle propagation radius is very large, and the search region formed by all the particles easily includes the target; but the larger area covered by the particles also means fewer particles per unit area, so search resolution decreases.
(3) R > S but not R >> S: both R < S and R >> S are unreasonable; R should be greater than S, but not too large. Here R denotes the particle propagation radius, characterized in pixels, and S the target's movement speed, also characterized in pixels; the two are taken to be approximately proportional. With Rx = 2.1 Sx and Ry = 2.2 Sy, tracking should succeed in the ideal case (no occlusion interference); considering occlusion, R = 2.5 S is chosen.
Regarding the relation between the number of particles and tracking performance: with too few particles, tracking accuracy is insufficient; with too many, computation is wastefully repeated. When the number of particles N = 0.3–0.5 Ns (where Ns = 2Sx × 2Sy), the average tracking error is within 0.1 pixel.
According to the processing speed of the CPU and the typical distance between the target and the camera, this system chooses a particle propagation radius of 20 pixels, fully suitable for tracking medium-speed moving targets, with particle number N = 0.5 × 40 × 40 = 800; experiments show the target can be tracked completely.
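The parameter rules above can be condensed into one helper. A sketch under stated assumptions: the function and argument names are illustrative, with sx and sy standing for the per-axis target speed in pixels.

```python
def pf_parameters(sx, sy, occlusion_factor=2.5, density=0.5):
    """Parameter rules stated above: propagation radius R = 2.5 * S per
    axis (to allow for occlusion), and particle count
    N = density * (2*Sx) * (2*Sy) with density in [0.3, 0.5]."""
    rx, ry = occlusion_factor * sx, occlusion_factor * sy
    n = int(density * (2 * sx) * (2 * sy))
    return rx, ry, n
```

With sx = sy = 20 this reproduces the system's N = 0.5 × 40 × 40 = 800 particles.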
For the extracted target region, according to the tracking mode, three kinds of templates are obtained: the target gray-level template, the target color histogram and the target SIFT features; these constitute the target's tracking template. Particle filter tracking is then carried out in the corresponding tracking mode, and for each image frame the posterior-probability output of the particle filter is used for tracking control.
(4) Tracking control
Tracking control uses the center of the tracked target position as the control command sent to the pan-tilt head. The output form of the control commands follows the PELCO-P protocol, with the baud rate set to 2400 bps; the MSComm control provided in VC++6.0 is used to output serial-port control commands, and the pan-tilt head can move in eight directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right). The control rule is as follows: according to the actual position of the tracked target, judge whether the target lies within a small central region (in this system a rectangular frame whose length and width are both 30 pixels, centered at the image center). If the target is not within this defined rectangle, then according to the region containing the output value of the particle posterior probability, the corresponding control command is sent, attempting to move the target to the central region of the image, so that the camera follows the motion of the object. The division of regions is shown in Figure 7:
Central region: the region the target is driven toward; in the system it is defined as a rectangular region of 30 × 30 pixels.
Region 1: the upper-left region. When the target is in this region it should be moved toward the central region, so the camera should move toward the upper-left, i.e., an upper-left movement command is sent.
Region 2: the upper region. When the target is in this region it should be moved toward the central region, so the camera moves upward, i.e., an upward movement command is sent.
The descriptions and control of regions 3 to 8 are similar: whichever region the target is in, the control command for the corresponding direction is sent.
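The nine-region rule can be sketched as a mapping from target center to movement command. The 352 × 288 defaults match the capture settings above; the command strings are illustrative stand-ins for the actual PELCO-P command bytes, which are not reproduced here.

```python
def pan_tilt_command(cx, cy, img_w=352, img_h=288, dead_w=30, dead_h=30):
    """Map the tracked target centre (cx, cy) to one of the eight movement
    commands described above, or None inside the central dead zone."""
    dx = cx - img_w // 2
    dy = cy - img_h // 2
    horiz = "left" if dx < -dead_w // 2 else "right" if dx > dead_w // 2 else ""
    vert = "up" if dy < -dead_h // 2 else "down" if dy > dead_h // 2 else ""
    if not horiz and not vert:
        return None              # target already in the central region
    return (vert + "-" + horiz).strip("-")
```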
(5) overall system block diagram
Native system has realized that choosing target for two kinds chooses mode (choose automatically and manually choose), two kinds of tracking modes (particle filter and Camshift follow the tracks of), wherein Camshift is that the built-in function that the OpenCV storehouse of adopting provides realizes that particle filter divides three kinds of modes (as previously mentioned) again.The interface of manual control The Cloud Terrace is provided simultaneously, has also attempted network control.
The software is written in VC++6.0 with one main thread and three sub-threads. The main thread handles user interaction; sub-thread 1 runs the tracking algorithm and outputs the control quantity; sub-thread 2 transmits remote data; sub-thread 3 receives requests from the remote machine. Remote data transmission and reception are implemented with socket programming over the UDP transport protocol.
Here the PC equipped with the video capture card serves as the server and another machine as the client. The workflow is as follows: when the server agrees to transmit and the client requests a connection, the connection is established and both sides initialize their sockets; the server then transmits data and responds to client requests in real time, while the client displays the data sent by the server and forwards its pan-tilt control requests back to the server.
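The UDP request/response exchange described above can be sketched with Python's standard socket module. This is an illustrative loopback round trip, not the patent's actual wire format: the message contents and the acknowledgement prefix are assumptions.

```python
import socket

def run_round_trip(request: bytes) -> bytes:
    """One UDP request/response round trip on the loopback interface:
    the client sends a control request, the server receives it and
    responds, mirroring the server/client workflow described above."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))  # OS-assigned port
    addr = server.getsockname()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(request, addr)

    data, client_addr = server.recvfrom(1024)   # server handles the request
    server.sendto(b"ACK:" + data, client_addr)  # and responds in real time

    reply, _ = client.recvfrom(1024)
    server.close()
    client.close()
    return reply
```

Since UDP is connectionless, the "connection" in the description is a logical agreement between the two ends rather than a transport-level handshake.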
The initial software interface is shown in Figure 8:
By default, automatic target extraction is off, the tracking mode is particle filter tracking based on gray-scale template region matching, and remote transmission is off.
The server mainly performs the target tracking task and responds to client requests; its four threads are described below.
The main thread handles user interaction and target selection; its flowchart is given in Figure 9.
Sub-thread 1 runs the four tracking algorithms and outputs the control quantity; its flowchart is given in Figure 10.
Sub-thread 2 transmits remote data; sub-thread 3 receives requests from the remote machine and performs the corresponding control according to the requested value. Their flowcharts are given in Figures 11(a) and 11(b).
The client likewise needs two threads, one for sending and one for receiving. The only difference is that the client must initiate the connection request first; once the connection is established, the two ends work in the same way, so the client flowcharts are omitted here.

Claims (5)

1. A particle filter tracking algorithm based on gray-scale template region matching, characterized by:
A. Using the gray-scale template of traditional region-matching tracking as the target description, specifically comprising:
A1. Prior knowledge of the target, including the construction of the target's gray-scale template;
A2. Obtaining the initial target description in the initial frame, either by automatic target detection methods such as frame differencing or by human-computer interaction;
A3. Representing the target pose with a rectangular frame;
A4. For a rectangular frame, the pose comprises five parameters: the horizontal and vertical coordinates of its center, its height and width, and the angle between its height direction and the horizontal axis.
B. Combining the region matching algorithm with particle filter tracking, specifically comprising:
B1. Representing the estimate of the target parameters as a weighted sum of particles, where a particle's weight is proportional to its matching score.
B2. Choosing the particle count according to the actual tracking requirement: more particles give more stable tracking and higher precision, but also greater computation; a compromise is made and the count is adjusted dynamically according to the tracking situation.
B3. In each frame, applying the system state transition and system observation to each particle, computing the particle weights, forming a weighted estimate over all particles to output the target state, and then performing particle resampling.
C. Searching the motion parameters in affine space, specifically comprising:
C1. Selecting as motion model either a translation model with two parameters or an affine model with five parameters, comprising the horizontal and vertical displacements, the scale and the rotation angle.
C2. Keeping the particles consistent with the motion model, with the same dimensionality.
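The predict/weight/estimate/resample loop of step B3 can be sketched generically. This is a toy illustration under stated assumptions: a one-dimensional state, Gaussian diffusion for the state transition, and an arbitrary `observe` callback standing in for the gray-scale template matching score; none of these specifics are fixed by the claim.

```python
import random

def particle_filter_step(particles, weights, observe, spread=1.0):
    """One iteration of claim 1, step B3, for a scalar state:
    state transition, observation, weighted estimate, resampling."""
    # system state transition: diffuse each particle
    particles = [p + random.gauss(0.0, spread) for p in particles]
    # system observation: weight proportional to the matching score
    weights = [observe(p) for p in particles]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # weighted estimate over all particles gives the output target state
    estimate = sum(p * w for p, w in zip(particles, weights))
    # resampling: redraw particles in proportion to their weights
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights, estimate
```

With a matching score peaked at the true state, repeated iterations concentrate the particle cloud around it, which is exactly the behavior step B2's particle-count trade-off governs.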
2. A particle filter tracking algorithm based on color probability distribution, characterized by:
A. Color histogram statistics, comprising:
A1. Converting the pixels of the target region from RGB space to HSV space, then computing and normalizing the one-dimensional histogram of the H component;
A2. Dividing the H component evenly into small intervals, each interval forming one bin of the histogram;
A3. Considering the real-time requirement of visual tracking and the range of the H component, setting the number of H-component bins to 48 in the system.
B. Computation of the color probability distribution map, comprising:
B1. Setting a threshold: when the S component in HSV space is below this threshold, the corresponding H component is set to 0; the invention sets the threshold to 30;
B2. After correcting the color probability distribution map, inverting the value of every pixel, and finally replacing zero-valued pixels of the color probability distribution image with a very small positive value, so that the particle filter adapts well to scale changes when computing weights.
C. Combining the color probability distribution with particle filter tracking, specifically comprising:
C1. From the analysis of the color probability distribution map, pixels close to the target color have larger probability values, and pixels at the target location have the largest. Once the target prior is available, a cluster of particles is scattered around the initial position; each particle is then observed by summing the pixel values within the particle's state region, and the sum is modulated with an exponential function and normalized to give the particle's weight. The closer a particle is to the target state, the larger its weight.
C2. Adopting the color probability distribution model is equivalent to mapping the original image into a probability gray-scale image; therefore, as long as the target color does not change abruptly, the method is insensitive to edge occlusion, target rotation, deformation and background motion.
C3. Adopting an affine model for tracking, where the goal of tracking is to solve for the motion state of the target, comprising the target center in the x and y directions, the target rotation angle, and the target scale in the x, y and diagonal directions.
C4. Selecting as motion model either a translation model with two parameters or an affine model with five parameters, comprising the horizontal and vertical displacements, the scale and the rotation angle.
C5. Keeping the particles consistent with the motion model, with the same dimensionality.
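Steps A, B1/B2 and C1 can be sketched with NumPy. Assumptions to note: the H component is taken in the OpenCV convention (0..179), the small positive replacement value `EPS` is illustrative, and the inversion detail of step B2 is omitted for brevity; the 48 bins and the S-threshold of 30 come from the claim itself.

```python
import numpy as np

H_BINS, S_MIN, EPS = 48, 30, 1e-3  # 48 bins and S-threshold 30 per the claim

def h_histogram(h, s):
    """Normalized 48-bin histogram of the H component (0..179);
    pixels with S below the threshold are treated as H = 0 (step B1)."""
    h = np.where(s < S_MIN, 0, h)
    hist, _ = np.histogram(h, bins=H_BINS, range=(0, 180))
    return hist / max(hist.sum(), 1)

def probability_map(h, s, hist):
    """Back-project the histogram onto the image; zero-probability pixels
    are replaced by a small positive value (step B2)."""
    h = np.where(s < S_MIN, 0, h)
    idx = np.clip((h.astype(int) * H_BINS) // 180, 0, H_BINS - 1)
    prob = hist[idx]
    return np.where(prob == 0, EPS, prob)

def particle_weight(prob_map, x, y, w, hgt):
    """Weight of a particle whose state covers the rectangle (x, y, w, hgt):
    the mean probability in the region, modulated exponentially (step C1)."""
    region = prob_map[y:y + hgt, x:x + w]
    return float(np.exp(region.sum() / max(region.size, 1)))
```

A particle lying on pixels whose hue matches the template histogram thus receives a larger weight than one lying on background pixels.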
3. A particle filter tracking algorithm based on SIFT feature matching, characterized by:
A. Computation of SIFT feature matching, comprising:
A1. Generation of SIFT feature vectors
The image is first normalized, then enlarged to twice its original size and pre-filtered to remove noise, yielding the bottom of the Gaussian pyramid, i.e. the first layer of the first octave; this is followed by four steps: scale-space extremum detection, accurate keypoint localization, assignment of the keypoint's principal orientation, and generation of the SIFT feature vector.
A2. Matching of SIFT feature vectors
Since a keypoint may have several similar candidate matches, to reduce false matches the invention uses the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance of the feature points.
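The distance-ratio test of step A2 can be sketched over plain descriptor arrays. The 0.8 ratio threshold is an illustrative assumption commonly used in the SIFT literature; the claim does not state a value.

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a against desc_b, keeping a pair only
    when the nearest-neighbor distance is below `ratio` times the distance
    to the second-nearest neighbor (claim 3, step A2)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches
```

An ambiguous keypoint, whose two best candidates are nearly equidistant, fails the test and is discarded rather than risking a false match.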
A3. A particle filter tracking technique based on SIFT feature matching is proposed
During tracking, the number of particles and the motion model are first determined; in the invention the motion model is the affine model with five parameters, comprising the horizontal and vertical displacements, the scale and the rotation angle. Once the motion model is selected, the particles are kept consistent with it, with parameters of the same dimensionality.
It is then judged whether a target has been selected; target selection uses either a manual or an automatic mode. Once the target region is determined, the SIFT feature vectors of the target template are built, the particle parameters are initialized, and all particle weights are set to 1 (i.e. all particles are equally important).
After particle initialization, subsequent frames enter the iterative loop of the particle filter algorithm. In each frame, the particles are first scattered (the prediction step); the SIFT features of the image within the prediction range are computed and matched against the stored template's SIFT features; the matched point pairs are kept and a match-point map is produced according to the system observation state; each particle is observed on this match-point map to compute its weight, and a weighted estimate over all particles outputs the target state. Finally particle resampling is performed and the next iteration begins.
4. Hardware design and implementation of the target tracking system, characterized in that:
The tracking system is divided into a video acquisition module, a tracking algorithm computation module and an output control module. The invention uses a BL-E854CB network audio-video camera, a DR68-series pan-tilt unit, a HighEasy-series video capture card and a PC to form a complete visual tracking hardware system.
A. The BL-E854CB day/night network audio-video camera provides high-quality images and good performance.
B. The HighEasy-series encoding/capture card adopts high-performance audio/video codec technology, realizing real-time encoding and precise audio-video synchronization entirely in hardware, and supports dynamic bit-rate control, constant bit-rate, mixed rate control, frame-rate control, selectable frame modes, dynamic image-quality control, video-loss alarm, multi-channel analog video output, and configurable alarm signal outputs.
C. The DR68-series pan-tilt unit covers a maximum horizontal rotation range of 0° to 350°, tilts 90° down and 60° up from the horizontal plane, and rotates at 6°/s horizontally and 3.5°/s vertically; the invention uses a baud rate of 2400 bps with the Pelco-P protocol.
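For reference, a Pelco-P command frame can be assembled as below. The frame layout (STX 0xA0, address, two command bytes, two data bytes, ETX 0xAF, XOR checksum) follows commonly published Pelco-P documentation, not this patent, and should be verified against the pan-tilt unit's manual before use.

```python
def pelco_p_frame(address, command1, command2, pan_speed, tilt_speed):
    """Build an 8-byte Pelco-P frame: STX 0xA0, address, two command
    bytes, pan/tilt speed data, ETX 0xAF, then an XOR checksum over
    the first seven bytes. Bit assignments per common Pelco-P docs."""
    frame = bytearray([0xA0, address & 0x1F, command1 & 0xFF,
                       command2 & 0xFF, pan_speed & 0x3F,
                       tilt_speed & 0x3F, 0xAF])
    checksum = 0
    for b in frame:
        checksum ^= b
    frame.append(checksum)
    return bytes(frame)
```

The resulting byte string would be written to the 2400 bps serial port (via MSComm in the described system) to drive the pan-tilt unit.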
5. Software design and implementation of the target tracking system, characterized by:
A. After initializing the capture card, image acquisition is performed.
B. The system adopts two target extraction modes, automatic extraction and manual extraction, specifically comprising:
B1. When a moving object enters the field of view, automatic target extraction takes this object as the template for subsequent tracking. The system uses Gaussian background modeling to obtain a low-noise binary foreground image containing the target region, then performs contour extraction, takes the contour with the largest area as the detected target, and localizes it to obtain the rectangular region containing the target.
B2. Manual target extraction selects a rectangular region in the image according to the target to be tracked and computes the parameters of that region, i.e. initialization from the selected rectangle.
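The automatic extraction pipeline of step B1 (background model, binary foreground, largest region, bounding rectangle) can be sketched with NumPy. As a simplifying assumption, a per-pixel mean/deviation test stands in for the full Gaussian background modeling, and a connected-component search replaces contour extraction.

```python
import numpy as np
from collections import deque

def extract_target(background_frames, frame, k=3.0):
    """Binary foreground from a per-pixel Gaussian-style background model,
    then the bounding rectangle (x, y, w, h) of the largest 4-connected
    foreground region, mirroring step B1."""
    stack = np.stack(background_frames).astype(float)
    mean, std = stack.mean(axis=0), stack.std(axis=0) + 1e-6
    fg = np.abs(frame.astype(float) - mean) > k * std  # foreground mask

    best, seen = None, np.zeros_like(fg, dtype=bool)
    h, w = fg.shape
    for sy, sx in zip(*np.nonzero(fg)):
        if seen[sy, sx]:
            continue
        # BFS over one 4-connected foreground component
        comp, q = [], deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and fg[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if best is None or len(comp) > len(best):
            best = comp  # keep the largest-area region as the target
    if best is None:
        return None
    ys = [p[0] for p in best]
    xs = [p[1] for p in best]
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
```

Taking the largest region, as in the claim, discards isolated noise pixels that survive the background test, so the returned rectangle initializes the tracking template.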
C. Target tracking with the three particle filter tracking algorithms proposed by the invention, specifically comprising:
C1. In the particle filter tracking method, two main parameters are tuned: the particle propagation radius R and the particle count N. The propagation radius is related to the target's speed: a fast target needs a larger propagation radius and more particles for stable tracking, while a slow target allows a smaller radius and fewer particles, reducing computation.
Based on the CPU's processing speed and the typical distance the target moves between frames, the system chooses a particle propagation radius of 20 pixels, which is fully adequate for tracking medium-speed targets, and a particle count N = 0.5 × 40 × 40 = 800, with which the target can be tracked reliably.
C2. For the extracted target region, three templates are built according to the tracking mode, namely the target gray-scale template, the target color histogram and the target SIFT features; these form the tracking templates of the target.
D. The center of the tracked target position is used as the steering command sent to the pan-tilt unit, realizing tracking control. The command output adopts the PELCO-P protocol at a baud rate of 2400 bps, and the pan-tilt can be driven in eight directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right).
E. Overall software system design, comprising:
E1. The system implements two target selection modes (automatic and manual) and two tracking modes (particle filter and Camshift tracking), where Camshift is realized with the built-in functions of the OpenCV library and the particle filter is further divided into three modes (as described above); network control is also realized.
E2. The software is written in VC++6.0 with one main thread and three sub-threads: the main thread handles user interaction, sub-thread 1 runs the tracking algorithm and outputs the control quantity, sub-thread 2 transmits remote data, and sub-thread 3 receives requests from the remote machine. Remote data transmission and reception are implemented with socket programming over the UDP transport protocol.
E3. The PC with the video capture card serves as the server and another machine as the client, realizing network control.
CN2011101189182A 2011-05-10 2011-05-10 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering Pending CN102184551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011101189182A CN102184551A (en) 2011-05-10 2011-05-10 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering


Publications (1)

Publication Number Publication Date
CN102184551A true CN102184551A (en) 2011-09-14

Family

ID=44570720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011101189182A Pending CN102184551A (en) 2011-05-10 2011-05-10 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering

Country Status (1)

Country Link
CN (1) CN102184551A (en)

Cited By (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509309B (en) * 2011-11-04 2013-12-18 大连海事大学 Image-matching-based object-point positioning system
CN102509309A (en) * 2011-11-04 2012-06-20 大连海事大学 Image-matching-based object-point positioning system
CN103136762A (en) * 2011-11-29 2013-06-05 南京理工大学常熟研究院有限公司 Dynamic image target tracking method
CN102592135A (en) * 2011-12-16 2012-07-18 温州大学 Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics
CN102592135B (en) * 2011-12-16 2013-12-18 温州大学 Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics
CN102592290A (en) * 2012-02-16 2012-07-18 浙江大学 Method for detecting moving target region aiming at underwater microscopic video
CN102663419A (en) * 2012-03-21 2012-09-12 江苏视软智能系统有限公司 Pan-tilt tracking method based on representation model and classification model
CN102819263A (en) * 2012-07-30 2012-12-12 中国航天科工集团第三研究院第八三五七研究所 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)
CN102819263B (en) * 2012-07-30 2014-11-05 中国航天科工集团第三研究院第八三五七研究所 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)
CN102881012A (en) * 2012-09-04 2013-01-16 上海交通大学 Vision target tracking method aiming at target scale change
CN102881012B (en) * 2012-09-04 2016-07-06 上海交通大学 Visual target tracking method for target scale change
CN103646407A (en) * 2013-12-26 2014-03-19 中国科学院自动化研究所 Video target tracking method based on ingredient and distance relational graph
CN103646407B (en) * 2013-12-26 2016-06-22 中国科学院自动化研究所 A kind of video target tracking method based on composition distance relation figure
CN103870815A (en) * 2014-03-24 2014-06-18 公安部第三研究所 Mancar structural description method and system for dome camera video monitoring
CN103926927A (en) * 2014-05-05 2014-07-16 重庆大学 Binocular vision positioning and three-dimensional mapping method for indoor mobile robot
CN103985141A (en) * 2014-05-28 2014-08-13 西安电子科技大学 Target tracking method based on HSV color covariance characteristics
CN104200226A (en) * 2014-09-01 2014-12-10 西安电子科技大学 Particle filtering target tracking method based on machine learning
CN104200226B (en) * 2014-09-01 2017-08-25 西安电子科技大学 Particle filter method for tracking target based on machine learning
CN104200495B (en) * 2014-09-25 2017-03-29 重庆信科设计有限公司 A kind of multi-object tracking method in video monitoring
CN104200495A (en) * 2014-09-25 2014-12-10 重庆信科设计有限公司 Multi-target tracking method in video surveillance
CN105989615A (en) * 2015-03-04 2016-10-05 江苏慧眼数据科技股份有限公司 Pedestrian tracking method based on multi-feature fusion
CN104766054A (en) * 2015-03-26 2015-07-08 济南大学 Vision-attention-model-based gesture tracking method in human-computer interaction interface
CN105592315A (en) * 2015-12-16 2016-05-18 深圳大学 Video characteristic redundant information compression method and system based on video space-time attribute
CN106097388A (en) * 2016-06-07 2016-11-09 大连理工大学 In video frequency object tracking, target prodiction, searching scope adaptive adjust and the method for Dual Matching fusion
CN106097388B (en) * 2016-06-07 2018-12-18 大连理工大学 The method that target prodiction, searching scope adaptive adjustment and Dual Matching merge in video frequency object tracking
CN106203449A (en) * 2016-07-08 2016-12-07 大连大学 The approximation space clustering system of mobile cloud environment
CN106530331A (en) * 2016-11-23 2017-03-22 北京锐安科技有限公司 Video monitoring system and method
CN106709456A (en) * 2016-12-27 2017-05-24 成都通甲优博科技有限责任公司 Computer vision-based unmanned aerial vehicle target tracking box initialization method
CN106709456B (en) * 2016-12-27 2020-03-31 成都通甲优博科技有限责任公司 Unmanned aerial vehicle target tracking frame initialization method based on computer vision
CN107121893B (en) * 2017-06-12 2018-05-25 中国科学院上海光学精密机械研究所 Photoetching projection objective lens thermal aberration on-line prediction method
CN107121893A (en) * 2017-06-12 2017-09-01 中国科学院上海光学精密机械研究所 Photoetching projection objective lens thermal aberration on-line prediction method
CN108198199A (en) * 2017-12-29 2018-06-22 北京地平线信息技术有限公司 Moving body track method, moving body track device and electronic equipment
CN108563220A (en) * 2018-01-29 2018-09-21 南京邮电大学 The motion planning of apery Soccer robot
CN108460786A (en) * 2018-01-30 2018-08-28 中国航天电子技术研究院 A kind of high speed tracking of unmanned plane spot
CN110135577A (en) * 2018-02-09 2019-08-16 宏达国际电子股份有限公司 The device and method of the full Connection Neural Network of training
CN108833919A (en) * 2018-06-29 2018-11-16 东北大学 Colored single pixel imaging method and system based on random rotation matrix
CN108833919B (en) * 2018-06-29 2020-02-14 东北大学 Color single-pixel imaging method and system based on random circulant matrix
CN111050059A (en) * 2018-10-12 2020-04-21 黑快马股份有限公司 Follow shooting system with picture stabilizing function and follow shooting method with picture stabilizing function
CN109323697A (en) * 2018-11-13 2019-02-12 大连理工大学 A method of particle fast convergence when starting for Indoor Robot arbitrary point
CN109323697B (en) * 2018-11-13 2022-02-15 大连理工大学 Method for rapidly converging particles during starting of indoor robot at any point
CN109600710B (en) * 2018-12-10 2020-10-30 浙江工业大学 Multi-moving-target monitoring method based on difference algorithm in video sensor network
CN109600710A (en) * 2018-12-10 2019-04-09 浙江工业大学 Multi-movement target monitoring method based on difference algorithm in a kind of video sensor network
CN109801279A (en) * 2019-01-21 2019-05-24 京东方科技集团股份有限公司 Object detection method and device, electronic equipment, storage medium in image
CN109801279B (en) * 2019-01-21 2021-02-02 京东方科技集团股份有限公司 Method and device for detecting target in image, electronic equipment and storage medium
CN109872343A (en) * 2019-02-01 2019-06-11 视辰信息科技(上海)有限公司 Weak texture gestures of object tracking, system and device
CN109881604B (en) * 2019-02-19 2022-08-09 福州市极化律网络科技有限公司 Mixed reality road isolated column display adjustment system
CN109881604A (en) * 2019-02-19 2019-06-14 福州市极化律网络科技有限公司 Mixed reality guardrail for road shows adjustment system
CN109903281A (en) * 2019-02-28 2019-06-18 中科创达软件股份有限公司 It is a kind of based on multiple dimensioned object detection method and device
CN109949340A (en) * 2019-03-04 2019-06-28 湖北三江航天万峰科技发展有限公司 Target scale adaptive tracking method based on OpenCV
CN110298330B (en) * 2019-07-05 2023-07-18 东北大学 Monocular detection and positioning method for power transmission line inspection robot
CN110298330A (en) * 2019-07-05 2019-10-01 东北大学 A kind of detection of transmission line polling robot monocular and localization method
CN112243082B (en) * 2019-07-17 2022-09-06 百度时代网络技术(北京)有限公司 Tracking shooting method and device, electronic equipment and storage medium
CN112243082A (en) * 2019-07-17 2021-01-19 百度时代网络技术(北京)有限公司 Tracking shooting method and device, electronic equipment and storage medium
CN110503665A (en) * 2019-08-22 2019-11-26 湖南科技学院 A kind of target tracking algorism improving Camshift
CN111427381A (en) * 2019-12-31 2020-07-17 天嘉智能装备制造江苏股份有限公司 Control method for following work of small-sized sweeping machine based on dressing identification of operator
CN111526335B (en) * 2020-05-03 2021-08-27 金华精研机电股份有限公司 Target tracking method for suspended track type omnidirectional pan-tilt camera
CN111526335A (en) * 2020-05-03 2020-08-11 杭州晶一智能科技有限公司 Target tracking algorithm for suspended track type omnidirectional pan-tilt camera
CN112200829A (en) * 2020-09-07 2021-01-08 慧视江山科技(北京)有限公司 Target tracking method and device based on correlation filtering method
CN112364865A (en) * 2020-11-12 2021-02-12 郑州大学 Method for detecting small moving target in complex scene
CN113465620A (en) * 2021-06-02 2021-10-01 上海追势科技有限公司 Parking lot particle filter positioning method based on semantic information
CN114330501A (en) * 2021-12-01 2022-04-12 南京航空航天大学 Track pattern recognition method and equipment based on dynamic time warping
CN114330501B (en) * 2021-12-01 2022-08-05 南京航空航天大学 Track pattern recognition method and equipment based on dynamic time warping
CN115082441A (en) * 2022-07-22 2022-09-20 山东微山湖酒业有限公司 Retort material tiling method in wine brewing distillation process based on computer vision
CN115082441B (en) * 2022-07-22 2022-11-11 山东微山湖酒业有限公司 Retort material tiling method in wine brewing distillation process based on computer vision
CN117746076A (en) * 2024-02-19 2024-03-22 成都航空职业技术学院 Equipment image matching method based on machine vision
CN117746076B (en) * 2024-02-19 2024-04-26 成都航空职业技术学院 Equipment image matching method based on machine vision

Similar Documents

Publication Publication Date Title
CN102184551A (en) Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
CN104115192B (en) Three-dimensional closely interactive improvement or associated improvement
CN103049751A (en) Improved weighting region matching high-altitude video pedestrian recognizing method
CN105139015B (en) A kind of remote sensing images Clean water withdraw method
CN112784736B (en) Character interaction behavior recognition method based on multi-modal feature fusion
CN109712247B (en) Live-action training system based on mixed reality technology
CN102447835A (en) Non-blind-area multi-target cooperative tracking method and system
CN103325126A (en) Video target tracking method under circumstance of scale change and shielding
Bešić et al. Dynamic object removal and spatio-temporal RGB-D inpainting via geometry-aware adversarial learning
CN107798313A (en) A kind of human posture recognition method, device, terminal and storage medium
CN103581614A (en) Method and system for tracking targets in video based on PTZ
CN108510520B (en) A kind of image processing method, device and AR equipment
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN110827320B (en) Target tracking method and device based on time sequence prediction
CN102034247A (en) Motion capture method for binocular vision image based on background modeling
CN103714556A (en) Moving target tracking method based on pyramid appearance model
CN102289822A (en) Method for tracking moving target collaboratively by multiple cameras
CN110334656A (en) Multi-source Remote Sensing Images Clean water withdraw method and device based on information source probability weight
CN110197121A (en) Moving target detecting method, moving object detection module and monitoring system based on DirectShow
Tao et al. Indoor 3D semantic robot VSLAM based on mask regional convolutional neural network
CN104463962B (en) Three-dimensional scene reconstruction method based on GPS information video
CN110111368B (en) Human body posture recognition-based similar moving target detection and tracking method
Monteleone et al. Pedestrian tracking in 360 video by virtual PTZ cameras
Liu et al. Mean shift fusion color histogram algorithm for nonrigid complex target tracking in sports video
CN104392437B (en) Object tracking method based on state fusion of multiple cell blocks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110914