CN104376577A - Multi-camera multi-target tracking algorithm based on particle filtering - Google Patents

Multi-camera multi-target tracking algorithm based on particle filtering

Info

Publication number
CN104376577A
Authority
CN
China
Prior art keywords
target
camera
particle filter
main shaft
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410564116.8A
Other languages
Chinese (zh)
Inventor
梁志伟
徐小根
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201410564116.8A
Publication of CN104376577A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/292 - Multi-camera tracking
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20024 - Filtering details

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-camera multi-target tracking algorithm based on particle filtering. First, a second-order auto-regressive motion model and an observation model based on target color and motion histograms are established; tracking is carried out in each camera using particle filtering; the detection data of the cameras are then fused in a central processing unit, and the targets of the multiple cameras are tracked effectively on the basis of the particle filters. The algorithm achieves robust multi-camera multi-target tracking. Across the cameras, detection of a target person's principal axis is based mainly on the intersection between the target's principal axis in one view and the line obtained by transforming the corresponding axis from another view; each intersection point is used to update the target person's ground point in each single-camera view. The algorithm uses a fast and refined axis-based data-fusion process and, when targets are close together and form a joint state, exploits the advantage of the MCMC sampling step within each camera.

Description

Multi-camera multi-target tracking algorithm based on particle filtering
Technical field
The present invention relates to a multi-camera multi-target tracking algorithm based on particle filtering.
Background technology
Single-camera tracking methods are limited: to track multiple targets accurately in a complex environment, and to compute their 3D positions, multiple cameras must be used. With multiple cameras, occlusion handling becomes simpler. However, many methods take only the 2D view as input and handle occlusion mainly through a persistence motion model, using Kalman filtering or the more generally used Markov models. Once the process becomes discretized, these methods can no longer continue tracking accurately. Tracking in a multi-camera environment falls into the following three classes:
(1) Tracking based on binary blobs: a recent technique proposes a dimensionality-reduction method to learn the correspondence between pedestrian appearances across multiple views; Kalman-filter techniques applied to a single best hypothesis are compared with multi-hypothesis methods; and foreground binary blobs extracted from multiple calibrated views are tracked to locate people and obtain their three-dimensional positions.
(2) Tracking based on color: one system segments the scene using a wide-baseline arrangement of up to 16 synchronized cameras to detect and track multiple target persons; intensity information is used directly to classify pixels in the single-camera views and to match regions carrying the same label, from which the 3D positions of the target persons are derived. Occlusion analysis is carried out in two ways. First, when classifying pixels, the computation of the prior probability must take occlusion into account. Second, evidence is aggregated over all the cameras: the visibility of each occupied ground-plane point in each camera view, i.e. an occupancy-likelihood map of the ground plane, determines the positions on the ground plane, which are then tracked over time with a Kalman filter. Whether tracking in the image plane or in the top view, each target's independent 2D and 3D positions are computed so as to maximize a joint probability defined as the product of a color-based appearance model and the 2D and 3D motion models derived from a Kalman filter.
(3) Tracking and localization based on grids: a recent technique uses a discrete occupancy map onto which the objects detected in the camera images are projected. The computer vision laboratory (CVLAB) has developed a method for multi-target detection on a ground-plane grid, combined with dynamic programming to track multiple targets across multiple cameras.
Leistner et al. propose to treat the different cameras as different views of the same classification problem. Similar to the method of Khan and Shah, the main idea is to exploit strong real-world geometric constraints (the homographies between the ground plane and the cameras). Consider n cameras with partially overlapping fields of view, each observing the 3D scene; the homographies between them can be obtained from the image coordinates of points identified on the ground plane.
Multi-target tracking has always been a focus and a difficulty of research in visual tracking. Although it is already widely applied in science, technology, and daily life, the complexity of real scenes means that no single technique yet applies to every field.
In computer vision research, target tracking is a fundamental and important research topic, and a very active one at present. In real-life tracking applications (such as airport or community surveillance systems), accurate and consistent tracking of targets requires the cooperation of a large number of cameras. Using a multi-camera network to track multiple targets has therefore become an emphasis of current tracking research. Compared with a single-camera tracker, a multi-camera tracker has many incomparable advantages: it can monitor a wide area, reconstruct tracked targets in three dimensions, and analyze and interpret events in the monitored region. But multi-camera trackers also raise many new problems, such as target detection and confirmation, information fusion, and target tracking in a multi-camera environment; these are the focus and the difficulty of current multi-camera tracking research. Camera networks composed of multiple cameras are gradually becoming the mainstream direction of the video surveillance field.
Summary of the invention
The invention provides a multi-camera multi-target tracking algorithm. On the basis of particle-filter tracking within each single camera, tracking is realized across multiple cameras, achieving real-time performance; appearance models are then used to handle the target occlusions that arise during multi-target tracking, giving the method more robust occlusion-handling performance.
Technical solution of the present invention is:
A multi-camera multi-target tracking algorithm based on particle filtering:
a second-order auto-regressive motion model and an observation model based on target color and motion histograms are established, and a particle filter performs tracking in each single camera;
the target detection data of the cameras are fused in a central processing unit, and the targets of the multiple cameras are tracked effectively on the basis of the particle filters.
Further, a message-passing framework is used to carry inference from the single cameras to the multi-camera level. The result approximates the tracking result of each tracker k on the ground plane, whose posterior probability is expressed as

p(X_{t,k,0} | Z_{t,k}) ∝ ∏_{j=1}^{N_{t,k}} ω_{t,k,j}(X_{t,k,0}) ∫ p(X_{t,k,0} | X_{t-1,k,0}) p(X_{t-1,k,0} | Z_{t-1,k}) dX_{t-1,k,0}   (23)

where N_{t,k} is the number of measurements associated with X_{t,k,0}, and ω_{t,k,j}(X_{t,k,0}) is the message passed from camera j to the ground plane, evaluated at X_{t,k,0}.
Further, a feedback mechanism from the multi-camera tracking to the single-camera tracking is used, which achieves correct tracking under occlusion; it is realized through an improved single-camera proposal distribution:

p(X_{t,k,j} | X_{t-1,k,j}, X_{t-1,k,0}) ∝ α_{t,k,j} p(X_{t,k,j} | X_{t-1,k,j}) + (1 - α_{t,k,j}) p(X_{t,k,j} | X_{t-1,k,0})   (27)

X_{t-1,k,0} is obtained either explicitly by projecting the ground-plane tracker with H_j or by sampling from the previous particle filter; the parameter α_{t,k,j} controls the mixture, and a quality measure sets its value: α_{t,k,j} = π_{t,k,j}.
Further, a temporal prior on the association relations is used to speed up the data-fusion process.
Further, a check based on the Mahalanobis distance is performed first; if it is satisfied, the association is kept, otherwise the matching relation is searched for as follows: for each potential association (k_0, k_j), the candidate Mahalanobis value is computed as

M_{k_0,k_j} = (1/σ_Δ²) Δ²(ŝ_{t,k_j,j}, C X̂_{t,k_0})   (21)

where ŝ_{t,k_j,j} is the average projected axis segment of k_j and X̂_{t,k_0} is the mean state of k_0;
candidate association values below the threshold τ_M are rejected; the distance of formula (21) is computed only for targets k_j in the neighborhood of target k_0; the remaining association values must satisfy

k̂_0 = argmin_{k_0 ∈ N_{k̂_j}} M_{k̂_j,k_0},  k̂_j = argmin_{k_j ∈ N_{k̂_0}} M_{k_j,k̂_0}   (22).
Further, for principal-axis detection and matching between the cameras:
first, the principal axis of each target person is detected and tracked by each single camera, with all cameras operating synchronously;
then the detection results are fused through the homography matrices to achieve matching between the cameras, yielding a unified, fused detection result, which is fed back into the detection of each camera.
Further, the steps of the principal-axis matching algorithm are as follows:
(1) pair the principal axes of the target persons detected in the two camera views two by two, build a table θ of all possible axis pairings, and compute the corresponding axis matching distance for each pairing;
(2) for each pair {m, n} in table θ, check whether it satisfies the constraint D_{mn}^{(i,j)} < D_T, where D_T is a predefined threshold for deciding whether a correspondence is genuine; if the test fails, {m, n} is deleted from θ, so that θ contains only the pairs satisfying the correspondence constraint;
(3) from table θ, build all pairing models; a pairing model Θ_k is a set of at most l matched pairs, Θ_k = {(L^i_{k_1}, L^j_{k'_1}), (L^i_{k_2}, L^j_{k'_2}), …, (L^i_{k_l}, L^j_{k'_l})}, where k is the pairing-model index;
(4) among the pairing models Θ, find the model λ with the minimum total corresponding distance:

λ = argmin_k ( Σ_{w=1}^{l} D^{(i,j)}_{(k_w,k'_w)} )   (15)

all axis pairs in model Θ_λ are matches;
(5) label the axis pairs in model Θ_λ.
Further, the single-camera multi-target tracking algorithm based on the MCMC-sampling particle filter is as follows:
(1) at time t-1, the target states are represented by an unweighted particle set {X^r_{t-1}}; each sample contains the joint state X^r_{t-1} = {X^r_{1(t-1)}, …, X^r_{n(t-1)}};
(2) initialize the MCMC sampler: at time t, draw X_t from the predictive density by randomly selecting a joint sample X^r_{t-1} and moving every target i in it through the motion model;
(3) obtain samples through MH iterations;
(4) at time t, sample from the resulting chain to represent the joint state of the targets.
Further, the concrete steps by which the MH iteration obtains a sample are:
1) MH iteration step:
i. randomly select a joint sample X^r_{t-1} from the unweighted sample set of the previous frame;
ii. randomly select a target i from the n targets; this is the target to be updated in this iteration;
iii. sample the i-th target's state from its motion model conditioned on X^r_{i(t-1)}, obtaining X'_{it};
2) compute the acceptance ratio:

r = min( 1, [P(Z_{it} | X'_{it}) ∏_{j∈E_i} ψ(X'_{it}, X'_{jt})] / [P(Z_{it} | X_{it}) ∏_{j∈E_i} ψ(X_{it}, X_{jt})] )   (8)

where ψ(X'_{it}, X'_{jt}) is the pairwise interaction potential between the particles of the two targets;
3) if r ≥ 1, accept X'_{it} and set the i-th target in X_t to X'_{it}; otherwise accept with probability r; if the proposal is rejected, the target state is left unchanged;
4) add the current X_t to the new sample set as a copy.
The beneficial effects of the invention are: the algorithm achieves robust multi-target tracking across multiple cameras. Between the cameras, detection of a target person's principal axis is based mainly on the intersection between the target's principal axis in one view and the line obtained by transforming the corresponding axis from another view; this intersection is used to update the target person's ground point in each single-camera view. The algorithm uses a fast and refined axis-based data-fusion process and, when targets are close together and form a joint state, exploits the advantage of the MCMC sampling step within each single camera.
Accompanying drawing explanation
Fig. 1 is a flow diagram of target tracking under a single camera.
Fig. 2 shows the marker points chosen evenly under different views: (a) the image under the first view; (b) the image under the second view.
Fig. 3 is a block diagram explaining principal-axis detection under multiple cameras.
Fig. 4 is a schematic diagram of the target projection relations between views.
Fig. 5 compares the trackers quantitatively: (a) a joint particle filter with 1000 particles; (b) 20 independent particle filters with 50 particles each; (c) an MCMC particle filter with 1000 particles.
Fig. 6 shows tracking results in experimental environment one: (a) comparison of the two measures; (b) tracking trajectories.
Fig. 7 compares trajectories in experimental environment one.
Fig. 8 shows tracking trajectories in experimental environment two.
Fig. 9 compares trajectories in experimental environment two.
Embodiment
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The embodiment is a multi-camera multi-target tracking algorithm based on particle filtering. The algorithm first establishes a second-order auto-regressive motion model and an observation model based on target color and motion histograms, and uses a particle filter to track within each single camera; the target detection data of the cameras are then fused in a central processing unit, and the targets of the multiple cameras are tracked effectively on the basis of the particle filters.
Single-camera tracking based on particle filtering
The motion model adopts the second-order autoregressive model described above:

X_k - X_{k-1} = X_{k-1} - X_{k-2} - N_k   (1)

where X_k is the target state at time k and N_k is bivariate Gaussian random noise with zero mean and variance σ_k. In probabilistic form, this motion model is the Gaussian state-transition model p(X_k | X_{k-1}). It is a weak, low-precision motion model representing constant-velocity motion, with a certain adaptability.
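As a concrete illustration, the autoregressive step of formula (1) is a one-line state propagation. The following Python sketch is illustrative only; the 2D state layout and the noise scale σ_k are assumptions, not values taken from the patent, and since N_k is zero-mean the sign of the noise term is distributionally immaterial.

```python
import numpy as np

def propagate(x_prev1, x_prev2, sigma_k=1.0, rng=np.random.default_rng()):
    """Second-order autoregressive motion step, formula (1):
    X_k = 2*X_{k-1} - X_{k-2} + N_k, with N_k zero-mean Gaussian noise.
    x_prev1, x_prev2 are the states at times k-1 and k-2 (2D positions here)."""
    noise = rng.normal(0.0, sigma_k, size=np.shape(x_prev1))
    return 2.0 * np.asarray(x_prev1) - np.asarray(x_prev2) + noise

# Example: constant-velocity prediction with a small random perturbation.
x_km2, x_km1 = np.array([10.0, 5.0]), np.array([12.0, 6.0])
x_pred = propagate(x_km1, x_km2, sigma_k=0.5)
```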
Observation model
The embodiment determines the appearance model from color and motion histograms. In all cases, the likelihood of the actual histogram h_k relative to the reference histogram h_{ref,k} of the target state vector X_{t,k} is estimated through the Bhattacharyya distance D. In the embodiment the color histograms are defined in RGB space (one histogram h^c per channel, c ∈ {R, G, B}); parameterizing the likelihood this way has little adverse effect on tracking performance. It is defined as follows:

P(Z^{col}_{t,k} | X_{t,k}) ∝ exp( - Σ_{c∈{R,G,B}} D²(h^c_k, h^c_{ref,k}) / (2σ_c²) )   (2)

where Z^{col}_{t,k} is the color part of the observation, c is the color channel, and σ_c² is the expected variance of the Bhattacharyya distance. The larger σ_c, the more robust the model is to illumination changes. Spatial information is also captured by the color distribution, obtained by splitting the bounding region into an upper half and a lower half. Consistent with formula (2), the histograms of the individual channels are kept one-dimensional.
In addition, the embodiment uses an image motion model based on the absolute differences between consecutive images, accumulated in a histogram h^m. Initially the reference histogram is chosen as uniform; when tracking the target, background points are not considered, with an effect similar to the method above. The corresponding motion likelihood of the target state is shown in formula (3):

P(Z^m_{t,k} | X_{t,k}) ∝ exp( - D²(h^m_k, h^m_{ref,k}) / (2σ_m²) )   (3)

where Z^m_{t,k} is the motion part of the observation and σ_m² the expected variance of the distance. The final likelihood model of the target state is then expressed as:

P(Z_{t,k} | X_{t,k}) = P(Z^m_{t,k} | X_{t,k}) P(Z^{col}_{t,k} | X_{t,k})   (4)
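A minimal sketch of the combined likelihood of formulas (2)-(4), assuming the histograms are already extracted and normalized; the dictionary keyed by channel letter and the variance values are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def bhattacharyya(h, h_ref):
    """Bhattacharyya distance D between two normalized histograms."""
    bc = np.sum(np.sqrt(h * h_ref))            # Bhattacharyya coefficient
    return np.sqrt(max(1.0 - bc, 0.0))

def color_likelihood(hists, ref_hists, sigma_c=0.2):
    """Formula (2): product over the R, G, B channel histograms."""
    d2 = sum(bhattacharyya(hists[c], ref_hists[c]) ** 2 for c in "RGB")
    return np.exp(-d2 / (2.0 * sigma_c ** 2))

def motion_likelihood(h_m, h_ref_m, sigma_m=0.2):
    """Formula (3): motion histogram built from absolute frame differences."""
    return np.exp(-bhattacharyya(h_m, h_ref_m) ** 2 / (2.0 * sigma_m ** 2))

def observation_likelihood(hists, ref_hists, h_m, h_ref_m):
    """Formula (4): joint likelihood = motion part * color part."""
    return color_likelihood(hists, ref_hists) * motion_likelihood(h_m, h_ref_m)
```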
MCMC-based particle filtering under a single camera
Fig. 1 shows the process of target tracking under a single camera. Based on the background-subtracted input image, target persons are divided into three classes: a single isolated target, several unoccluded targets, and several targets with occlusion. Principal-axis detection of the target persons is carried out in all three situations. After detection, a ground point is obtained from the perpendicular relation between the target person and the ground; this becomes more specific location information for the detected target and prepares for the subsequent tracking.
To sample efficiently from the factored posterior over the targets, Markov chain Monte Carlo (MCMC) sampling is used here. In effect, the MCMC sampling step replaces the inefficient importance-resampling step.
The MCMC method generates a sequence of states: at time t, in the joint target configuration space X_t, it produces state-estimation samples that concentrate on the target distribution. To this end, a Markov chain is defined on the configuration space X_t whose stationary distribution is the target distribution. The Metropolis-Hastings (MH) algorithm is one method of simulating such a chain.
The MH algorithm starts from a random value in the parameter space. It generates random parameters according to the parameter probability distribution and, from this parameter combination, computes the probability density at the current point. Whether to keep the current point is decided by comparing the ratio of the probability densities of the current point and the starting point against a random number in (0, 1).
If the density ratio for the current point exceeds this random number, the state is accepted; then, still respecting the parameter probability distribution, a new parameter combination is drawn, the density of the next point is computed, the ratio of the next point's density to the current density is evaluated, and the loop continues.
If the current point is not accepted, random parameter combinations continue to be generated, still respecting the parameter probability distribution, until a combination is accepted.
The MH algorithm is as follows:
(1) initialize a valid starting state X_t; iterate once per desired sample;
(2) propose a new joint state X'_t from the proposal density Q(X'_t; X_t);
here the proposal changes the state of only one target at a time, drawing directly from that target's decomposed motion model:

Q(X'_t; X_t) = (1/N) Q(X'_t | X_t, i) = (1/N) Σ_r P(X'_{it} | X^r_{i(t-1)}) ∏_{j≠i} δ(X'_{jt} = X_{jt})   (5)

where each target is selected with the same probability, i = 1, 2, …, n (n targets);
(3) compute the acceptance probability

p = [P(X'_t | Z_t) Q(X_t; X'_t)] / [P(X_t | Z_t) Q(X'_t; X_t)]   (6)

(4) if p ≥ 1, accept X'_t and set X_t ← X'_t; otherwise accept with probability p; if the proposal is rejected, keep the state unchanged (return X_t as the sample).
The multi-target tracking problem can be expressed with a Bayesian filter. Here the posterior P(X_t | Z_t) over the joint state of all n targets {X_{it} | i ∈ 1, …, n} is updated iteratively, given all observations up to time t, Z_t = {Z_1, …, Z_t}:

P(X_t | Z_t) = k P(Z_t | X_t) ∫_{X_{t-1}} P(X_t | X_{t-1}) P(X_{t-1} | Z_{t-1}) dX_{t-1}   (7)

In formula (7), P(Z_t | X_t) is the measurement model, i.e. the probability of observing the measurement Z_t given the state X_t at time t, and the motion model P(X_t | X_{t-1}) predicts the current state X_t from the previous state X_{t-1}. The targets are assumed conditionally independent given the known conditions.
In summary, the multi-target tracking algorithm based on the MCMC-sampling particle filter is as follows:
(1) at time t-1, the target states are represented by an unweighted particle set {X^r_{t-1}}; each sample contains the joint state X^r_{t-1} = {X^r_{1(t-1)}, …, X^r_{n(t-1)}}.
(2) initialize the MCMC sampler: at time t, draw X_t from the predictive density by randomly selecting a joint sample X^r_{t-1} and moving every target i in it through the motion model.
(3) obtain samples through MH iterations:
1) iteration step:
i. randomly select a joint sample X^r_{t-1} from the unweighted sample set of the previous frame;
ii. randomly select a target i from the n targets; this is the target to be updated in this iteration;
iii. sample the i-th target's state from its motion model conditioned on X^r_{i(t-1)}, obtaining X'_{it};
2) compute the acceptance ratio:

r = min( 1, [P(Z_{it} | X'_{it}) ∏_{j∈E_i} ψ(X'_{it}, X'_{jt})] / [P(Z_{it} | X_{it}) ∏_{j∈E_i} ψ(X_{it}, X_{jt})] )   (8)

where ψ(X'_{it}, X'_{jt}) is the pairwise interaction potential between the particles of the two targets;
3) if r ≥ 1, accept X'_{it} and set the i-th target in X_t to X'_{it}; otherwise accept with probability r; if the proposal is rejected, the target state is left unchanged;
4) add the current X_t to the new sample set as a copy.
(4) at time t, sample from the resulting chain to represent the joint state of the targets.
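The single-camera MCMC particle filter above can be sketched as follows. This is a schematic under stated assumptions, not the patent's implementation: the likelihood, the motion model, and the pairwise potential ψ are injected as functions, and the interaction set E_i is taken to be all other targets.

```python
import numpy as np

def mcmc_particle_filter(particles_prev, motion_sample, likelihood, psi,
                         n_iters=1000, rng=np.random.default_rng()):
    """One time step of the MCMC-based multi-target particle filter.

    particles_prev : list of joint samples, each an array of shape (n_targets, d)
    motion_sample  : motion_sample(x_prev) -> proposed state for one target
    likelihood     : likelihood(i, x_i) -> P(Z_it | X_it)
    psi            : psi(x_i, x_j) -> pairwise interaction potential
    """
    n_targets = particles_prev[0].shape[0]
    # (2) Initialize: pick a joint sample and move every target by the motion model.
    X = np.array([motion_sample(x)
                  for x in particles_prev[rng.integers(len(particles_prev))]])
    new_set = []
    for _ in range(n_iters):
        # (i)-(iii): choose a joint sample and a target, propose from its motion model.
        base = particles_prev[rng.integers(len(particles_prev))]
        i = rng.integers(n_targets)
        x_new = motion_sample(base[i])
        others = [j for j in range(n_targets) if j != i]
        # (8): acceptance ratio r.
        num = likelihood(i, x_new) * np.prod([psi(x_new, X[j]) for j in others])
        den = likelihood(i, X[i]) * np.prod([psi(X[i], X[j]) for j in others])
        r = min(1.0, num / max(den, 1e-12))
        if rng.random() < r:           # accept (covers the r >= 1 case too)
            X[i] = x_new
        new_set.append(X.copy())       # (4): current joint state added as a copy
    return new_set
```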
To handle multiple targets, the embodiment adopts the following measures:
1) when the distance between targets is relatively large, independent filters are used;
2) when more targets move close together, the filter count is increased.
Tracking under multiple cameras
In a multi-camera target-tracking system, the cameras must cooperate to track the multiple targets, so matching between the cameras is a key problem, involving the correspondence of the target lists and of the target information across the camera views. In essence, multi-camera tracking is a problem of matching multi-camera information: it must establish, at each instant, the correspondence of the target persons across the views of the different cameras. Only when these correspondences are established correctly can robust tracking be achieved. The following addresses the homographies and geometric relations between the cameras, and the matching of principal axes across the cameras.
Homography matrix and geometric relations
In computer vision, a planar homography is defined as a projective mapping from one plane to another.
For the homography matrix to exist, the different views must share a common ground plane, an assumption that holds in most surveillance scenes. Define the 3×3 matrix

H = [ h11 h12 h13 ; h21 h22 h23 ; h31 h32 1 ]   (9)

Let (x_i, y_i) and (x'_i, y'_i) be corresponding ground-plane points in two views; the two points are related by the homography matrix H:

[ x'_i ; y'_i ; 1 ] = [ h11 h12 h13 ; h21 h22 h23 ; h31 h32 1 ] [ x_i ; y_i ; 1 ]   (10)

From formula (10), the homography matrix has 8 parameters, so at least four pairs of corresponding points (more are better) are needed to set up the equation system and solve for them. Typically, corresponding points are obtained in two ways: manually, by placing marker points; or by an algorithm that extracts corresponding points automatically. In the embodiment, for the accuracy of the homography matrix, the corresponding points are obtained manually: marker points are chosen evenly over the region of interest in advance, and the homography is computed from the marker points in the different views. Fig. 2 shows the marker points chosen evenly under the different views.
After the system of equations has been built from these corresponding points, many algorithms can solve it; in the least-squares sense the homography parameters are obtained as H = (A^T A)^{-1} A^T b, where the matrices A and b are assembled from the coordinates of the corresponding point pairs.
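Under this formulation, each point pair contributes two linear equations, since formula (10) gives x'(h31 x + h32 y + 1) = h11 x + h12 y + h13 and similarly for y'. A numpy sketch of the least-squares solve, assuming at least four manually marked correspondences (the example coordinates are hypothetical):

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography H (h33 = 1) from ground-plane point pairs.
    src, dst : (N, 2) arrays of corresponding points, N >= 4.
    Solves A h = b in the least-squares sense, h = (A^T A)^{-1} A^T b."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

# Example with four marker-point correspondences (hypothetical coordinates):
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dst = np.array([[10, 20], [110, 25], [115, 130], [12, 125]], float)
H = fit_homography(src, dst)
```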
Matching principal-axis information across views
How the principal axes that the different cameras detect in their own image coordinate systems are put into correspondence is constrained by the homography matrices between the cameras.
Fig. 3 sets out the basic flow of principal-axis detection and matching between the cameras. First, the principal axis of each target person is detected and tracked by each single camera, with all cameras operating synchronously; then the detection results are fused through the homography matrices to achieve matching between the cameras, yielding a unified, fused detection result, which is fed back into the detection of each camera. This feedback mechanism strengthens the robustness of the principal-axis method for detecting target persons and so achieves accurate detection over successive frames in the multiple cameras.
Matching of the principal axes between the cameras rests on the single-camera axis detections. When matching axes across different camera views, the homography matrices serve as the geometric constraint. This section defines a principal-axis matching function with which the target axes are matched. In what follows, the matching algorithm between two cameras is set out first, and then how the algorithm extends to more than two cameras is explained.
Suppose that at time t the principal axes of M target persons are observed from camera i and those of N target persons from camera j. The algorithm of the embodiment finds the set of axis pairs minimizing the total corresponding distance; the axes are thus matched as a whole, avoiding the potential errors caused by simply selecting the minimum distance (as a greedy algorithm would).
Before the axis matching function between views can be defined, the correspondence of the axes between the views must be fixed. As shown in Fig. 4, let L^i_r be the principal axis of target person r in the image coordinate system of camera i and X^i_r its ground point, i.e. the perpendicular intersection of the target person with the ground plane. Let L_r be the principal axis of target person r in 3D space and X_r the ground point of L_r; L^i_r is the projection of L_r from the 3D ground-plane coordinate system into the image coordinate system of camera i, and X^i_r is likewise the projection of X_r into camera i's image. For a target person q in camera j, L^j_q and X^j_q are defined similarly. Let H_{ij} be the homography matrix relating the image coordinate system of camera i to that of camera j through the ground plane. Transforming L^i_r from camera i's image coordinates into camera j's through H_{ij} gives a line in camera j's image; let Q^{ji}_{rq} be the intersection of this transformed line with L^j_q. Clearly, if in 3D space the axis of target person r in camera i's image and the axis of target person q in camera j's image belong to the same person, then Q^{ji}_{rq} represents the ground point of that person's principal axis in the image coordinate system of camera j.
After all matched axis pairs have been found, when target persons in any two camera views stand on the same ground plane, the correspondence information can be used to improve tracking performance in each single-camera view. Because the principal axis of a target person can be detected robustly and accurately in each camera view, the intersection of a person's axis in the first view with the line transformed from the other view's axis is a highly robust and accurate estimate of the true ground point in the first view. The ground-point observation previously detected in the first view is therefore updated with this intersection point. As shown in Fig. 4, when the axes in the two views correspond to the same target person, the intersection represents a more accurate estimate of the ground point. Consequently, even when the target person's ground point is invisible in all camera views (occluded or undetected), the intersection can still be found, and the person's ground point can still be located accurately.
Accordingly, the distance between the detected ground point and the intersection can be used to estimate the likelihood that the corresponding axes match: the shorter this distance, the better the axes match.
In the same way, the intersection Q^{ij}_{qr} in the image coordinate system of camera i can be obtained, and the distance between the detected ground point X^i_r and this intersection likewise determines the estimate. The matching function of the axes L^i_r and L^j_q is therefore defined as follows:

F(L^i_r, L^j_q) = p(X^i_r | Q^{ij}_{qr}) p(X^j_q | Q^{ji}_{rq})   (12)

To express the likelihoods without losing information, the probability distributions are assumed Gaussian, i.e. the detection noise is assumed normally distributed. Define p(X^i_r | Q^{ij}_{qr}) and p(X^j_q | Q^{ji}_{rq}) as follows:

p(X^i_r | Q^{ij}_{qr}) = (2π)^{-1} |Σ^i_r|^{-1/2} exp{ -½ (X^i_r - Q^{ij}_{qr}) (Σ^i_r)^{-1} (X^i_r - Q^{ij}_{qr})^T }   (13)

p(X^j_q | Q^{ji}_{rq}) = (2π)^{-1} |Σ^j_q|^{-1/2} exp{ -½ (X^j_q - Q^{ji}_{rq}) (Σ^j_q)^{-1} (X^j_q - Q^{ji}_{rq})^T }   (14)

where Σ^i_r and Σ^j_q are two covariance matrices. Since the coordinates x and y are independent, Σ^i_r is a diagonal matrix formed from the two per-coordinate variances, and Σ^j_q likewise.
The parameters of Σ^i_r and Σ^j_q are estimated from the distances between the ground points observed in each frame and the corresponding intersections. In practice, the positions can be treated as independent in the image.
To simplify the computation, define the corresponding distance D^{ij}_{rq} of an axis pair; from formulas (12)-(14):

D^{ij}_{rq} = (X^i_r - Q^{ij}_{qr}) (Σ^i_r)^{-1} (X^i_r - Q^{ij}_{qr})^T + (X^j_q - Q^{ji}_{rq}) (Σ^j_q)^{-1} (X^j_q - Q^{ji}_{rq})^T

The smaller D^{ij}_{rq}, the higher the degree to which the axes match.
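A sketch of this pair distance: each axis is represented as a ground point plus a head point, the transformed line is obtained by mapping an axis through the homography, and the intersection and quadratic form follow formulas (12)-(14). The homogeneous cross-product construction for line intersection is standard; the data layout is an assumption, and parallel lines are not guarded against.

```python
import numpy as np

def to_h(p):                       # 2D point -> homogeneous coordinates
    return np.array([p[0], p[1], 1.0])

def line_through(p, q):            # homogeneous line through two points
    return np.cross(to_h(p), to_h(q))

def transform_axis(H, axis):       # map an axis (ground pt, head pt) into the other view
    g, h = H @ to_h(axis[0]), H @ to_h(axis[1])
    return (g[:2] / g[2], h[:2] / h[2])

def intersection(axis_a, axis_b):
    """Intersection Q of the lines supporting two axes."""
    p = np.cross(line_through(*axis_a), line_through(*axis_b))
    return p[:2] / p[2]

def pair_distance(axis_i, axis_j, H_ij, H_ji, Sig_i, Sig_j):
    """Quadratic-form distance of an axis pair (cf. formulas (12)-(14))."""
    Q_i = intersection(axis_i, transform_axis(H_ji, axis_j))  # Q_qr^ij, in camera i
    Q_j = intersection(axis_j, transform_axis(H_ij, axis_i))  # Q_rq^ji, in camera j
    di = axis_i[0] - Q_i                                      # X_r^i - Q_qr^ij
    dj = axis_j[0] - Q_j                                      # X_q^j - Q_rq^ji
    return di @ np.linalg.inv(Sig_i) @ di + dj @ np.linalg.inv(Sig_j) @ dj
```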
The key steps of the principal-axis matching algorithm are as follows:
(1) Pair the principal axes of the target persons detected in the two camera views two by two. Build a table θ of all possible axis pairings, and compute the corresponding axis matching distance for each pairing.
(2) For each pair {m, n} in table θ, check whether it satisfies the constraint D_{mn}^{(i,j)} < D_T, where D_T is a predefined threshold for deciding whether a correspondence is genuine. If the test fails, {m, n} is deleted from θ. Table θ thus contains only the pairs satisfying the correspondence constraint.
(3) From table θ, build all pairing models. A pairing model Θ_k is a set of at most l matched pairs:
Θ_k = {(L^i_{k_1}, L^j_{k'_1}), (L^i_{k_2}, L^j_{k'_2}), …, (L^i_{k_l}, L^j_{k'_l})}, where k is the pairing-model index.
(4) Among the pairing models Θ, find the model λ with the minimum total corresponding distance:

λ = argmin_k ( Σ_{w=1}^{l} D^{(i,j)}_{(k_w,k'_w)} )   (15)

All axis pairs in model Θ_λ are matches.
(5) Label the axis pairs in model Θ_λ.
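A schematic of steps (1)-(5), assuming the pairwise distances of the axes have already been computed into a matrix. The brute-force enumeration of one-to-one pairing models is an implementation choice for small M and N, not prescribed by the patent.

```python
from itertools import permutations
import numpy as np

def match_axes(D, D_T):
    """Steps (1)-(5): D is an (M, N) matrix of axis matching distances between
    the M axes of camera i and the N axes of camera j; D_T is the threshold.
    Returns the axis pairs (r, q) of the minimum-distance pairing model."""
    M, N = D.shape
    if M < N:                                  # permute rows, so ensure M >= N
        return [(r, q) for q, r in match_axes(D.T, D_T)]
    best, best_cost = [], np.inf
    # Steps (3)-(4): enumerate one-to-one pairing models, keeping only pairs
    # that pass the threshold test of step (2), and pick the cheapest model.
    for rows in permutations(range(M), N):
        model = [(r, q) for q, r in enumerate(rows) if D[r, q] < D_T]
        if not model:
            continue
        cost = sum(D[r, q] for r, q in model)
        if cost < best_cost:
            best, best_cost = model, cost
    return best                                # step (5): the pairs of model λ
```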
Data fusion
To display the tracking information of each camera on the ground plane, a projection from image to ground plane is needed. For this, the homography matrix H = (h_{ij}) maps a track from the nonlinear view to physical-plane coordinates, as shown in formula (16):

f_H(x, y) = ( (h11 x + h12 y + h13) / (h31 x + h32 y + h33), (h21 x + h22 y + h23) / (h31 x + h32 y + h33) )^T   (16)
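Formula (16) in code: a minimal sketch of the view-to-ground-plane mapping, reusing the 3×3 homography H from the previous section (where h33 = 1).

```python
import numpy as np

def project_to_ground(H, x, y):
    """Formula (16): map the image point (x, y) to ground-plane coordinates."""
    denom = H[2, 0] * x + H[2, 1] * y + H[2, 2]
    return np.array([(H[0, 0] * x + H[0, 1] * y + H[0, 2]) / denom,
                     (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / denom])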
As above, tracker k_j in camera j is projected into the reference plane through the function f_{H_j}; the index j is written explicitly to distinguish the cameras. Because trackers based on the motion and color models carry a certain vertical localization error, the joint tracking here is based not on the projected position of the mean tracker but on the distance Δ to the projected principal axis. In theory, the principal-axis projections of a target from the different cameras are visible in the reference plane. To evaluate a candidate joint state, the expected value and variance of this distance are first estimated, over the single camera and the ground plane; the Unscented Transform (UT) is used here. A target index k_0 (on the ground plane) and k_j (in camera j) form a 4D joint state X_UT = (X̂_{t,k_0,0}, X̂_{t,k_j,j}) with corresponding block-diagonal covariance Σ_UT = diag(Σ̂_{t,k_0,0}, Σ̂_{t,k_j,j}). All image and ground-plane distributions at each instant are represented by weighted particles {X^{(n)}_{t,k}, π^{(n)}_{t,k}} (X^{(n)}_{t,k} is the particle state, π^{(n)}_{t,k} the particle weight). The state statistics are computed as:

X̂_{t,k} = Σ_{n=1}^{N} π^{(n)}_{t,k} C X^{(n)}_{t,k}   (17)

Σ̂_{t,k} = Σ_{n=1}^{N} π^{(n)}_{t,k} (C X^{(n)}_{t,k} - X̂_{t,k}) (C X^{(n)}_{t,k} - X̂_{t,k})^T   (18)

Applying the Unscented Transform, 9 sigma points are chosen from the covariance matrix Σ_UT of the principal axis, each with a weight w_l. For each sigma point the principal axis is computed with the simplest geometry: a vertical image segment of constant height, split into an upper and a lower image half. Projecting this segment gives the image part s^l_{t,k_j,j}. The expectation and variance of the distance estimated by the Unscented Transform are then:

Δ̂ = Σ_l w_l Δ(s^l_{t,k_j,j}, C X^l_{t,k_0,0})   (19)

σ_Δ² = Σ_l w_l ( Δ(s^l_{t,k_j,j}, C X^l_{t,k_0,0}) - Δ̂ )²   (20)
where Δ is the point-to-segment distance function. For each potential association (k_0, k_j), the candidate Mahalanobis value is then computed as

M_{k_0,k_j} = (1/σ_Δ²) Δ²(ŝ_{t,k_j,j}, C X̂_{t,k_0})   (21)

where ŝ_{t,k_j,j} is the average projected axis segment of k_j and X̂_{t,k_0} is the mean state of k_0. Candidate association values below the threshold τ_M are rejected. The distance of formula (21) is computed only for targets k_j in the neighborhood of target k_0 (in Euclidean distance). The remaining association values must satisfy:

k̂_0 = argmin_{k_0 ∈ N_{k̂_j}} M_{k̂_j,k_0},  k̂_j = argmin_{k_j ∈ N_{k̂_0}} M_{k_j,k̂_0}   (22)
The embodiment uses a temporal prior on the association relations to accelerate processing: for example, if at time t-1 target k_0 on the ground plane was associated with k_j, the Mahalanobis check is performed first; if it passes, the association is kept, otherwise the matching relation is searched for by the method above.
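Formula (22) requires each association to be a mutual nearest neighbor in the Mahalanobis values. A sketch, assuming the candidate values M_{k0,kj} are gathered in a matrix and that candidates eliminated by the threshold test have been set to infinity beforehand (however the gating is oriented):

```python
import numpy as np

def mutual_nearest(Mdist):
    """Formula (22): keep association (k0, kj) only if each is the other's
    argmin over the remaining candidates. Mdist[k0, kj] = M_{k0,kj};
    gated-out entries should be np.inf."""
    pairs = []
    for k0 in range(Mdist.shape[0]):
        kj = int(np.argmin(Mdist[k0]))
        if np.isfinite(Mdist[k0, kj]) and int(np.argmin(Mdist[:, kj])) == k0:
            pairs.append((k0, kj))
    return pairs
```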
Multi-camera multi-target tracking
Multi-camera tracking builds on the single-camera tracking; the embodiment uses a message-passing framework to carry inference from the single cameras to the multi-camera level. The result approximates the tracking result of each tracker k on the ground plane, whose posterior probability p(X_{t,k,0} | Z_{t,k}) can be expressed as

p(X_{t,k,0} | Z_{t,k}) ∝ ∏_{j=1}^{N_{t,k}} ω_{t,k,j}(X_{t,k,0}) ∫ p(X_{t,k,0} | X_{t-1,k,0}) p(X_{t-1,k,0} | Z_{t-1,k}) dX_{t-1,k,0}   (23)

where N_{t,k} is the number of measurements associated with X_{t,k,0}, and ω_{t,k,j}(X_{t,k,0}) is the message passed from camera j to the ground plane, evaluated at X_{t,k,0}. The single-camera particle filters contribute the messages:

ω_{t,k,j}(X_{t,k,0}) ∝ Σ_{n=0}^{N} π^{(n)}_{t,k,j} ψ_{k,j}(X_{t,k,0}, X^{(n)}_{t,k,j})   (24)

For simplicity, the index k of corresponding trackers on the ground plane and in camera j is taken to be identical. ψ_{k,j} is the potential function based on the projected principal axis, defined as:

ψ_{k,j}(X^{(n)}_{t,k,0}, X^{(m)}_{t,k,j}) ∝ exp( -Δ²(X^{(n)}_{t,k,0}, s^{(m)}_{t,k,j}) )   (25)

where X^{(n)}_{t,k,0} is particle n of target k in the reference plane and s^{(m)}_{t,k,j} is the principal axis of the corresponding target in camera j projected from particle m. Each particle m contributes information to the weight of particle n:

π^{(n)}_{t,k,0} ∝ ∏_{j=1}^{N_{t,k}} ω_{t,k,j}(X^{(n)}_{t,k,0})   (26)

so the weight of particle n aggregates the messages transmitted from all the camera filters. The closer a reference-plane particle lies to a projected axis, the higher its weight; the particles near the axes of the same target in the different cameras therefore receive the highest weights.
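Formulas (24)-(26) as a sketch. Here delta(x, s) stands for the point-to-axis distance Δ, and the particle sets are simple arrays; both representations are assumptions for illustration.

```python
import numpy as np

def ground_particle_weights(ground_particles, cam_axes, cam_weights, delta):
    """Formulas (24)-(26): weight each ground-plane particle of target k by the
    product over cameras of the message omega_{t,k,j}.

    ground_particles : (N, 2) particles of target k on the ground plane
    cam_axes         : list over cameras j of the M_j projected axis segments
    cam_weights      : list over cameras j of (M_j,) particle weights pi
    delta            : delta(x, s) -> distance between ground point x and axis s
    """
    log_w = np.zeros(len(ground_particles))
    for segs, pis in zip(cam_axes, cam_weights):
        for n, x in enumerate(ground_particles):
            # (24)-(25): message = sum_m pi_m * exp(-Delta^2(x, s_m))
            msg = sum(p * np.exp(-delta(x, s) ** 2) for s, p in zip(segs, pis))
            log_w[n] += np.log(max(msg, 1e-300))
    w = np.exp(log_w - log_w.max())
    return w / w.sum()            # (26): normalized product of camera messages
```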
A feedback mechanism from the multi-camera tracking to the single-camera tracking is used, which achieves correct tracking under occlusion. The method is realized through an improved single-camera proposal distribution:

p(X_{t,k,j} | X_{t-1,k,j}, X_{t-1,k,0}) ∝ α_{t,k,j} p(X_{t,k,j} | X_{t-1,k,j}) + (1 - α_{t,k,j}) p(X_{t,k,j} | X_{t-1,k,0})   (27)

X_{t-1,k,0} is obtained either explicitly by projecting the ground-plane tracker with H_j or by sampling from the previous particle filter. The parameter α_{t,k,j} controls the mixture. Its value is set to 0.5, but when a tracker is performing well the value can change and the information from the other cameras is not needed; when a tracker performs poorly, the support of the global information is needed. Therefore, in the embodiment a quality measure sets the value: α_{t,k,j} = π_{t,k,j}.
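The feedback proposal of formula (27) is a two-component mixture. A minimal sketch, where the Gaussian motion kernels around the two conditioning states are an illustrative assumption:

```python
import numpy as np

def mixture_propose(x_prev_cam, x_prev_ground, alpha, sigma=1.0,
                    rng=np.random.default_rng()):
    """Formula (27): with probability alpha propose from the single-camera
    motion model, otherwise from the state fed back from the ground plane
    (projected into the camera beforehand via H_j)."""
    center = x_prev_cam if rng.random() < alpha else x_prev_ground
    return center + rng.normal(0.0, sigma, size=np.shape(center))

# alpha_{t,k,j} comes from the tracker's quality measure, e.g. its weight:
# alpha = pi_t_k_j  (falls back to the global estimate when the tracker is weak)
```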
Experimental results
To assess the performance of the target-tracking method above, the embodiment was tested in two indoor environments. Experimental environment one tracks 4 target persons under two cameras; experimental environment two tracks two target persons under three cameras. In each environment, tracking marks the bounding box of each target person and draws the corresponding tracking trajectory.
Trajectory analysis and comparison of the methods
Current multi-camera multi-target tracking methods divide into region-based methods and point-based methods. In region-based methods, color is the typical feature; but because color is quite unreliable for establishing target correspondence, it is rarely applied to detection on its own and is usually fused with other features. The method of the embodiment is therefore compared here with point-based detection methods.
Table 1 shows the tracking failure statistics of the described methods over 5000 frames. A failure is registered automatically when the tracker's target position deviates by 50 pixels from the pseudo-ground-truth position. As the table shows, with the MCMC particle-filter method, the higher the particle sample count, the fewer the tracking failures.
Table 1. Tracking failure rates observed over 5000 frames

Tracker | Samples | Failure rate
MCMC | 50 | 2.4%
MCMC | 100 | 0.8%
MCMC | 200 | 0.5%
MCMC | 1000 | 0.3%
Independent particle filters | 10 per target | 2.9%
Independent particle filters | 50 per target | 2.4%
Independent particle filters | 100 per target | 2.3%
Joint particle filter | 50 | 10.8%
Joint particle filter | 100 | 10.4%
Joint particle filter | 200 | 9.6%
Joint particle filter | 1000 | 7.8%
Fig. 5 quantifies the tracking results of each tracker by comparing three different samplers; each tracker uses 10000 samples to track 3 target persons. In the figure, the ordinate for each tracker is the tracking error rate and the abscissa the sample count. As the figure shows, under equal conditions the joint particle filter has a higher error rate than multiple independent particle filters, but the independent filters increase the tracker's computation and running time; the MCMC particle filter is better than both in time and computation, and its error rate is lower.
Comparing the method of the embodiment directly with point-based methods is difficult, because different methods use different geometric constraints and applications, and their matching algorithms differ. Since the output of many methods is the target's motion trajectory, target trajectories are used as the basis for comparing the embodiment's method with the point-based methods. Because the centroid (the center of a target person's individual foreground region) is the most characteristic point, the centroid is used as the selected point.
The experiment compares the trajectory of each target in the image plane. Since no rule directly determines which trajectory is more accurate, the manually obtained true centroid trajectory is compared with the estimated centroid trajectory, along with the corresponding ground-point trajectories. The centroid-trajectory error is measured by the mean distance between the estimated centroid and the true centroid in each frame, and likewise the ground-point-trajectory error by the distance between the estimated ground point and the true ground point in each frame.
The comparison results are visible in Fig. 6(a). In this frame, camera 1 has three people in its field of view and camera 2 has four, one of whom is almost completely occluded.
The corresponding tracking results are compared in Fig. 6(b).
Fig. 7(a) compares the trajectory obtained by the embodiment's method in camera 2's field of view with the true ground-point data trajectory; the trajectory error of the embodiment is 3.2 pixels. Fig. 7(b) compares the tracked centroid trajectory with the true centroid data trajectory; the centroid-trajectory error is 5.8 pixels. As Fig. 7 shows, the tracking used in the embodiment is more accurate and smoother than the centroid track: when a target is occluded, the estimated centroid drifts far from the real centroid, whereas the estimated ground point stays very close to the true ground point.
A similar comparison was carried out in experimental environment two; Fig. 8 shows the tracking results in the three camera views. Fig. 9 shows the comparison results: the trajectory error of the embodiment's method is 2.2 pixels, against 4.5 pixels for the centroid trajectory. Fig. 9(a) compares the trajectory obtained by the embodiment's method in camera 2's view with the true ground-point data trajectory; Fig. 9(b) compares the tracked centroid trajectory with the true centroid data trajectory.
All the comparisons above demonstrate that the target-tracking algorithm of the embodiment is more efficient and more robust than the centroid-based method.
Comparison of the tracking performance of the methods
To assess the accuracy and robustness of the algorithm, the experiment compares it with other classical methods of the computer vision field, using the following standard tracking-performance metrics:
(1) Normalized Multiple Object Detection Precision (N-MODP), which reflects the detection rate and detection precision for the targets.
(2) Multiple Object Tracking Precision (MOTP), which measures tracking precision.
(3) Sequence Frame Detection Accuracy (SFDA), which measures per-frame detection accuracy.
(4) Average Tracking Accuracy (ATA), which measures spatio-temporal tracking accuracy.
The experiment assesses the tracking performance of four methods: independent SIR filtering with and without fusion (SIR-SM, SIR-CM), and mixed SIR/MCMC filtering with and without fusion (MCMC-SM, MCMC-CM).
Table 2. Comparison of the tracking results of each method in experimental environment one

Filter | SFDA | ATA | N-MODP | MOTP
MCMC-SM | 0.21 | 0.19 | 0.06 | 0.31
MCMC-CM | 0.28 | 0.38 | 0.08 | 0.61
SIR-SM | 0.21 | 0.27 | 0.05 | 0.40
SIR-CM | 0.24 | 0.27 | 0.06 | 0.43
As Table 2 shows, in experimental environment one the best tracking performance is obtained with MCMC tracking under multiple cameras. On the other hand, in the frontal view SIR-CM performs best under multiple cameras, possibly because in such cases the targets are too close together, so modeling their interaction reduces the tracking accuracy.
Table 3. Comparison of the tracking results of each method in experimental environment two

Filter | SFDA | ATA | N-MODP | MOTP
MCMC-SM | 0.27 | 0.44 | 0.06 | 0.48
MCMC-CM | 0.15 | 0.2 | 0.04 | 0.42
SIR-SM | 0.24 | 0.35 | 0.06 | 0.45
SIR-CM | 0.34 | 0.28 | 0.07 | 0.50
As Table 3 shows, the MCMC method has better spatial detection performance (SFDA and N-MODP) and better spatio-temporal detection and tracking performance for multiple targets (ATA and MOTP), with SIR-CM giving similar results in this respect.
Overall, MCMC has the better tracking performance: occlusion can reduce its precision, but this same property keeps it from losing targets; the method can therefore be considered to give more accurate tracking and smoother paths under multiple cameras.
The embodiment has set out a particle-filter-based tracking algorithm for a multi-camera environment that achieves robust multi-target tracking across multiple cameras. Between the cameras, detection of a target person's principal axis is based mainly on the intersection between the target's principal axis in one view and the line obtained by transforming the corresponding axis from another view; this intersection is used to update the target person's ground point in each single-camera view. The algorithm uses a fast and refined axis-based data-fusion process and, when targets are close together and form a joint state, exploits the advantage of the MCMC sampling step within each single camera. The algorithm has a degree of robustness to occlusion, generates smooth tracking trajectories, and maintains good tracking performance in the experimental environments compared with the other algorithms.

Claims (9)

1. A multi-camera multi-target tracking algorithm based on particle filtering, characterized in that:
a second-order auto-regressive motion model and an observation model based on target color and motion histograms are established, and a particle filter performs tracking in each single camera;
the target detection data of the cameras are fused in a central processing unit, and the targets of the multiple cameras are tracked effectively on the basis of the particle filters.
2. The multi-camera multi-target tracking algorithm based on particle filtering of claim 1, characterized in that: a message-passing framework carries inference from the single cameras to the multi-camera level; the result approximates the tracking result of each tracker k on the ground plane, whose posterior probability is expressed as

p(X_{t,k,0} | Z_{t,k}) ∝ ∏_{j=1}^{N_{t,k}} ω_{t,k,j}(X_{t,k,0}) ∫ p(X_{t,k,0} | X_{t-1,k,0}) p(X_{t-1,k,0} | Z_{t-1,k}) dX_{t-1,k,0}   (23)

where N_{t,k} is the number of measurements associated with X_{t,k,0}, and ω_{t,k,j}(X_{t,k,0}) is the message passed from camera j to the ground plane, evaluated at X_{t,k,0}.
3. The multi-camera multi-target tracking algorithm based on particle filtering of claim 2, characterized in that a feedback mechanism from the multi-camera tracking to the single-camera tracking is used, which achieves correct tracking under occlusion; it is realized through an improved single-camera proposal distribution:

p(X_{t,k,j} | X_{t-1,k,j}, X_{t-1,k,0}) ∝ α_{t,k,j} p(X_{t,k,j} | X_{t-1,k,j}) + (1 - α_{t,k,j}) p(X_{t,k,j} | X_{t-1,k,0})   (27)

X_{t-1,k,0} is obtained either explicitly by projecting the ground-plane tracker with H_j or by sampling from the previous particle filter; the parameter α_{t,k,j} controls the mixture, and a quality measure sets its value: α_{t,k,j} = π_{t,k,j}.
4. The multi-camera multi-target tracking algorithm based on particle filtering of any one of claims 1-3, characterized in that a temporal prior on the association relations is used to speed up the data-fusion process.
5. The multi-camera multi-target tracking algorithm based on particle filtering of claim 4, characterized in that a check based on the Mahalanobis distance is performed first; if it is satisfied, the association is kept, otherwise the matching relation is searched for as follows: for each potential association (k_0, k_j), the candidate Mahalanobis value is computed as

M_{k_0,k_j} = (1/σ_Δ²) Δ²(ŝ_{t,k_j,j}, C X̂_{t,k_0})   (21)

where ŝ_{t,k_j,j} is the average projected axis segment of k_j and X̂_{t,k_0} is the mean state of k_0;
candidate association values below the threshold τ_M are rejected; the distance of formula (21) is computed only for targets k_j in the neighborhood of target k_0; the remaining association values must satisfy

k̂_0 = argmin_{k_0 ∈ N_{k̂_j}} M_{k̂_j,k_0},  k̂_j = argmin_{k_j ∈ N_{k̂_0}} M_{k_j,k̂_0}   (22).
6. The multi-camera multi-target tracking algorithm based on particle filtering of claim 5, characterized in that, for principal-axis detection and matching between the cameras:
first, the principal axis of each target person is detected and tracked by each single camera, with all cameras operating synchronously;
then the detection results are fused through the homography matrices to achieve matching between the cameras, yielding a unified, fused detection result, which is fed back into the detection of each camera.
7. The multi-camera multi-target tracking algorithm based on particle filtering of claim 6, characterized in that the steps of the principal-axis matching algorithm are as follows:
(1) pair the principal axes of the target persons detected in the two camera views two by two, build a table θ of all possible axis pairings, and compute the corresponding axis matching distance for each pairing;
(2) for each pair {m, n} in table θ, check whether it satisfies the constraint D_{mn}^{(i,j)} < D_T, where D_T is a predefined threshold for deciding whether a correspondence is genuine; if the test fails, {m, n} is deleted from θ, so that θ contains only the pairs satisfying the correspondence constraint;
(3) from table θ, build all pairing models; a pairing model Θ_k is a set of at most l matched pairs, Θ_k = {(L^i_{k_1}, L^j_{k'_1}), (L^i_{k_2}, L^j_{k'_2}), …, (L^i_{k_l}, L^j_{k'_l})}, where k is the pairing-model index;
(4) among the pairing models Θ, find the model λ with the minimum total corresponding distance:

λ = argmin_k ( Σ_{w=1}^{l} D^{(i,j)}_{(k_w,k'_w)} )   (15)

all axis pairs in model Θ_λ are matches;
(5) label the axis pairs in model Θ_λ.
8. The multi-camera multi-target tracking algorithm based on particle filtering of claim 7, characterized in that the single-camera multi-target tracking algorithm based on the MCMC-sampling particle filter is as follows:
(1) at time t-1, the target states are represented by an unweighted particle set {X^r_{t-1}}; each sample contains the joint state X^r_{t-1} = {X^r_{1(t-1)}, …, X^r_{n(t-1)}};
(2) initialize the MCMC sampler: at time t, draw X_t from the predictive density by randomly selecting a joint sample X^r_{t-1} and moving every target i in it through the motion model;
(3) obtain samples through MH iterations;
(4) at time t, sample from the resulting chain to represent the joint state of the targets.
9. The particle filter-based multi-camera multi-target tracking algorithm as claimed in claim 8, characterized in that the specific steps by which the MH iteration obtains a sample are:
1) MH iteration step:
i. Randomly select a joint sample from the unweighted samples of the previous frame;
ii. Randomly select a target i from the n targets; this is the target to be iterated;
iii. Using the i-th target's state, sample conditionally from its motion model to obtain X'_{it};
2) Compute the acceptance ratio:
r = \min\left(1, \frac{P(Z_{it} \mid X'_{it}) \prod_{j \in E_i} \psi(X'_{it}, X'_{jt})}{P(Z_{it} \mid X_{it}) \prod_{j \in E_i} \psi(X_{it}, X_{jt})}\right)    (8)
where ψ(X'_{it}, X'_{jt}) denotes the pairwise interaction between the two particles;
3) If r ≥ 1, accept X'_{it} and set the i-th target in X_t to X'_{it}; otherwise accept with probability r; if the proposal is rejected, the target state is left unchanged;
4) Add the current X_t to the new sample set as a copy.
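Claims 8 and 9 together describe one time step of the MCMC-sampling particle filter. The sketch below follows that structure, but the single-target likelihood, the motion model, and the pairwise interaction ψ are hypothetical stand-ins (a Gaussian random walk and a soft repulsion term), not the color/motion-histogram and second-order autoregressive models of the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_model(x):
    """Stand-in proposal: Gaussian random walk (assumed)."""
    return x + rng.normal(0.0, 1.0, size=x.shape)

def likelihood(z, x):
    """Stand-in for P(Z_it | X_it): Gaussian observation score."""
    return float(np.exp(-0.5 * np.sum((z - x) ** 2)))

def psi(xi, xj):
    """Stand-in pairwise interaction: discourages overlapping targets."""
    return 1.0 - np.exp(-0.5 * np.sum((xi - xj) ** 2))

def mcmc_pf_step(particles, Z, n_iter=200):
    """One time step of claims 8-9.
    particles : (R, n, d) unweighted joint samples from time t-1
    Z         : (n, d) current observations, one per target
    """
    R, n, _ = particles.shape
    # Claim 8, step (2): pick one joint sample at random and move all
    # targets through the motion model to initialize the chain
    X = motion_model(particles[rng.integers(R)].copy())
    new_samples = []
    for _ in range(n_iter):
        # Claim 9, steps i-iii: pick a base joint sample from t-1,
        # pick a target i, and propose X'_it from its motion model
        base = particles[rng.integers(R)]
        i = int(rng.integers(n))
        x_prop = motion_model(base[i])
        others = [j for j in range(n) if j != i]  # interacting set E_i
        # Acceptance ratio of equation (8)
        num = likelihood(Z[i], x_prop) * np.prod([psi(x_prop, X[j]) for j in others])
        den = likelihood(Z[i], X[i]) * np.prod([psi(X[i], X[j]) for j in others])
        r = num / max(den, 1e-12)
        if r >= 1.0 or rng.random() < r:
            X[i] = x_prop              # step 3): accept the move
        new_samples.append(X.copy())   # step 4): add current X_t as a copy
    return np.stack(new_samples)

# Toy demo: 50 joint samples, 3 targets in 2-D
particles = rng.normal(size=(50, 3, 2))
Z = rng.normal(size=(3, 2))
print(mcmc_pf_step(particles, Z).shape)  # (200, 3, 2)
```

In practice the chain would be thinned after a burn-in period rather than keeping every iterate, but the loop structure above mirrors the claimed steps.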
CN201410564116.8A 2014-10-21 2014-10-21 Multi-camera multi-target tracking algorithm based on particle filtering Pending CN104376577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410564116.8A CN104376577A (en) 2014-10-21 2014-10-21 Multi-camera multi-target tracking algorithm based on particle filtering

Publications (1)

Publication Number Publication Date
CN104376577A 2015-02-25

Family

ID=52555467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410564116.8A Pending CN104376577A (en) 2014-10-21 2014-10-21 Multi-camera multi-target tracking algorithm based on particle filtering

Country Status (1)

Country Link
CN (1) CN104376577A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080259163A1 (en) * 2007-04-20 2008-10-23 General Electric Company Method and system for distributed multiple target tracking
CN101751677A (en) * 2008-12-17 2010-06-23 中国科学院自动化研究所 Target continuous tracking method based on multi-camera

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Francisco Madrigal et al., "Multiple view, multiple target tracking with principal axis-based data association", 2011 8th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS). *
Weiming Hu et al., "Principal Axis-Based Correspondence between Multiple Cameras for People Tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence. *
Zia Khan et al., "An MCMC-Based Particle Filter for Tracking Multiple Interacting Targets", European Conference on Computer Vision (ECCV), 2004. *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778690A (en) * 2015-04-02 2015-07-15 中国电子科技集团公司第二十八研究所 Multi-target positioning method based on camera network
CN105426820A (en) * 2015-11-03 2016-03-23 中原智慧城市设计研究院有限公司 Multi-person abnormal behavior detection method based on security monitoring video data
CN105426820B (en) * 2015-11-03 2018-09-21 Multi-person abnormal behavior detection method based on security monitoring video data
CN107093171A (en) * 2016-02-18 2017-08-25 Image processing method, device and system
CN107093171B (en) * 2016-02-18 2021-04-30 腾讯科技(深圳)有限公司 Image processing method, device and system
US10672140B2 (en) 2016-05-27 2020-06-02 Beijing Kuangshi Technology Co., Ltd. Video monitoring method and video monitoring system
CN105872477A (en) * 2016-05-27 2016-08-17 北京旷视科技有限公司 Video monitoring method and system
CN105872477B (en) * 2016-05-27 2018-11-23 Video monitoring method and video monitoring system
WO2018209934A1 (en) * 2017-05-19 2018-11-22 清华大学 Cross-lens multi-target tracking method and apparatus based on space-time constraints
CN107481269A (en) * 2017-08-08 2017-12-15 Mine multi-camera moving target continuous tracking method
CN111486820A (en) * 2019-01-25 2020-08-04 学校法人福冈工业大学 Measurement system, measurement method, and storage medium
JP2020118614A (en) * 2019-01-25 2020-08-06 学校法人福岡工業大学 Measuring system, measuring method, and measuring program
CN109993798A (en) * 2019-04-09 2019-07-09 Method, device and storage medium for detecting motion trajectories with multiple cameras
CN109993798B (en) * 2019-04-09 2021-05-28 上海肇观电子科技有限公司 Method and equipment for detecting motion trail by multiple cameras and storage medium
CN110290287A (en) * 2019-06-27 2019-09-27 Multi-camera frame synchronization method
CN110290287B (en) * 2019-06-27 2022-04-12 上海玄彩美科网络科技有限公司 Multi-camera frame synchronization method
CN111182210A (en) * 2019-12-31 2020-05-19 浙江大学 Binocular analysis double-tripod-head multi-target tracking camera
CN112200841A (en) * 2020-09-30 2021-01-08 杭州海宴科技有限公司 Cross-domain multi-camera tracking method and device based on pedestrian posture

Similar Documents

Publication Publication Date Title
CN104376577A (en) Multi-camera multi-target tracking algorithm based on particle filtering
Wei et al. A vision and learning-based indoor localization and semantic mapping framework for facility operations and management
Dewan et al. Motion-based detection and tracking in 3d lidar scans
EP2858008B1 (en) Target detecting method and system
Boniardi et al. Robot localization in floor plans using a room layout edge extraction network
US20140169639A1 (en) Image Detection Method and Device
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
WO2021031954A1 (en) Object quantity determination method and apparatus, and storage medium and electronic device
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
KR101953626B1 (en) Method of tracking an object based on multiple histograms and system using the method
Papaioannou et al. Tracking people in highly dynamic industrial environments
Yahiaoui et al. A people counting system based on dense and close stereovision
Sanchez-Matilla et al. A predictor of moving objects for first-person vision
CN112562005A (en) Space calibration method and system
Li et al. Lane marking quality assessment for autonomous driving
Raza et al. Framework for estimating distance and dimension attributes of pedestrians in real-time environments using monocular camera
Efrat et al. Semi-local 3d lane detection and uncertainty estimation
Akai Mobile robot localization considering uncertainty of depth regression from camera images
von Rueden et al. Street-map based validation of semantic segmentation in autonomous driving
RU2534827C2 (en) Method for video surveillance of open space with fire hazard monitoring
Ibisch et al. Arbitrary object localization and tracking via multiple-camera surveillance system embedded in a parking garage
Lim et al. Integrated position and motion tracking method for online multi-vehicle tracking-by-detection
Arnaud et al. Partial linear gaussian models for tracking in image sequences using sequential monte carlo methods
Bravo et al. Outdoor vacant parking space detector for improving mobility in smart cities
Dong et al. Ellipse regression with predicted uncertainties for accurate multi-view 3d object estimation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150225