CN102243765A - Multi-camera-based multi-objective positioning tracking method and system - Google Patents

Multi-camera-based multi-objective positioning tracking method and system

Info

Publication number
CN102243765A
Authority
CN
China
Prior art keywords
homography matrix
tracking
viewpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011101171193A
Other languages
Chinese (zh)
Inventor
姜明新
李敏
赵继印
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Minzu University
Original Assignee
Dalian Nationalities University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Nationalities University
Priority to CN2011101171193A priority Critical patent/CN102243765A/en
Publication of CN102243765A publication Critical patent/CN102243765A/en
Legal status: Pending

Landscapes

  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a multi-camera-based multi-target positioning and tracking method. The method comprises the following steps: first, installing a plurality of cameras at multiple viewpoints, delimiting a common surveillance area for the cameras, and calibrating a plurality of height layers; then sequentially performing foreground extraction, homography matrix calculation, foreground likelihood fusion, and multi-layer fusion; extracting the positioning information obtained on the selected height layers in the foreground likelihood fusion step; processing the positioning information of each layer with a shortest-path algorithm to obtain the tracking path on each layer; and, combined with the foreground extraction results, completing three-dimensional tracking of the multiple targets. With the disclosed method, the vanishing points of the cameras need not be computed during tracking, and a codebook model is introduced for the first time to solve the multi-target tracking problem, improving tracking accuracy; the method offers good stability, good real-time performance, and high precision.

Description

Multi-camera-based multi-target positioning and tracking method and system
Technical field
The present invention relates to a multi-camera-based multi-target positioning and tracking method and system.
Background technology
Multi-target tracking is a hot research topic in computer vision; it is the prerequisite and basis for understanding the behavior of moving targets, and it is widely applied in fields such as robot navigation, video understanding, and intelligent surveillance. In practice, the hardest part of multi-target tracking is that the targets frequently occlude each other. In that case a single foreground region may belong to several targets, and with traditional cues such as color distribution and shape it is almost impossible to detect and track the moving targets accurately. In recent years many researchers have studied this problem; the proposed methods fall roughly into two classes: single-camera tracking and multi-camera tracking.
Although single-camera tracking is relatively simple, a target in 3D space loses much information during 2D imaging, so detecting and tracking from a single viewpoint can hardly resolve occlusion and therefore cannot yield accurate tracking. Multi-camera tracking has consequently attracted growing attention from researchers: multiple cameras observe the targets independently from different viewpoints, and the information from the views is fused to resolve occlusion and achieve stable tracking.
The method proposed in publication CN101887587, a multi-target tracking method based on moving-target detection in video surveillance, is a single-camera method; when a target is completely occluded, its tracking is not stable enough. Moreover, that method relies entirely on single-camera moving-target detection, whose results are often inaccurate, which inevitably makes the tracking inaccurate. Publication CN101154289A describes a 3D human-motion tracking method based on multiple cameras; it mainly addresses motion tracking of the joints of a 3D human skeleton, a problem different from the one solved by our method.
Khan S. et al. published the paper "Tracking Multiple Occluding People by Localizing on Multiple Scene Planes" in IEEE Transactions on Pattern Analysis and Machine Intelligence in 2009, proposing a multi-camera tracking method that requires no full camera calibration and realizes multi-target tracking based on homography constraints between the cameras.
However, that method models the background with a Gaussian mixture model, which often produces holes and shadows; the resulting detection failures strongly affect the subsequent tracking. It must also compute the vanishing points of each camera before the homography matrices between the views can be obtained; vanishing-point computation is a rather complicated process, and errors in the vanishing points propagate into the homography matrices and ultimately cause the multi-target tracking to fail. A further problem is that the method processes the positioning information of 10 layers in space with graph-theoretic techniques, so it cannot meet real-time requirements at all. These three significant drawbacks greatly limit its application in engineering.
Summary of the invention
A multi-view multi-target positioning and tracking method comprises the following steps:
first, a plurality of cameras are installed at multiple viewpoints, a common surveillance area of the cameras is delimited, and a plurality of height layers are calibrated;
a foreground extraction step: a codebook model is used to build a background model of the captured video, and background subtraction yields the foreground likelihood image of each view;
a homography matrix calculation step: using the center positions of a plurality of markers on the calibrated height layers, the homography matrices between the views are computed on each calibrated height layer;
a foreground likelihood fusion step: one of the views is selected as the reference view; using the per-layer homography matrices between the views, the foreground likelihood images of the other views extracted in the foreground extraction step are mapped into the reference view, yielding a fused foreground likelihood image of all views;
a multi-layer fusion step: the positioning information on the selected height layers obtained in the foreground likelihood fusion step is extracted; a shortest-path algorithm processes the positioning information of each layer to obtain the multi-layer tracking path, and, combined with the foreground detection results, three-dimensional tracking of the multiple targets is completed.
In the foreground extraction step, the background subtraction operates as follows:
during target detection, let the newly arrived pixel be x_t = (R, G, B) and let its codebook be M;
step 1: compute the brightness I = R + G + B of the current pixel, define a Boolean variable match = 0, and assign a value to the threshold variable ε;
step 2: search the codebook M for a matching code word C_m; if a matching code word C_m is found, the pixel is judged to be background and is discarded; the criteria for a code word C_m to match are as follows:
a. the color distortion between pixel x_t and the code word is below the detection threshold ε.
The color distortion between the pixel x_t arriving at time t and a code word v_m is defined as

colordist(x_t, v_m) = sqrt( ||x_t||^2 − <x_t, v_m>^2 / ||v_m||^2 )

where ||x_t||^2 = R^2 + G^2 + B^2, ||v_m||^2 = R̄^2 + Ḡ^2 + B̄^2, and <x_t, v_m>^2 = (R̄R + ḠG + B̄B)^2; R, G, B are the pixel's values in the R, G, B channels of the video, and R̄, Ḡ, B̄ are the mean values of the R, G, B channels over the samples i = 1, 2, …, N assigned to the code word.
b. the brightness of pixel x_t lies within the brightness range of the code word.
In moving-target detection the brightness varies within a range; for each code word this range is [I_low, I_hi], where I_low and I_hi are respectively the minimum and maximum of the brightness variation.
In the homography matrix calculation step, the homography matrix is defined as follows:
of the N cameras described, the video images captured by any two cameras are denoted I_i (i = 1, 2, …, N) and I_j (j = 1, 2, …, N). To guarantee the existence of the homography, the two cameras must image the same region of the reference plane. Let X be an arbitrary point on the plane π, and let its images in I_i and I_j be m_k = (x_k, y_k) and m'_k = (x'_k, y'_k), with k = 1, 2, …, m × n, where m × n is the resolution of the video captured at each view. Define the 3 × 3 matrix

           | h11 h12 h13 |
H_π^{ij} = | h21 h22 h23 |
           | h31 h32  1  |

such that m'_k = H_π^{ij} m_k in homogeneous coordinates. The matrix H_π^{ij} is called the homography matrix between the two cameras induced by the plane π, or simply the homography matrix of plane π; the corresponding mapping is called the homography transformation. Using the homography matrix H_π^{ij}, a point on one image plane yields the corresponding point on the other image plane. The homography matrix H_π^{ij} is a homogeneous invertible matrix with 8 degrees of freedom.
For the selected height layers other than the reference plane, the homography transformation is as follows:
let φ be one of the 2 height-layer planes parallel to the reference plane, and let I_i be the image plane of camera i. The homography matrix between any two image planes I_i and I_j induced by the plane φ is denoted H_φ^{ij}. This homography matrix has 8 degrees of freedom, so 4 pairs of corresponding points are needed to determine it. The matrix H_φ^{ij} is

           | h'11 h'12 h'13 |
H_φ^{ij} = | h'21 h'22 h'23 |
           | h'31 h'32  1   |

and m'_k = H_φ^{ij} m_k, that is:

| x'_k |   | h'11 h'12 h'13 | | x_k |
| y'_k | = | h'21 h'22 h'23 | | y_k |
|  1   |   | h'31 h'32  1   | |  1  |
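As a minimal sketch of the homography transformation just defined, mapping a point through H amounts to a matrix-vector product in homogeneous coordinates followed by normalization. The matrix values below are made up purely for illustration:

```python
# Hypothetical 3x3 homography (illustrative values, not from the patent).
H = [[1.0,   0.0, 10.0],
     [0.0,   1.0,  5.0],
     [0.001, 0.0,  1.0]]

def apply_homography(H, x, y):
    """Map an image point (x, y) through H using homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # divide by w so the third coordinate becomes 1

print(apply_homography(H, 100.0, 50.0))
```

The division by w is what makes the 3 × 3 matrix homogeneous (8 degrees of freedom): scaling all nine entries by a constant leaves the mapped point unchanged.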
The multi-layer fusion step comprises the following sub-steps:
a. on a single height layer, connected-component detection yields the positioning-information blobs of the current frame; the distance between the centroid of each blob and each centroid of the previous frame is computed, and the blob at the shortest distance is taken as the matching target blob;
b. the areas of the blobs on each selected height layer are compared, and the blob with the largest area is chosen as the length and width of the target's three-dimensional tracking box; combined with the height information recorded by the foreground extraction unit, a three-dimensional box is generated, the target is locked, and its tracking is completed.
A multi-camera-based multi-target positioning and tracking system comprises:
a plurality of cameras distributed over multiple viewpoints and a common surveillance area of the cameras, in which a plurality of height layers are selected;
a foreground extraction module, which uses a codebook model to build a background model of the captured video and obtains the foreground likelihood image of each view by background subtraction;
a homography matrix calculation module, which computes the homography matrices between the views on each calibrated height layer from the center positions of a plurality of markers on those layers;
a foreground likelihood fusion module, which selects one of the views as the reference view and, using the multi-layer homography matrices between the views, maps the foreground likelihood images of the other views extracted in the foreground extraction step into the reference view, obtaining a fused foreground likelihood image of all views;
a multi-layer fusion module, which extracts the positioning information on the selected height layers obtained by foreground likelihood fusion and processes the positioning information of each layer with a shortest-path algorithm;
a tracking module, which obtains the multi-layer tracking path and, combined with the foreground detection results, completes three-dimensional tracking of the multiple targets.
The foreground likelihood fusion unit computes the homography matrices between the views on each height layer from the coordinates of the marker centers on the different height layers, selects one of the video surveillance devices forming the multiple viewpoints as the reference view, and maps the foreground likelihood images of the other views into the reference view according to the computed per-layer homography matrices, obtaining the fused foreground likelihood image of all views.
In the tracking process this method does not need to compute the vanishing points of the cameras, which greatly reduces the computational complexity while also improving the accuracy of the homography matrices. A codebook model is introduced for the first time to solve the multi-target tracking problem, improving tracking accuracy. The method localizes the multiple targets on three height layers chosen along the direction perpendicular to the ground and processes the multi-layer positioning information with a shortest-path algorithm, markedly increasing the computation speed. A monocular tracking algorithm cannot obtain three-dimensional target information, whereas fusing the information of multiple cameras makes it possible to track multiple targets with three-dimensional boxes. In summary, the proposed multi-camera multi-target tracking method offers good stability, good real-time performance, and high precision.
Description of drawings
Fig. 1 is a structural diagram of the present invention;
Fig. 2 is a structural diagram of the video processing portion of the present invention;
Fig. 3 is a flow chart of the present invention.
In the figures: 1. video image acquisition unit; 2. video processing portion; 3. storage portion; 201. foreground extraction unit; 202. foreground likelihood fusion unit; 203. multi-layer fusion unit; 204. tracking unit.
Embodiment
Fig. 1 is a structural block diagram of the present invention. As shown in Fig. 1:
A multi-camera-based multi-target positioning and tracking system comprises an image acquisition unit 1 and a calibrated video surveillance region, i.e. a common shooting area of a plurality of cameras; three parallel layers of different heights are chosen along the direction perpendicular to the ground, and calibration poles are placed on each layer. The concrete implementation is as follows: 4 poles are placed perpendicular to the ground in the scene, at the same height; as a preferred embodiment, red markers are placed on the poles so that the video equipment can identify them easily.
A storage portion 3 stores the collected data.
A plurality of cameras are dispersed over a plurality of different viewpoints; to guarantee the accuracy of the video surveillance, the present invention uses at least three video surveillance devices.
Fig. 2 is a structural block diagram of the video processing portion of the invention. As shown in Fig. 2, the video processing portion 2 of the present invention comprises:
a foreground extraction unit 201, which processes the data collected by the video image acquisition unit, separates out the foreground likelihood image captured by each video surveillance device, and sends the extracted foreground likelihood images to the foreground likelihood fusion unit 202;
the foreground likelihood fusion unit 202, which is mainly responsible for computing the homography matrices between the video surveillance devices and selecting the reference view; on each selected height layer, the foreground likelihood images collected at the other views are mapped to the reference view according to the corresponding homography matrix, and the same processing is applied to the other two pre-selected height layers;
the multi-layer fusion unit 203, which extracts the fused foreground likelihood images on the selected height layers and processes the multi-layer positioning information with a shortest-path algorithm;
the tracking unit 204, which obtains the multi-layer tracking path and, combined with the foreground detection results, completes three-dimensional tracking of the multiple targets.
Fig. 3 is a flow chart of the present invention.
As shown in Fig. 3, a multi-view multi-target positioning and tracking method has the following steps:
an image acquisition step, comprising a calibrated video surveillance region, i.e. a common shooting area of a plurality of cameras (at least three), with three layers of different heights parallel to the ground chosen along the direction perpendicular to the ground. The concrete implementation is as follows: 4 poles are placed perpendicular to the ground in the scene, at the same selected height; as a preferred embodiment, red markers are placed on the poles so that the video equipment can identify them easily.
A foreground extraction step: a codebook model is used to build a background model of the captured video, and background subtraction yields the foreground likelihood image of each view. The codebook model is built with the methods recorded in the following references, not repeated here:
Reference 1: A codebook-based moving-target detection algorithm for surveillance video. Computer Engineering, 2007-07-20.
Reference 2: Kim K, Chalidabhongse T H, Harwood D, Davis L. Real-time foreground-background segmentation using codebook model. Real-Time Imaging, 2005, 11(3): 167-256.
The concrete steps of the background subtraction operation are as follows.
Suppose the newly arrived pixel during target detection is x_t = (R, G, B) and its codebook is M. As a preferred embodiment, the background subtraction operation BGS(x_t) is divided into three steps:
Step 1. Compute the brightness I = R + G + B of the current pixel, where R, G, B are the red, green, and blue brightness values in the video. Define a Boolean variable match = 0 and assign a value to the threshold variable ε.
Step 2. Find, according to the following two conditions, the code word C_m in the codebook M that matches the current pixel. If such a code word C_m can be found, set match = 1; otherwise match = 0. Judging whether the current pixel matches a code word C_m requires the following two conditions.
Condition 1: the color distortion between the pixel x_t arriving at time t and the code word v_m, defined as

colordist(x_t, v_m) = sqrt( ||x_t||^2 − <x_t, v_m>^2 / ||v_m||^2 ),

is below the detection threshold ε, where ||x_t||^2 = R^2 + G^2 + B^2, ||v_m||^2 = R̄^2 + Ḡ^2 + B̄^2, and <x_t, v_m>^2 = (R̄R + ḠG + B̄B)^2; here R, G, B are the pixel's values in the R, G, B channels of the video, and R̄, Ḡ, B̄ are the mean values of the R, G, B channels over the samples i = 1, 2, …, N assigned to the code word.
Condition 2: to separate foreground from background, note that in moving-target detection the brightness varies within a range. For each code word, the pixel is brightness-consistent when

I_low ≤ I ≤ I_hi,

where

I_low = α · Î,
I_hi = min(β · Î, Î / α),

Î being the brightness value of the code word C_m in the codebook and [I_low, I_hi] the brightness range of the code word. The value of α ranges over 0.4-0.7 and the value of β over 1.1-1.5.
Step 3. Determine the foreground moving-target pixels:

BGS(x_t) = foreground, if match = 0
           background, if match = 1

where foreground denotes a foreground element and background a background element. If match = 0 the target pixel is judged to be foreground; if match = 1 it is judged to be background and is discarded. If no corresponding code word can be found in the codebook, the pixel is considered foreground and is kept.
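Under the definitions above, the per-pixel decision can be sketched as follows. This is a minimal illustration, not the patented implementation: the threshold, the α and β values (within the stated ranges), and the code word statistics are all made up:

```python
import math

ALPHA, BETA, EPSILON = 0.5, 1.2, 10.0  # illustrative values only

def colordist(pixel, codeword_mean):
    """Color distortion colordist(x_t, v_m) between a pixel and a code word mean."""
    r, g, b = pixel
    rm, gm, bm = codeword_mean
    xt2 = r * r + g * g + b * b              # ||x_t||^2
    vm2 = rm * rm + gm * gm + bm * bm        # ||v_m||^2
    dot2 = (r * rm + g * gm + b * bm) ** 2   # <x_t, v_m>^2
    return math.sqrt(max(xt2 - dot2 / vm2, 0.0))

def bgs(pixel, codebook):
    """BGS(x_t): 'background' if some code word matches, else 'foreground'."""
    brightness = sum(pixel)                  # I = R + G + B
    for mean, i_hat in codebook:             # (v_m, code word brightness)
        i_low = ALPHA * i_hat
        i_hi = min(BETA * i_hat, i_hat / ALPHA)
        if colordist(pixel, mean) < EPSILON and i_low <= brightness <= i_hi:
            return "background"              # match = 1: pixel is discarded
    return "foreground"                      # match = 0: pixel is kept

codebook = [((120.0, 110.0, 100.0), 330.0)]  # one code word with made-up stats
print(bgs((121.0, 111.0, 99.0), codebook))   # near the code word -> background
print(bgs((250.0, 20.0, 20.0), codebook))    # different color -> foreground
```

The two conditions mirror the text: the color test is a distance from the pixel to the line through the origin and v_m, and the brightness test keeps the pixel inside the code word's [I_low, I_hi] band.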
The homography matrix calculation step
The calculation of the homography matrix is now described. The concept of the homography matrix between views is as follows: let π be a reference plane in space that passes through neither of the two camera centers; in the experiments the ground plane is chosen as the reference plane. The images captured by the two cameras (image planes for short) are denoted I_i and I_j. To guarantee the existence of the homography, the two cameras must image the same region of the reference plane. Let X be an arbitrary point on the plane π, and let its images in I_i and I_j be m_k = (x_k, y_k) and m'_k = (x'_k, y'_k), where i and j take values 1, 2, …, N, N is the number of cameras, k = 1, 2, …, m × n, and m × n is the resolution of the video captured at each view. We define the 3 × 3 matrix

           | h11 h12 h13 |
H_π^{ij} = | h21 h22 h23 |
           | h31 h32  1  |

such that m'_k = H_π^{ij} m_k. The matrix H_π^{ij} satisfying this relation is called the homography matrix between the two cameras (or between the two image planes) induced by the plane π, or simply the homography matrix of plane π; the corresponding mapping is called the homography transformation. Using the homography matrix H_π^{ij}, a point on one image plane yields the corresponding point on the other image plane. The homography matrix H_π^{ij} is a homogeneous invertible matrix with 8 degrees of freedom.
As a preferred embodiment, 3 height layers are chosen in the present invention. Let φ be one of the planes parallel to the reference plane, and let I_i be the image plane of camera i. The homography matrix between any two image planes I_i and I_j induced by the plane φ is denoted H_φ^{ij}. Since H_φ^{ij} has 8 degrees of freedom, finding 4 pairs of corresponding points on each layer suffices to compute the homography matrix of that layer. In the experiments, 4 poles are placed in the scene and 4 markers are set on each of the 3 height layers; the centers of the markers in the images are used to compute the homography matrix of each layer. Experiments demonstrate that this approach avoids the complicated process of detecting vanishing points and improves the accuracy of the computed H_φ^{ij}. The matrix is as follows:

           | h'11 h'12 h'13 |
H_φ^{ij} = | h'21 h'22 h'23 |
           | h'31 h'32  1   |

and m'_k = H_φ^{ij} m_k holds, that is:

| x'_k |   | h'11 h'12 h'13 | | x_k |
| y'_k | = | h'21 h'22 h'23 | | y_k |
|  1   |   | h'31 h'32  1   | |  1  |
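With 4 point correspondences per layer, the 8 unknown entries of H_φ^{ij} (h'33 fixed to 1) can be recovered by solving a linear system. The sketch below is a standard direct-linear-transform-style estimate, not the patent's code; the marker centers are made up and correspond to a known translation, which the estimate should reproduce:

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Estimate H (with h33 = 1) from 4 correspondences (x, y) -> (x', y')."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # x' = (h11 x + h12 y + h13) / (h31 x + h32 y + 1), similarly for y',
        # rearranged into two linear equations in the 8 unknowns.
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Made-up marker centers: the second view is the first shifted by (10, 20).
src = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
dst = [(x + 10.0, y + 20.0) for x, y in src]
H = homography_from_4_points(src, dst)
print(np.round(H, 6))
```

Each correspondence contributes two rows, so 4 correspondences give an 8 × 8 system that pins down all 8 degrees of freedom, which is exactly why 4 marker centers per height layer suffice.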
After the homography matrices between the views are computed, the multi-view likelihood fusion step is as follows:
suppose N cameras are chosen and the view of the j-th camera is selected as the reference view. Using the multi-layer homography mapping relations between the views, the formula m'_k = H_φ^{ij} m_k maps the foreground likelihood information L_i(m_k) of the other N − 1 views into the reference view j; the mapped foreground likelihood functions are denoted L_i^{φj}(m'_k). The foreground likelihood images of the N views are then fused in the reference view, and the likelihood function of the fused image is

L_Σ(m'_k) = (1/N) { L_j(m'_k) + Σ_{i=1}^{N−1} L_i^{φj}(m'_k) }

(where i and j take values 1, 2, …, N, N is the number of cameras, and k = 1, 2, …, m × n).
This yields the multi-layer fused foreground likelihood maps between the views; the bright spots on each layer of the fused images are the positioning information of the multiple targets on the layers.
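Once each view's likelihood map has been warped into the reference view, the fusion formula is a per-pixel average. A toy sketch with three 1-D "likelihood maps" (made-up values, already assumed warped into the reference view):

```python
def fuse_likelihoods(reference_map, warped_maps):
    """Per-pixel fusion: L_sum = (1/N) * (L_j + sum of the warped L_i)."""
    n = 1 + len(warped_maps)  # N cameras: the reference view plus N-1 warped views
    fused = []
    for k, l_ref in enumerate(reference_map):
        total = l_ref + sum(m[k] for m in warped_maps)
        fused.append(total / n)
    return fused

# Three views agree on a target at index 2 (high likelihood) but not elsewhere,
# so averaging keeps the consensus bright and suppresses isolated responses.
ref    = [0.0, 0.1, 0.9, 0.2]
warped = [[0.1, 0.0, 0.8, 0.7],
          [0.0, 0.2, 1.0, 0.0]]
print(fuse_likelihoods(ref, warped))
```

The bright spots of the fused map (pixels where all views agree) are exactly the per-layer positioning information the multi-layer fusion step consumes.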
The multi-layer fusion step extracts the positioning information on the selected height layers and processes the multi-layer positioning information with a shortest-path algorithm. The concrete process is: after the fused map of the reference view is obtained, the position of every person on every layer is available. Because several video cameras are used, and provided the homography matrices are accurate, the fused map of the reference view amounts to a foreground segmentation; the result may contain some noise points, so a morphological opening-and-closing operation is first applied as pre-processing. Taking the positioning information of one layer as an example (the other two layers are processed identically), the shortest-path matching proceeds as follows: for any frame of the video, taken as the current frame, connected-component detection yields the positioning-information blob R_i of target i. The centroid of R_i is computed; its coordinates are

x̄ = (1/A) Σ_{(x,y)∈R_i} x
ȳ = (1/A) Σ_{(x,y)∈R_i} y

where A is the area of blob R_i. Next, the distance between each centroid of the current frame and each centroid of the previous frame is computed,

d = sqrt( (x̄_t − x̄_{t−1})^2 + (ȳ_t − ȳ_{t−1})^2 ),

and the blob at the minimum centroid distance is taken as the matching target blob.
The areas of the blobs on each selected height layer are compared, and the blob with the largest area is chosen as the length and width of the target's three-dimensional tracking box; combined with the height information recorded by the foreground extraction unit, a three-dimensional box is generated, the target is locked, and its tracking is completed.
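The centroid computation and nearest-centroid matching described above can be sketched compactly. Blobs are given as lists of (x, y) pixels; all coordinates are made up for illustration:

```python
import math

def centroid(blob):
    """Centroid (x-bar, y-bar) of a blob given as a list of (x, y) pixels."""
    a = len(blob)  # A: the blob's area in pixels
    x_bar = sum(x for x, _ in blob) / a
    y_bar = sum(y for _, y in blob) / a
    return x_bar, y_bar

def match_blob(current_blob, previous_blobs):
    """Index of the previous-frame blob whose centroid is nearest."""
    cx, cy = centroid(current_blob)
    def dist(blob):
        px, py = centroid(blob)
        return math.hypot(cx - px, cy - py)  # Euclidean centroid distance d
    return min(range(len(previous_blobs)), key=lambda i: dist(previous_blobs[i]))

prev = [[(0, 0), (0, 1), (1, 0), (1, 1)],          # centroid (0.5, 0.5)
        [(50, 50), (50, 51), (51, 50), (51, 51)]]  # centroid (50.5, 50.5)
cur  = [(52, 52), (52, 53), (53, 52), (53, 53)]    # centroid (52.5, 52.5)
print(match_blob(cur, prev))                       # matches the nearer blob, index 1
```

Running this per height layer, frame by frame, links each blob to its predecessor along the shortest centroid distance, which is the per-layer tracking path the text describes.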
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent replacement or change according to the technical scheme and inventive concept of the present invention, made by anyone familiar with the technical field within the technical scope disclosed by the present invention, shall be encompassed within the scope of protection of the present invention.

Claims (6)

1. A multi-camera-based multi-target positioning and tracking method, characterized by comprising the steps of:
first installing a plurality of cameras at multiple viewpoints, delimiting a common surveillance area of the cameras, and calibrating a plurality of height layers;
a foreground extraction step: using a codebook model to build a background model of the captured video, and obtaining the foreground likelihood image of each view by background subtraction;
a homography matrix calculation step: computing, from the center positions of a plurality of markers on the calibrated height layers, the homography matrices between the views on each calibrated height layer;
a foreground likelihood fusion step: selecting one of the views as the reference view and, using the per-layer homography matrices between the views, mapping the foreground likelihood images of the other views extracted in the foreground extraction step into the reference view, obtaining a fused foreground likelihood image of all views;
a multi-layer fusion step: extracting the positioning information on the selected height layers obtained in the foreground likelihood fusion step, processing the positioning information of each layer with a shortest-path algorithm to obtain the multi-layer tracking path, and, combined with the foreground extraction results, completing three-dimensional tracking of the multiple targets.
2. the multiple goal positioning and tracing method based on various visual angles according to claim 1, its feature also is: background subtraction method in the described foreground extraction step, the operating process of background subtraction method is as follows:
In the objective definition monitor procedure, newly importing pixel is x t=(B), its corresponding code book is M for R, G,
Step 1 is calculated the brightness I=R+G+B of current pixel, definition Boolean variable match=0, and give threshold value variable ε assignment;
Step 2 finds corresponding code word C in code book M m, if can find corresponding code word C mThen be judged to be background image, reduced, can find corresponding codewords C mCriterion as follows:
A. pixel x tWith the color similarity degree of certain code word greater than detection threshold ε
The color similarity degree is defined as colordist (x t, v m), for the new constantly pixel x that imports of t t
colordist ( x t , v m ) = | | x t | | 2 | | v m | | 2 - < x t , v m > 2 | | v m | | 2
Wherein || x t|| 2=R 2+ G 2+ B 2,
Figure FDA0000059735300000012
Figure FDA0000059735300000013
Wherein the i value is 1,2 ... N, R, G, B are the corresponding value in R, G in the video, the B passage,
Figure FDA0000059735300000014
For all over after getting the i value, the mean value of corresponding R, G, B passage;
B. The brightness of pixel x_t lies within the brightness range of this codeword.
During monitoring of a moving target the brightness varies within a bounded range; for each codeword this range is [I_min, I_max], where I_min and I_max are respectively the minimum and maximum of the recorded brightness variation.
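The codeword-matching test of Step 2 can be sketched as follows. This is an illustrative outline: ε is treated as an upper bound on the colour distance, which is the usual convention in the codebook model of Kim et al.; the codeword field layout and the function name are assumptions introduced here.

```python
import math

def codeword_match(pixel, codeword, eps):
    """Check whether pixel x_t = (R, G, B) matches a codeword.

    codeword: (Rbar, Gbar, Bbar, I_min, I_max) -- the mean colour and the
    recorded brightness bounds (this field layout is an assumption for
    illustration; the claim only names the quantities)."""
    R, G, B = pixel
    Rb, Gb, Bb, i_min, i_max = codeword
    xt2 = R * R + G * G + B * B                # ||x_t||^2
    vm2 = Rb * Rb + Gb * Gb + Bb * Bb          # ||v_m||^2
    dot2 = (Rb * R + Gb * G + Bb * B) ** 2     # <x_t, v_m>^2
    # colordist = sqrt((||x_t||^2 ||v_m||^2 - <x_t,v_m>^2) / ||v_m||^2)
    colordist = math.sqrt(max(xt2 * vm2 - dot2, 0.0) / vm2)
    brightness = R + G + B                     # I = R + G + B
    return colordist <= eps and i_min <= brightness <= i_max
```

A matching codeword classifies the pixel as background; pixels with no match are accumulated into the foreground likelihood image.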
3. The multi-target positioning and tracking method based on multiple viewing angles according to claim 1, further characterized in that: in the homography matrix calculation step, the homography matrix is defined as follows:
From the N cameras, the video images captured by any two cameras are denoted I_i (i = 1, 2, …, N) and I_j (j = 1, 2, …, N). To guarantee that the homography exists, the two cameras must capture the same region of the reference plane. Let X be any point on the plane π; its images in I_i and I_j are m_k = (x_k, y_k) and m′_k = (x′_k, y′_k) respectively, k = 1, 2, …, m×n, where m×n is the resolution of the video captured at each view. Define a 3×3 matrix:
H_{ij}^{\pi} = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{pmatrix}
such that m′_k = H_{ij}^{\pi} m_k.
The matrix H_{ij}^{\pi} is called the homography matrix between the two cameras induced by the plane π, abbreviated as the homography matrix of plane π, and the corresponding transformation is called the homography transformation: multiplying a point on one image plane by the homography matrix H_{ij}^{\pi} yields the corresponding point on the other image plane. The homography matrix H_{ij}^{\pi} is a homogeneous invertible matrix with 8 degrees of freedom;
For the selected height layers other than the reference plane, the homography transformation proceeds as follows:
Let φ be one of the planes parallel to the reference plane, and let I_i be the image plane of camera i. The homography matrix between any two image planes I_i and I_j induced by the plane φ is denoted H_{ij}^{\phi}. This homography matrix H_{ij}^{\phi} has 8 degrees of freedom and therefore requires 4 pairs of corresponding feature points. The matrix H_{ij}^{\phi} is as follows:
H_{ij}^{\phi} = \begin{pmatrix} h'_{11} & h'_{12} & h'_{13} \\ h'_{21} & h'_{22} & h'_{23} \\ h'_{31} & h'_{32} & 1 \end{pmatrix}
m'_k = H_{ij}^{\phi} m_k, that is:
\begin{pmatrix} x'_k \\ y'_k \\ 1 \end{pmatrix} = \begin{pmatrix} h'_{11} & h'_{12} & h'_{13} \\ h'_{21} & h'_{22} & h'_{23} \\ h'_{31} & h'_{32} & 1 \end{pmatrix} \begin{pmatrix} x_k \\ y_k \\ 1 \end{pmatrix}.
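Because the homography has 8 degrees of freedom with h_33 fixed to 1, it can be recovered from the 4 pairs of corresponding points by solving an 8×8 linear system (a direct linear transform). A minimal sketch, with illustrative function names not taken from the patent:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography (h33 fixed to 1, hence 8 unknowns)
    from exactly 4 point correspondences src[k] -> dst[k]."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # Each correspondence contributes two linear equations in the h_ij.
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Map a point (x, y) through H using homogeneous coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[0] / v[2], v[1] / v[2]
```

In the method this estimation is repeated once per height layer, using the calibrated marker centre points on that layer as the 4 correspondences.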
4. The multi-target positioning and tracking method based on multiple viewing angles according to claim 1, further characterized in that: the multilayer fusion step comprises the following sub-steps:
A. On a single height layer, connected-region detection is used to obtain the localization blobs of the current frame; the distance between the centroid of each blob and each centroid of the previous frame is computed, and the blob at the shortest distance is taken as the matched target blob;
B. The areas of the blobs on each selected height layer are compared and the blob with the largest area is chosen; its length and width, combined with the height information recorded by the foreground extraction unit, generate the three-dimensional bounding box of the target, locking onto the target and completing its tracking.
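The shortest-distance centroid matching of sub-step A can be sketched as follows (illustrative only; the function name and data layout are assumptions introduced here):

```python
import math

def match_blobs(prev_centroids, curr_centroids):
    """For each current-frame blob centroid, return the index of the
    nearest previous-frame centroid (the shortest-distance match)."""
    matches = []
    for cx, cy in curr_centroids:
        best = min(range(len(prev_centroids)),
                   key=lambda i: math.hypot(prev_centroids[i][0] - cx,
                                            prev_centroids[i][1] - cy))
        matches.append(best)
    return matches
```

Chaining these per-frame matches across the sequence yields the tracking path of each blob on that height layer.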
5. A multi-target positioning and tracking system based on multiple viewing angles, characterized by comprising:
a plurality of cameras distributed over multiple viewing angles and a common surveillance region of the cameras, within which a plurality of height layers is calibrated;
a foreground extraction module, which uses the codebook model to build a background model of the captured video images and applies background subtraction to obtain the foreground likelihood image of each view;
a homography matrix computation module, which computes the homography matrices between the views on each calibrated height layer from the calibrated centre positions of the markers on the different height layers;
a foreground likelihood fusion module, which selects one of the viewing angles as the reference view and maps the foreground likelihood images of the other views, extracted by the foreground extraction module, into the reference view, obtaining the fused foreground likelihood images of the multiple views;
a multilayer fusion module, which extracts the localization information of the selected height layers obtained from the foreground likelihood fusion and applies a shortest-path algorithm to the localization information of each layer;
a tracking module, which obtains the multilayer tracking paths and, combined with the foreground detection results, completes three-dimensional tracking of the multiple targets.
6. The multi-target positioning and tracking system based on multiple cameras according to claim 5, further characterized in that: the foreground likelihood fusion unit computes the homography matrices between the views on the different height layers from the coordinates of the marker centre points located on those layers, selects one device among the plurality of video surveillance devices forming the multiple views as the reference view, and, according to the computed per-layer homography matrices between the views, maps the foreground likelihood images of the other views into the reference view, obtaining the fused foreground likelihood images of the multiple views.
CN2011101171193A 2011-05-06 2011-05-06 Multi-camera-based multi-objective positioning tracking method and system Pending CN102243765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011101171193A CN102243765A (en) 2011-05-06 2011-05-06 Multi-camera-based multi-objective positioning tracking method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011101171193A CN102243765A (en) 2011-05-06 2011-05-06 Multi-camera-based multi-objective positioning tracking method and system

Publications (1)

Publication Number Publication Date
CN102243765A true CN102243765A (en) 2011-11-16

Family

ID=44961803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011101171193A Pending CN102243765A (en) 2011-05-06 2011-05-06 Multi-camera-based multi-objective positioning tracking method and system

Country Status (1)

Country Link
CN (1) CN102243765A (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567722A (en) * 2012-01-17 2012-07-11 大连民族学院 Early-stage smoke detection method based on codebook model and multiple features
CN102568206A (en) * 2012-01-13 2012-07-11 大连民族学院 Video monitoring-based method for detecting cars parking against regulations
CN103139447A (en) * 2011-11-23 2013-06-05 三星泰科威株式会社 Apparatus and method for detecting object using PTZ camera
CN103177247A (en) * 2013-04-09 2013-06-26 天津大学 Target detection method fused with multi-angle information
CN103593857A (en) * 2013-11-26 2014-02-19 上海电机学院 Multi-sensor data fusion tracking system and method based on fuzzy algorithm
CN103729620A (en) * 2013-12-12 2014-04-16 北京大学 Multi-view pedestrian detection method based on multi-view Bayesian network
CN103735269A (en) * 2013-11-14 2014-04-23 大连民族学院 Height measurement method based on video multi-target tracking
CN103903250A (en) * 2012-12-28 2014-07-02 重庆凯泽科技有限公司 Target tracking algorithm based on multiple vision in motion capture system
CN103983249A (en) * 2014-04-18 2014-08-13 北京农业信息技术研究中心 Plant growth detailed-process image continuous-acquisition system
CN104021538A (en) * 2013-02-28 2014-09-03 株式会社理光 Object positioning method and device
CN104182747A (en) * 2013-05-28 2014-12-03 株式会社理光 Object detection and tracking method and device based on multiple stereo cameras
CN104299236A (en) * 2014-10-20 2015-01-21 中国科学技术大学先进技术研究院 Target locating method based on scene calibration and interpolation combination
CN104517292A (en) * 2014-12-25 2015-04-15 杭州电子科技大学 Multi-camera high-density crowd partitioning method based on planar homography matrix restraint
CN104899894A (en) * 2014-03-05 2015-09-09 南京理工大学 Method for tracking moving object by using multiple cameras
CN104950893A (en) * 2015-06-26 2015-09-30 浙江大学 Homography matrix based visual servo control method for shortest path
CN105629225A (en) * 2015-12-30 2016-06-01 中国人民解放军信息工程大学 Multi-hypothesis target tracking method based on improved K shortest paths
CN105894022A (en) * 2016-03-30 2016-08-24 南京邮电大学 Adaptive hierarchical association multi-target tracking method
CN106652462A (en) * 2016-09-30 2017-05-10 广西大学 Illegal parking management system based on Internet
CN106843280A (en) * 2017-02-17 2017-06-13 深圳市踏路科技有限公司 A kind of intelligent robot system for tracking
CN107430680A (en) * 2015-03-24 2017-12-01 英特尔公司 Multilayer skin detection and fusion gesture matching
US9947106B2 (en) 2014-12-18 2018-04-17 Thomson Licensing Dtv Method and electronic device for object tracking in a light-field capture
CN108961293A (en) * 2018-06-04 2018-12-07 国光电器股份有限公司 A kind of method, apparatus of background subtraction, equipment and storage medium
CN109461174A (en) * 2018-10-25 2019-03-12 北京陌上花科技有限公司 Video object area tracking method and video plane advertisement method for implantation and system
CN109977853A (en) * 2019-03-25 2019-07-05 太原理工大学 A kind of mine group overall view monitoring method based on more identifiers
CN111105436A (en) * 2018-10-26 2020-05-05 曜科智能科技(上海)有限公司 Target tracking method, computer device, and storage medium
CN111275765A (en) * 2018-12-05 2020-06-12 杭州海康威视数字技术股份有限公司 Method and device for determining target GPS and camera
CN113223286A (en) * 2016-11-14 2021-08-06 深圳市大疆创新科技有限公司 Method and system for fusing multi-channel sensing data

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1972370A (en) * 2005-11-23 2007-05-30 中国科学院沈阳自动化研究所 Real-time multi-target marker and centroid calculation method
CN101141633A (en) * 2007-08-28 2008-03-12 湖南大学 Moving object detecting and tracing method in complex scene
CN101159855A (en) * 2007-11-14 2008-04-09 南京优科漫科技有限公司 Characteristic point analysis based multi-target separation predicting method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1972370A (en) * 2005-11-23 2007-05-30 中国科学院沈阳自动化研究所 Real-time multi-target marker and centroid calculation method
CN101141633A (en) * 2007-08-28 2008-03-12 湖南大学 Moving object detecting and tracing method in complex scene
CN101159855A (en) * 2007-11-14 2008-04-09 南京优科漫科技有限公司 Characteristic point analysis based multi-target separation predicting method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
《Real-Time Imaging》 20051231 K. Kim et.al Real-time foreground-background segmentation using codebook model , *
K. KIM ET.AL: "Real-time foreground–background segmentation using codebook model", 《REAL-TIME IMAGING》, 31 December 2005 (2005-12-31) *
MINGXIN JIANG ET.AL: "A Robust Combined Algorithm of Object Tracking Based on Moving Object Detection", 《INTERNATIONAL CONFERENCE ON INTELLIGENT CONTROL AND INFORMATION PROCESSING 2010》, 15 August 2010 (2010-08-15) *
SAAD M. KHAN ET.AL: "Tracking Multiple Occluding People by Localizing on Multiple Scene Planes", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》, vol. 31, no. 3, 31 March 2009 (2009-03-31) *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103139447A (en) * 2011-11-23 2013-06-05 三星泰科威株式会社 Apparatus and method for detecting object using PTZ camera
CN103139447B (en) * 2011-11-23 2018-01-02 韩华泰科株式会社 Use the apparatus and method of ptz camera detection object
CN102568206A (en) * 2012-01-13 2012-07-11 大连民族学院 Video monitoring-based method for detecting cars parking against regulations
CN102568206B (en) * 2012-01-13 2014-09-10 大连民族学院 Video monitoring-based method for detecting cars parking against regulations
CN102567722A (en) * 2012-01-17 2012-07-11 大连民族学院 Early-stage smoke detection method based on codebook model and multiple features
CN103903250A (en) * 2012-12-28 2014-07-02 重庆凯泽科技有限公司 Target tracking algorithm based on multiple vision in motion capture system
CN104021538B (en) * 2013-02-28 2017-05-17 株式会社理光 Object positioning method and device
CN104021538A (en) * 2013-02-28 2014-09-03 株式会社理光 Object positioning method and device
CN103177247A (en) * 2013-04-09 2013-06-26 天津大学 Target detection method fused with multi-angle information
CN103177247B (en) * 2013-04-09 2015-11-18 天津大学 A kind of object detection method merging various visual angles information
CN104182747A (en) * 2013-05-28 2014-12-03 株式会社理光 Object detection and tracking method and device based on multiple stereo cameras
CN103735269A (en) * 2013-11-14 2014-04-23 大连民族学院 Height measurement method based on video multi-target tracking
CN103735269B (en) * 2013-11-14 2015-10-28 大连民族学院 A kind of height measurement method followed the tracks of based on video multi-target
CN103593857A (en) * 2013-11-26 2014-02-19 上海电机学院 Multi-sensor data fusion tracking system and method based on fuzzy algorithm
CN103729620A (en) * 2013-12-12 2014-04-16 北京大学 Multi-view pedestrian detection method based on multi-view Bayesian network
CN103729620B (en) * 2013-12-12 2017-11-03 北京大学 A kind of multi-view pedestrian detection method based on multi-view Bayesian network
CN104899894A (en) * 2014-03-05 2015-09-09 南京理工大学 Method for tracking moving object by using multiple cameras
CN104899894B (en) * 2014-03-05 2017-09-01 南京理工大学 A kind of method that use multiple cameras carries out motion target tracking
CN103983249A (en) * 2014-04-18 2014-08-13 北京农业信息技术研究中心 Plant growth detailed-process image continuous-acquisition system
CN104299236A (en) * 2014-10-20 2015-01-21 中国科学技术大学先进技术研究院 Target locating method based on scene calibration and interpolation combination
CN104299236B (en) * 2014-10-20 2018-02-27 中国科学技术大学先进技术研究院 A kind of object localization method based on scene calibration combined interpolation
US9947106B2 (en) 2014-12-18 2018-04-17 Thomson Licensing Dtv Method and electronic device for object tracking in a light-field capture
CN104517292A (en) * 2014-12-25 2015-04-15 杭州电子科技大学 Multi-camera high-density crowd partitioning method based on planar homography matrix restraint
CN107430680B (en) * 2015-03-24 2023-07-14 英特尔公司 Multi-layer skin detection and fusion gesture matching
CN107430680A (en) * 2015-03-24 2017-12-01 英特尔公司 Multilayer skin detection and fusion gesture matching
CN104950893A (en) * 2015-06-26 2015-09-30 浙江大学 Homography matrix based visual servo control method for shortest path
CN105629225A (en) * 2015-12-30 2016-06-01 中国人民解放军信息工程大学 Multi-hypothesis target tracking method based on improved K shortest paths
CN105629225B (en) * 2015-12-30 2018-05-11 中国人民解放军信息工程大学 A kind of more hypothesis method for tracking target based on improvement K shortest paths
CN105894022A (en) * 2016-03-30 2016-08-24 南京邮电大学 Adaptive hierarchical association multi-target tracking method
CN105894022B (en) * 2016-03-30 2019-05-03 南京邮电大学 A kind of adaptive layered association multi-object tracking method
CN106652462A (en) * 2016-09-30 2017-05-10 广西大学 Illegal parking management system based on Internet
CN113223286A (en) * 2016-11-14 2021-08-06 深圳市大疆创新科技有限公司 Method and system for fusing multi-channel sensing data
CN106843280A (en) * 2017-02-17 2017-06-13 深圳市踏路科技有限公司 A kind of intelligent robot system for tracking
CN108961293A (en) * 2018-06-04 2018-12-07 国光电器股份有限公司 A kind of method, apparatus of background subtraction, equipment and storage medium
CN109461174A (en) * 2018-10-25 2019-03-12 北京陌上花科技有限公司 Video object area tracking method and video plane advertisement method for implantation and system
CN109461174B (en) * 2018-10-25 2021-01-29 北京陌上花科技有限公司 Video target area tracking method and video plane advertisement implanting method and system
CN111105436A (en) * 2018-10-26 2020-05-05 曜科智能科技(上海)有限公司 Target tracking method, computer device, and storage medium
CN111105436B (en) * 2018-10-26 2023-05-09 曜科智能科技(上海)有限公司 Target tracking method, computer device and storage medium
CN111275765A (en) * 2018-12-05 2020-06-12 杭州海康威视数字技术股份有限公司 Method and device for determining target GPS and camera
CN111275765B (en) * 2018-12-05 2023-09-05 杭州海康威视数字技术股份有限公司 Method and device for determining target GPS and camera
CN109977853A (en) * 2019-03-25 2019-07-05 太原理工大学 A kind of mine group overall view monitoring method based on more identifiers
CN109977853B (en) * 2019-03-25 2023-07-14 太原理工大学 Underground worker panoramic monitoring method based on multiple identification devices

Similar Documents

Publication Publication Date Title
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
CN105405154B (en) Target object tracking based on color-structure feature
Xu et al. Cross-view people tracking by scene-centered spatio-temporal parsing
Sidla et al. Pedestrian detection and tracking for counting applications in crowded situations
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
Shitrit et al. Tracking multiple people under global appearance constraints
Ayazoglu et al. Dynamic subspace-based coordinated multicamera tracking
CN104217428B (en) A kind of fusion feature matching and the video monitoring multi-object tracking method of data correlation
CN103530599A (en) Method and system for distinguishing real face and picture face
CN102447835A (en) Non-blind-area multi-target cooperative tracking method and system
Havasi et al. Detection of gait characteristics for scene registration in video surveillance system
Tyagi et al. Kernel-based 3d tracking
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN103729861A (en) Multiple object tracking method
Chen et al. Crowd escape behavior detection and localization based on divergent centers
Xu et al. A real-time, continuous pedestrian tracking and positioning method with multiple coordinated overhead-view cameras
Jung et al. Real-time estimation of 3D scene geometry from a single image
CN106023252A (en) Multi-camera human body tracking method based on OAB algorithm
Liu et al. [Retracted] Mean Shift Fusion Color Histogram Algorithm for Nonrigid Complex Target Tracking in Sports Video
Xiong et al. Crowd density estimation based on image potential energy model
Li et al. Robust object tracking in crowd dynamic scenes using explicit stereo depth
Lo et al. Vanishing point-based line sampling for real-time people localization
Kumar et al. Person tracking with re-identification in multi-camera setup: a distributed approach
Peng et al. Continuous vehicle detection and tracking for non-overlapping multi-camera surveillance system
Ren et al. Multi-view visual surveillance and phantom removal for effective pedestrian detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20111116