CN107580199A - Target positioning and tracking system based on multi-camera cooperation with overlapping fields of view - Google Patents
Target positioning and tracking system based on multi-camera cooperation with overlapping fields of view - Download PDF / Info
- Publication number
- CN107580199A CN107580199A CN201710806796.3A CN201710806796A CN107580199A CN 107580199 A CN107580199 A CN 107580199A CN 201710806796 A CN201710806796 A CN 201710806796A CN 107580199 A CN107580199 A CN 107580199A
- Authority
- CN
- China
- Prior art keywords
- target
- video camera
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Landscapes
- Closed-Circuit Television Systems (AREA)
Abstract
The invention discloses a target positioning and tracking system based on the cooperation of multiple cameras with overlapping fields of view. Pixels within a size range above the top edge of the rectangular frame representing a foreground region are sampled, and candidate target foot points are obtained by combining the geometric constraints of the multiple cameras. At the same time, in views where there is no occlusion between targets, the lower vertices of the rectangular frames representing pedestrians are projected and the centroid of the resulting projection points is computed. Finally the candidate points are weighted, and the target's position is located from the centroid and the weighted candidate foot points. Compared with single-camera target tracking, multiple cameras offer advantages such as a wide monitoring range and more observation viewpoints, and can exploit multi-camera geometric constraints and multiple sources of foreground information to achieve robust tracking, effectively solving the occlusion problem that arises during tracking. Even in views where the target is occluded, high-precision positioning and tracking of the target can still be completed, so the system has good market application value.
Description
Technical field
The present invention relates to the field of intelligent video surveillance, and in particular to a target positioning and tracking system based on multi-camera cooperation with overlapping fields of view.
Background technology
With the rapid development of smart cities and intelligent transportation, intelligent video surveillance technology is advancing quickly and has made major contributions to public safety. However, as the number of surveillance cameras and the monitored area steadily grow, the labor involved in camera installation, offline calibration, measurement, and video analysis increases accordingly. In addition, as monitoring scenes become more complex and variable, traditional single-camera target tracking and positioning methods face enormous challenges in practice. In recent years, multi-camera cooperative video surveillance has attracted growing attention, and vision-based positioning and tracking has gradually shifted from the single-camera to the multi-camera setting, especially for monitoring scenes in which targets occlude one another. Studying target positioning and tracking based on multi-camera cooperation in intelligent video surveillance is therefore of great significance.
In the prior art, target positioning systems based on homography matrices or on principal axes fail under severe occlusion: the targets in the foreground cannot be segmented individually, so a single foreground region may contain several targets. If a rectangular frame is then fitted to that foreground region, the frame may enclose multiple targets, and its four vertices no longer represent the foreground information of any one target; indeed they carry erroneous foreground information, so no reliable positioning result can be obtained from the frame's four vertices.
The prior art is therefore deficient and needs improvement.
Summary of the invention
In order to overcome the defects of the prior art, the invention provides a target positioning and tracking system based on multi-camera cooperation with overlapping fields of view.
The technical solution provided by the invention, a target positioning and tracking system based on multi-camera cooperation with overlapping fields of view, is characterized by comprising a target positioning subsystem and a target tracking subsystem, the target positioning subsystem comprising the following steps:
(1) cameras C1 and C2 each capture a foreground image;
(2) the foreground images are detected by the ViBe detection algorithm and bounding frames are added;
(3) the head foreground in the images of step (2) is sampled to form head sample points;
(4) the head sample points are projected;
(5) each projection point is connected to the corresponding camera-center projection point;
(6) candidate target foot points are computed;
(7) the candidate target foot points are combined by weighted summation to obtain the target foot point;
(8) the chest foreground in the images of step (2) is sampled to form chest sample points;
(9) the chest sample points are projected;
(10) the foot-point centroid is computed;
(11) the target foot point of step (7) and the foot-point centroid of step (10) are summed and averaged;
(12) the target foot-point coordinates are obtained;
The target tracking subsystem comprises the following steps:
1. camera C1 detects and tracks the target, and the target's foot-point coordinates are recorded by the target positioning subsystem;
2. the camera field-of-view (FOV) boundary lines are obtained;
3. camera C2 detects whether a new target appears, and judges whether the new target is within camera C1's field of view;
4. if the new target is not within camera C1's field of view, step 3 is repeated; if it is, the next step is entered;
5. the target nearest the FOV boundary line is found, and it is judged whether that nearest target is occluded;
6. if the nearest target is occluded, step 7 is executed; if not, step 8 is executed;
7. multi-camera cooperative positioning is performed: the re-projection point with the minimum distance to the tracked target's foot point in camera C1 is found, camera C2's current tracking result is updated accordingly, and step 8 is executed;
8. it is judged whether camera C2 detects the target individually; if not, step 7 is executed; if so, the label is assigned to camera C2's currently tracked result.
Preferably, in step (6), the sub-steps for computing the candidate target foot points are as follows:
(a) the GPS coordinates gC1, gC2 of the vertical projections of the camera centers onto the scene plane, called the camera-center projection points, are measured with a GPS receiver;
(b) in each camera's foreground, the two upper vertices of the rectangular frame representing a pedestrian's foreground region are projected onto the scene plane through the homography matrix;
(c) each of these projection points is connected to the corresponding camera-center projection point, yielding four straight lines in the scene plane;
(d) the intersection points p1, p2, p3, p4 of these lines are computed, and their coordinates are taken as the candidate foot-point coordinates.
Let l1 and l2 be two of the homography projection lines, with endpoints (x0, y0), (x1, y1) and (x2, y2), (x3, y3) respectively, so that l1: y = k1(x - x0) + y0 and l2: y = k2(x - x2) + y2, where k1 = (y1 - y0)/(x1 - x0) and k2 = (y3 - y2)/(x3 - x2) are the slopes of the two lines. Equating the two expressions at the common point (x, y) gives
x = (k1·x0 - k2·x2 + y2 - y0)/(k1 - k2), y = k1(x - x0) + y0.
Intersecting, in this way, the lines through gC1 and gC2 yields the candidate target foot points.
Compared with the prior art, the beneficial effects of the target positioning and tracking system of the invention are as follows: relative to single-camera target tracking, multiple cameras offer a wide monitoring range and more observation viewpoints, and can exploit multi-camera geometric constraints and multiple sources of foreground information to achieve robust tracking, effectively solving the occlusion problem that arises during tracking. Even in views where the target is occluded, high-precision positioning and tracking can still be completed, so the system has good market application value.
Brief description of the drawings
Fig. 1 is a flowchart of the target positioning subsystem;
Fig. 2 is a flowchart of the target tracking subsystem;
Fig. 3 is a schematic diagram of ViBe background modeling;
Fig. 4 shows pedestrian foreground regions, represented by rectangular frames, under two viewpoints;
Fig. 5 is a schematic diagram of candidate target foot-point generation;
Fig. 6 is a schematic diagram of foreground detection under two viewpoints in a severe occlusion case;
Fig. 7 is a schematic diagram of candidate target foot-point generation;
Fig. 8 is a schematic diagram of foot-point projection-point generation;
Fig. 9 is a schematic diagram of the relation between the foot-point centroid and the candidate foot points;
Fig. 10 is a schematic diagram of target handoff in the overlapping field of view;
Fig. 11 is a schematic diagram of adjacent cameras' viewing areas.
Embodiment
It should be noted that the above technical features may be further combined with one another to form various embodiments not enumerated above, all of which shall be regarded as falling within the scope described by the present invention; furthermore, those of ordinary skill in the art may make improvements or variations according to the above description, and all such improvements and variations shall fall within the protection scope of the appended claims of the present invention.
To facilitate understanding of the present invention, the invention is described in more detail below with reference to the accompanying drawings and specific embodiments. Preferred embodiments of the invention are given in the drawings. The invention may, however, be implemented in many different forms and is not limited to the embodiments described in this specification. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and comprehensively.
It should be noted that when an element is referred to as being "fixed to" another element, it may be directly on the other element or intervening elements may also be present. When an element is considered to be "connected to" another element, it may be directly connected to the other element or intervening elements may be present at the same time. The terms "vertical", "horizontal", "left", "right" and similar expressions used in this specification are for illustrative purposes only.
Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as are commonly understood by those skilled in the technical field to which the invention belongs. The terms used in the description of the invention herein serve only the purpose of describing specific embodiments and are not intended to limit the invention.
The present invention is elaborated below in conjunction with the accompanying drawings.
As shown in Fig. 1, the target positioning subsystem comprises the following steps:
(1) cameras C1 and C2 each capture a foreground image;
(2) the foreground images are detected by the ViBe detection algorithm and bounding frames are added;
(3) the head foreground in the images of step (2) is sampled to form head sample points;
(4) the head sample points are projected;
(5) each projection point is connected to the corresponding camera-center projection point;
(6) candidate target foot points are computed;
(7) the candidate target foot points are combined by weighted summation to obtain the target foot point;
(8) the chest foreground in the images of step (2) is sampled to form chest sample points;
(9) the chest sample points are projected;
(10) the foot-point centroid is computed;
(11) the target foot point of step (7) and the foot-point centroid of step (10) are summed and averaged;
(12) the target foot-point coordinates are obtained.
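Step (4) above projects sampled image points onto the scene plane through the homography between the camera's image plane and the ground. The patent presumes this homography comes from offline calibration but does not specify it, so the sketch below assumes a given 3x3 matrix H; the function name and interface are illustrative:

```python
import numpy as np

def project_to_ground(points, H):
    """Project image points onto the scene plane via a 3x3 homography H.

    Standard homogeneous-coordinate projection: append 1 to each point,
    multiply by H, then divide by the third coordinate.
    """
    pts = np.asarray(points, dtype=float)          # shape (n, 2)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T           # shape (n, 3)
    return homog[:, :2] / homog[:, 2:3]            # back to inhomogeneous (n, 2)
```

With the identity homography the points are unchanged, which is a quick sanity check on the dehomogenization step.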
Detection of foreground images by the ViBe algorithm and frame addition
The purpose of foreground detection is to extract the moving-object regions from the video captured by the cameras. When a target enters a camera's shooting area, the camera captures each frame of the target as it moves within the camera's field of view. Foreground detection is applied to these frames to segment out the target's foreground regions.
ViBe (Visual Background Extractor) is a highly real-time, pixel-level foreground detection algorithm. Its main idea is to build the initial background model from a single frame while exploiting the spatial correlation of pixels: a sample set is stored for each pixel, the sample values being past values of that pixel and values of its neighboring pixels; each new pixel value is then compared against the sample set to judge whether the new pixel belongs to the background.
The ViBe algorithm mainly comprises three aspects: the operating principle of the background model, the background model initialization method, and the background model update strategy.
Operating principle of the background model: the background is the image of static or slowly moving objects, while the foreground corresponds to the image of moving objects. If a new observation is to be classified as a background point, it should be close to the sample values in the sample set. Let v(x) be the pixel value at point x, and let M(x) = {V1, V2, ..., VN} be the background sample set (of size N) at x. Let SR(v(x)) be the sphere of radius R centered on v(x); if the cardinality #{SR(v(x)) ∩ {V1, V2, ..., VN}} exceeds a given threshold, point x is regarded as belonging to the background. The modeling of the initialized background model is illustrated in Fig. 3.
Background model initialization method: traditional Gaussian mixture background modeling (GMM) requires a video sequence of a certain length before the model is complete, so its real-time performance is relatively low. ViBe background initialization, by contrast, needs only a single frame: the model of each pixel takes its sample values from the values of the pixels in that pixel's neighborhood. Since only the values of some pixels are used to populate the whole sample set, the time complexity is low and real-time performance is high. At the same time, this initialization method is highly robust under various conditions.
Background model update strategy: this method combines a conservative update strategy with foreground-point counting. That is, foreground points are never used to fill the background model, which avoids deadlock; meanwhile each pixel is counted, and if a pixel is detected as foreground N consecutive times it is updated to a background point.
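The per-pixel classification rule described above can be sketched as follows. The radius and match-count defaults are illustrative choices (the patent does not fix them), and the function works on one scalar pixel value for clarity rather than on a whole frame:

```python
import numpy as np

def vibe_classify(pixel, samples, radius=20, min_matches=2):
    """Classify one pixel as background (True) or foreground (False).

    `pixel` is the new observed value v(x); `samples` is the background
    sample set M(x). The pixel is background if at least `min_matches`
    samples fall inside the sphere of radius `radius` around v(x).
    """
    samples = np.asarray(samples, dtype=float)
    # Count samples inside S_R(v(x)), i.e. within `radius` of the new value.
    matches = np.count_nonzero(np.abs(samples - pixel) < radius)
    return bool(matches >= min_matches)
```

In a full implementation this test would run vectorized over every pixel of the frame, and background pixels would stochastically refresh their sample sets per the conservative update strategy.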
After the target's foreground regions are obtained, each pedestrian foreground region is framed with a rectangular frame, as illustrated in Fig. 4.
Generation of candidate target foot points
The so-called candidate target foot points are approximate positions of the target's foot point in the scene plane. After the pedestrian foreground regions are obtained, there are two cases: the targets have been segmented and matched, or they have not. The first case is the simpler one: the foreground targets have been segmented in both views and matched to each other, i.e. for each foreground region it is known which foreground region corresponds to it in the other view. The second case is more complicated: only the foreground regions are available and, because the occlusion is too severe, segmentation and matching have not been completed.
Targets already segmented and matched
The case in which the targets have already been segmented and matched is the best case; the four vertices of the rectangular frame can then represent all the foreground information of a target. The two upper vertices of the frame can be regarded as the pedestrian's head points and the two lower vertices as the pedestrian's foot points. The two head points are projected onto the scene plane through the homography matrix between the camera's image plane and the scene plane, and, the GPS coordinates of the camera-center projection point being known, each head projection point is connected to the camera-center projection point. By multi-camera geometric constraint 2, each of these two lines is in theory the homography projection line of a straight line perpendicular to the ground. The head-point coordinates of the same pedestrian in the other camera are likewise homography-projected, yielding another two homography projection lines. These lines intersect at four points, which by multi-camera geometric constraint 1 can serve as candidate target foot points; the generation of candidate target foot points is illustrated in Fig. 5.
With reference to Fig. 5, the steps for computing the candidate target foot points are as follows:
(a) the GPS coordinates gC1, gC2 of the vertical projections of the camera centers onto the scene plane, called the camera-center projection points, are measured with a GPS receiver;
(b) in each camera's foreground, the two upper vertices of the rectangular frame representing a pedestrian's foreground region are projected onto the scene plane through the homography matrix;
(c) each of these projection points is connected to the corresponding camera-center projection point, yielding four straight lines in the scene plane;
(d) the intersection points p1, p2, p3, p4 of these lines are computed, and their coordinates are taken as the candidate foot-point coordinates.
Let l1 and l2 be two of the homography projection lines, with endpoints (x0, y0), (x1, y1) and (x2, y2), (x3, y3) respectively, so that l1: y = k1(x - x0) + y0 and l2: y = k2(x - x2) + y2, where k1 = (y1 - y0)/(x1 - x0) and k2 = (y3 - y2)/(x3 - x2) are the slopes of the two lines. Equating the two expressions at the common point (x, y) gives
x = (k1·x0 - k2·x2 + y2 - y0)/(k1 - k2), y = k1(x - x0) + y0.
Intersecting, in this way, the lines through gC1 and gC2 yields the candidate target foot points.
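The slope-form intersection above translates directly into code. This sketch assumes, as the formula does, that neither line is vertical and that the two lines are not parallel (k1 ≠ k2):

```python
def intersect(p0, p1, p2, p3):
    """Intersection of line l1 through p0, p1 and line l2 through p2, p3.

    Implements x = (k1*x0 - k2*x2 + y2 - y0) / (k1 - k2)
    and        y = k1*(x - x0) + y0.
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = p0, p1, p2, p3
    k1 = (y1 - y0) / (x1 - x0)   # slope of l1
    k2 = (y3 - y2) / (x3 - x2)   # slope of l2
    x = (k1 * x0 - k2 * x2 + y2 - y0) / (k1 - k2)
    y = k1 * (x - x0) + y0
    return x, y
```

Running it over the four pairs of projection lines through gC1 and gC2 yields the four candidate foot points p1 through p4.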
Targets not yet segmented and matched
In this case, after foreground detection, the targets have not been fully segmented and have not been matched to one another, and a rectangular frame around a foreground region cannot represent the foreground region of any single pedestrian, as shown in Fig. 6.
In view 1, the two targets occlude each other heavily, and the vertices of the rectangular frame cannot represent the head or foot points of either pedestrian. If the candidate target foot points were computed by the method of the first case, a large positioning error would inevitably result, because small errors in the two-dimensional image are magnified in the scene plane. For example, a distance of 20 meters may span only about 10 pixels on the image plane, so each pixel represents 2 meters of length; that is, an error of one pixel in the image plane can cause an error of up to 2 meters in the scene plane, which is unacceptable.
In this kind of video surveillance scene, only the targets' foregrounds can be obtained; achieving high-precision positioning and tracking would normally require a large amount of work to complete target matching. A common matching algorithm is SURF feature-point extraction and matching. Matching is usually an important part of target positioning and tracking, but matching algorithms generally involve extensive nonlinear optimization, which is a significant limitation for real-time positioning and tracking. In recent years, with the expansion of surveillance coverage and the growing demand for real-time monitoring, research on matching-free target positioning and tracking has increased; with the growing complexity of monitoring scenes, target positioning and tracking under occlusion has also become a research hotspot. Most of this research focuses on sampling the foreground to obtain candidate target foot points, analyzing those candidate foot points, and finally locating the target's position in the scene plane.
Therefore, in order to further reduce the positioning error in this case, for target positioning under severe occlusion the present invention proposes a method that samples the pixels within a size range above the top edge of the rectangular frame representing the foreground to obtain candidate target foot points. The generation of candidate target foot points is illustrated in Fig. 7.
In Fig. 7, (a) is a perspective schematic of candidate target foot-point generation: the dashed lines indicate the homography projection of the head points of the pedestrian foreground regions, the solid vertical lines are the in-frame homography projection lines obtained from the multi-camera geometric constraints, the solid intersection points are the candidate target foot points of one target, and the hollow intersection points are those of the other target. (b) shows the candidate target foot points in the scene plane.
When severe occlusion occurs between targets, one camera's foreground contains only a single foreground region while the other camera's scene contains two. In Fig. 7, the solid points are the candidate target foot points of one target and the hollow points those of the other. For any one target, the candidate foot points include both those formed from sample points of that target's foreground region in the two cameras, and those formed from that target's foreground sample points in one camera view together with foreground sample points of the other target, or background points. The latter kind of candidate foot point is not what we want to obtain, but because the occlusion is severe and foreground segmentation is difficult, these candidate foot points are retained for the time being.
Weighting of the candidate foot points and target positioning
After the above candidate target foot points are obtained, how to locate the target's spatial position is an important problem. For consistency of positioning, uniform sampling is used throughout to obtain the candidate foot points, and a suitable weight is then assigned to each candidate foot point to obtain the final target foot-point position.
To assign a weight to each candidate foot point, we make the following analysis. The positions of the candidate foot points are constrained by the multi-camera geometry, but in fact homography-projecting the detected foot points also yields approximate foot-point positions. A foot-point position that satisfies both the multi-camera geometric constraints and the foot-point projections, i.e. both constraints at once, gives a higher-precision positioning result. The weighting rule of this scheme is therefore: the farther a candidate foot point is from the centroid of the foot-point projection points, the smaller its weight; the nearer, the larger. The generation of the foot-point projection points is illustrated in Fig. 8.
The centroid of the foot-point projection points is defined as the point of the scene plane minimizing the sum of distances to all foot-point projection points. Let the foot-point projection points of the target be B_i^j, where i denotes the i-th camera and j the j-th foot point, and denote the centroid by Po. Then
Po = argmin_P Σ_i Σ_j D(B_i^j, P),
where i = 1, 2, ..., M ranges over the cameras, j = 1, 2, ..., N over the points, P is the centroid variable, and D(B_i^j, P) is the distance between a foot-point projection point and the centroid variable. The physical meaning of the formula is that the foot-point centroid is the value of the centroid variable at which the sum of distances to the foot-point projection points is minimal.
After the centroid of the foot-point projection points and the candidate target foot points are obtained, the candidates are weighted according to their distance from the centroid. For one target foot-point region, let the candidate points be Pi, i = 1, 2, ..., S, where S is the number of candidate foot points, and let the weight of each candidate point be wi; the relation between the candidate points and the foot-point centroid is shown in Fig. 9.
Let |P0Pi| denote the distance from Pi to P0, and let ri be a quantity that decreases as |P0Pi| grows, for example ri = 1/(1 + |P0Pi|): the larger the candidate point's distance from the centroid, the smaller ri. For the i-th candidate point the weight wi is defined as
wi = ri / Σ_{j=1}^{S} rj.
Let O be the final foot-point position of the target; then
O = Σ_{i=1}^{S} wi · Pi.
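The centroid-distance weighting can be sketched as below. Since the patent's exact decay function for ri is not recoverable from the text, ri = 1/(1 + |P0Pi|) is an illustrative assumption; any monotonically decreasing choice preserves the stated rule that farther candidates get smaller weights:

```python
import math

def locate_foot_point(candidates, centroid):
    """Fuse candidate foot points into one position by centroid-distance weighting.

    r_i = 1/(1 + |P0 P_i|) is an assumed decay (farther => smaller weight);
    weights are normalized, w_i = r_i / sum_j r_j, and the final foot point
    is the weighted sum O = sum_i w_i * P_i.
    """
    r = [1.0 / (1.0 + math.dist(p, centroid)) for p in candidates]
    total = sum(r)
    w = [ri / total for ri in r]
    ox = sum(wi * p[0] for wi, p in zip(w, candidates))
    oy = sum(wi * p[1] for wi, p in zip(w, candidates))
    return ox, oy
```

For example, with candidates at (0, 0) and (2, 0) and the centroid at the origin, the nearer candidate receives weight 0.75 and the fused foot point lands at (0.5, 0).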
When there is no occlusion between targets, the target foregrounds can be segmented well and the target foot points are all visible, so the computation of the foot-point centroid Po uses the homography projection points of the foot points in both camera views. When severe occlusion occurs between targets, the target foregrounds cannot be segmented individually; a target's feet may be invisible in one camera view and visible only in the other, in which case the computation of Po uses only the homography projection points of the foot points in the visible view.
The magnitude of the weight wi expresses the degree to which a candidate foot point matches the centroid. When there is no occlusion between targets, candidates with small wi are points near the pedestrian's head and foot contours. When there is occlusion between targets, candidates with small wi are of two possible kinds: points near the pedestrian's head and foot contours, and foreground points of other pedestrians. Foreground points of other pedestrians match the foot-point centroid even less well than the pedestrian's own head and foot contour points, so their weights are smaller still.
Moreover, since positioning here uses foreground sampling rather than all foreground pixels, the positioning algorithm is more efficient. In this way, even when foreground segmentation is incomplete and occlusion is present, the positioning error can be kept within a certain range, achieving the goal of real-time multi-camera cooperative target positioning.
Description of the target handoff problem
In the scene shown in Fig. 10, cameras C1 and C2 have a large overlapping region. Two freely moving targets, O1 and O2, are within camera C1's field of view and will move from C1's field of view into C2's; another freely moving target, O3, is within camera C2's field of view. Suppose camera C1 is continuously tracking target O1, with the target identified as k. As time passes, targets O1 and O2 appear within camera C2's field of view and are detected by C2. To keep tracking O1 continuously, camera C2 must judge whether the detected targets include O1 and which one it is, and assign it the same identifier k, so that after O1 leaves C1's field of view, C2 can continue tracking it independently. When camera C2 detects targets appearing, it analyzes the targets within its field of view and determines whether each is target O1, target O2 (which emerged from the same camera view as O1), or the new target O3. This process of assigning the same identifier to the same target across different cameras and determining target identities is called target handoff.
Target visibility
As shown in Fig. 11, after the FOV boundary line of a camera is delimited, the camera's field of view is divided into two parts: the grey part marks the region that is invisible to the other camera, and the white part marks the region visible to the other camera, i.e. the so-called overlapping field of view.
Suppose the latest position coordinates of the tracked target in camera C1 are P: (x1, y1), and camera C2's FOV boundary line within camera C1 is p1p2: Ax + By + C = 0. The visibility function of the target with respect to camera C2's field of view is then defined as
Q(x1, y1) = A·x1 + B·y1 + C.
Q(x1, y1) > 0 indicates that the target is visible within camera C2's field of view, Q(x1, y1) = 0 that the target lies exactly on the FOV boundary line, and Q(x1, y1) < 0 that the target is not yet visible within camera C2's field of view. Thus, by judging the sign of the visibility function, the target's visibility within a camera's field of view can be determined, providing a reliable basis for target handoff.
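The sign test on the visibility function is a one-liner; this sketch returns the sign of Q so that all three cases (visible, on the boundary, not visible) can be distinguished:

```python
def visibility(x, y, a, b, c):
    """Sign of the visibility function Q(x, y) = A*x + B*y + C.

    Returns +1 (visible in the other camera's field of view),
    0 (exactly on the FOV boundary line), or -1 (not visible).
    """
    q = a * x + b * y + c
    return (q > 0) - (q < 0)
```

For a vertical boundary line x = 0 (A = 1, B = 0, C = 0), points with positive x are on the visible side.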
Target handoff
When a target in camera C1 crosses camera C2's left FOV boundary line within C1, the target's visibility function changes from negative to positive and the target enters the overlapping field of view of cameras C1 and C2. All moving targets near the left boundary line in camera C2 are then detected, the distance of each moving target's foot point to the left boundary line is computed, and the target with the shortest distance is judged to be the target to be tracked and assigned the corresponding target identifier for consistent labeling. The identifier can be obtained as follows:
label(P_t^{j,k*}) = label(P_t^i), where k* = argmin_k D(P_t^{j,k}, L_j),
where i and j denote the camera labels of cameras C1 and C2, P_t^i denotes the target tracked in the i-th camera at time t, P_t^{j,k} denotes the k-th target in the j-th camera, L_j denotes the left or right FOV boundary line of the j-th camera (the left boundary line in the experiments herein), and D(·, ·) denotes the distance from a foot point to the line. When the j-th camera detects that targets have entered its field of view, the distance of each of the k moving targets to its FOV boundary line is computed; the target with the minimum distance is identified as the target to be handed over and is assigned the label, completing the target handoff.
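The consistency-labeling rule above amounts to picking the detection with the smallest point-to-line distance to the FOV boundary. The sketch below shows that selection; detections are (x, y) foot points and a, b, c are the boundary line's coefficients:

```python
def handoff_target(detections, a, b, c):
    """Index of the detection nearest the FOV boundary line Ax + By + C = 0.

    Implements k* = argmin_k D(P_k, L) with the standard point-to-line
    distance |A*x + B*y + C| / sqrt(A^2 + B^2); the selected detection
    would inherit the tracked target's label.
    """
    norm = (a * a + b * b) ** 0.5
    def dist(p):
        return abs(a * p[0] + b * p[1] + c) / norm
    return min(range(len(detections)), key=lambda i: dist(detections[i]))
```

In a full system the chosen detection's identifier would then be set to the label carried over from the first camera.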
As shown in Fig. 2, the target tracking subsystem operates as follows.
Description of the specific experimental scene: the two cameras independently detect and track the targets within their fields of view, and record the foot-point coordinates of the tracked targets. Suppose camera C1 is tracking a target whose identifier is label, and that target is moving from camera C1's field of view toward camera C2's. In this process the target crosses camera C2's left FOV boundary line within camera C1's range, and other targets keep appearing in cameras C1 and C2 along the way. So that the target can still be tracked continuously after it leaves camera C1's field of view, the handoff of the target must be realized in the overlapping field of view of the two cameras: camera C2 must learn which of the targets appearing in its view is the target identified as label, assign it the same identifier, and then continue tracking.
Specific execution steps: when camera C2 detects that a target appears within its field of view, it judges whether the target lies in the overlapping field of view of the adjacent cameras; if not, detection continues until a target is detected in the overlapping view of the two cameras. The target with the minimum distance to the field-of-view boundary line is then computed and recorded as the target to which the identity is to be assigned, and the current detection result is spatially located. If no occlusion occurs, camera C2's current tracking result is taken as the target tracking result and the identity label is assigned to it. If occlusion occurs, the spatial location point is re-projected, the re-projection points lying on the image planes of cameras C1 and C2. The re-projection point in camera C2 corresponding to the spatial point whose re-projection is nearest to the label target in camera C1 is found, and this re-projection point replaces camera C2's current tracking result. Meanwhile, it is judged whether camera C2 has segmented the occluded target out as an individual; if so, spatial localization stops, label is assigned to the current tracking result, and the target handoff is complete; otherwise, the steps of target spatial localization and subsequent replacement of the current tracking result are executed continuously until the tracked target completely leaves the overlapping field of view of the two cameras.
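The execution steps above amount to a small per-frame control loop. The sketch below simulates it over a list of frame observations; the dictionary keys and the overall interface are hypothetical, introduced only to illustrate the control flow:

```python
def run_handoff(frames):
    """Simulate the handoff control flow for one target entering C2.

    Each frame is a dict with (hypothetical) keys:
      'in_overlap' - target detected in the overlapping view of C1/C2
      'occluded'   - target occluded in C2
      'track_pt'   - C2's current tracking result (foot point)
      'reproj_pt'  - re-projection of the spatial location point onto C2

    Returns (tag, trajectory): tag is 'label' once C2 segments the
    target individually, else None if it leaves the overlap first.
    """
    trajectory = []                           # C2-side results, per frame
    for f in frames:
        if not f['in_overlap']:
            continue                          # keep detecting
        if f['occluded']:
            trajectory.append(f['reproj_pt'])  # co-location + re-projection
        else:
            trajectory.append(f['track_pt'])   # segmented: assign label, done
            return 'label', trajectory
    return None, trajectory
```

The loop substitutes the re-projection point for C2's tracking result while occlusion lasts, and assigns the label as soon as the target is segmented individually, matching the stopping condition described above.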
It should be noted that the above technical features may be combined with one another to form various embodiments not enumerated above, all of which shall be regarded as falling within the scope of the description of the invention; moreover, those of ordinary skill in the art may make improvements or variations in light of the above description, and all such modifications and variations shall fall within the protection scope of the appended claims of the invention.
Claims (2)
1. A target positioning and tracking system with overlapping-view multi-camera collaboration, characterized by comprising a target positioning subsystem and a target tracking subsystem, the target positioning subsystem comprising the following steps:
(1) camera C1 and camera C2 each acquire a foreground image;
(2) the foreground images are detected by the ViBe detection algorithm and framed with bounding boxes;
(3) the head foreground in the foreground images of step (2) is sampled to form head sampling points;
(4) the head sampling points are projected;
(5) the projection points are connected with the camera center projection points;
(6) candidate target foot points are computed;
(7) the candidate target foot points are weighted and summed to obtain the target foot point;
(8) the chest foreground in the foreground images of step (2) is sampled to form chest sampling points;
(9) the chest sampling points are projected;
(10) the foot point center of gravity is computed;
(11) the target foot point of step (7) and the foot point center of gravity of step (10) are summed and averaged;
(12) the target foot point coordinates are obtained;
the target tracking subsystem comprises the following steps:
step 1: camera C1 detects and tracks the target, and the foot point coordinates of the target are recorded according to the target positioning subsystem;
step 2: the camera field-of-view boundary line is obtained;
step 3: camera C2 detects whether there is a new target, and judges whether the new target is within camera C1's field of view;
step 4: if the new target is not in camera C1's field of view, step 3 is repeated; if the new target is in camera C1's field of view, the next step is entered;
step 5: the target with the minimum distance to the field-of-view boundary line is computed, and it is judged whether this minimum-distance target is occluded;
step 6: if the minimum-distance target is occluded, step 7 is performed; if it is not occluded, step 8 is performed;
step 7: multi-camera co-location is performed, the re-projection point with the minimum distance to the tracked target's foot point in camera C1 is found, camera C2's current tracking result is updated, and step 8 is performed;
step 8: it is judged whether camera C2 detects the target individually; if not, step 7 is performed; if so, label is assigned to camera C2's current tracking result.
2. The target positioning and tracking system with overlapping-view multi-camera collaboration according to claim 1, characterized in that in step (6), the sub-steps of computing the candidate target foot points are as follows:
(a) the GPS coordinates g_C1, g_C2 of the vertical projection points of the camera centers onto the scene plane are measured with a GPS receiver; these are called the camera center projection points;
(b) in each camera's foreground, the two upper vertices of the rectangular box representing the pedestrian foreground region are projected onto the scene plane through the homography matrix, obtaining the corresponding projection points;
(c) the above projection points are connected with the corresponding camera center projection points, obtaining four straight lines in the scene plane;
(d) the intersection points p1, p2, p3, p4 of the above straight lines are computed, and the intersection coordinates are taken as the candidate foot point coordinates;
let l1, l2 be two homography projection lines with endpoints (x0, y0), (x1, y1) and (x2, y2), (x3, y3) respectively, l1: y = k1(x - x0) + y0, l2: y = k2(x - x2) + y2, where k1, k2 are the slopes of the two lines; the two lines intersect at a point (x, y), and solving the simultaneous system
y = k1(x - x0) + y0
y = k2(x - x2) + y2
yields
x = (y2 - y0 + k1 x0 - k2 x2) / (k1 - k2)
y = (k1 k2 (x2 - x0) + k2 y0 - k1 y2) / (k2 - k1)
Finding the intersections of the lines through g_C1 and g_C2 according to the above method yields the candidate target foot points.
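The closed-form intersection above can be checked numerically; below is a minimal sketch (the function name is illustrative, and k1 != k2 is assumed):

```python
def intersect(x0, y0, k1, x2, y2, k2):
    """Intersection of l1: y = k1*(x - x0) + y0 and
    l2: y = k2*(x - x2) + y2, following the closed-form
    solution in claim 2 (assumes k1 != k2)."""
    x = (y2 - y0 + k1 * x0 - k2 * x2) / (k1 - k2)
    y = (k1 * k2 * (x2 - x0) + k2 * y0 - k1 * y2) / (k2 - k1)
    return x, y
```

For instance, the lines y = x (through (0, 0), slope 1) and y = 1 - x (through (1, 0), slope -1) intersect at (0.5, 0.5), and the returned point satisfies both line equations.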
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710806796.3A CN107580199A (en) | 2017-09-08 | 2017-09-08 | The target positioning of overlapping ken multiple-camera collaboration and tracking system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107580199A true CN107580199A (en) | 2018-01-12 |
Family
ID=61033057
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710806796.3A Pending CN107580199A (en) | 2017-09-08 | 2017-09-08 | The target positioning of overlapping ken multiple-camera collaboration and tracking system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107580199A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108491857A (en) * | 2018-02-11 | 2018-09-04 | 中国矿业大学 | A kind of multiple-camera target matching method of ken overlapping |
CN108921881A (en) * | 2018-06-28 | 2018-11-30 | 重庆邮电大学 | A kind of across camera method for tracking target based on homography constraint |
CN109167956A (en) * | 2018-05-21 | 2019-01-08 | 同济大学 | The full-bridge face traveling load spatial distribution merged based on dynamic weighing and more video informations monitors system |
CN109446942A (en) * | 2018-10-12 | 2019-03-08 | 北京旷视科技有限公司 | Method for tracking target, device and system |
CN109816700A (en) * | 2019-01-11 | 2019-05-28 | 佰路得信息技术(上海)有限公司 | A kind of information statistical method based on target identification |
CN110178167A (en) * | 2018-06-27 | 2019-08-27 | 潍坊学院 | Crossing video frequency identifying method violating the regulations based on video camera collaboration relay |
CN110443228A (en) * | 2019-08-20 | 2019-11-12 | 图谱未来(南京)人工智能研究院有限公司 | A kind of method for pedestrian matching, device, electronic equipment and storage medium |
CN111402286A (en) * | 2018-12-27 | 2020-07-10 | 杭州海康威视系统技术有限公司 | Target tracking method, device and system and electronic equipment |
EP3620960A3 (en) * | 2018-09-06 | 2020-07-22 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and system for sensing an obstacle, computer device, and storage medium |
CN111599018A (en) * | 2019-02-21 | 2020-08-28 | 浙江宇视科技有限公司 | Target tracking method and system, electronic equipment and storage medium |
CN111984904A (en) * | 2020-07-15 | 2020-11-24 | 鹏城实验室 | Distributed cooperative monitoring method, monitoring platform and storage medium |
CN113436279A (en) * | 2021-07-23 | 2021-09-24 | 杭州海康威视数字技术股份有限公司 | Image processing method, device and equipment |
CN113711583A (en) * | 2019-04-25 | 2021-11-26 | 日本电信电话株式会社 | Object information processing device, object information processing method, and object information processing program |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5999877A (en) * | 1996-05-15 | 1999-12-07 | Hitachi, Ltd. | Traffic flow monitor apparatus |
CN105894505A (en) * | 2016-03-30 | 2016-08-24 | 南京邮电大学 | Quick pedestrian positioning method based on multi-camera geometrical constraint |
Non-Patent Citations (1)
Title |
---|
Zhang Jiao: "Research on Target Localization and Tracking Technology Based on Multi-Camera Collaboration with Overlapping Fields of View", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108491857A (en) * | 2018-02-11 | 2018-09-04 | 中国矿业大学 | A kind of multiple-camera target matching method of ken overlapping |
CN108491857B (en) * | 2018-02-11 | 2022-08-09 | 中国矿业大学 | Multi-camera target matching method with overlapped vision fields |
CN109167956A (en) * | 2018-05-21 | 2019-01-08 | 同济大学 | The full-bridge face traveling load spatial distribution merged based on dynamic weighing and more video informations monitors system |
CN110178167B (en) * | 2018-06-27 | 2022-06-21 | 潍坊学院 | Intersection violation video identification method based on cooperative relay of cameras |
CN110178167A (en) * | 2018-06-27 | 2019-08-27 | 潍坊学院 | Crossing video frequency identifying method violating the regulations based on video camera collaboration relay |
CN108921881A (en) * | 2018-06-28 | 2018-11-30 | 重庆邮电大学 | A kind of across camera method for tracking target based on homography constraint |
EP3620960A3 (en) * | 2018-09-06 | 2020-07-22 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and system for sensing an obstacle, computer device, and storage medium |
US11042761B2 (en) | 2018-09-06 | 2021-06-22 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and system for sensing an obstacle, and storage medium |
CN109446942B (en) * | 2018-10-12 | 2020-10-16 | 北京旷视科技有限公司 | Target tracking method, device and system |
CN109446942A (en) * | 2018-10-12 | 2019-03-08 | 北京旷视科技有限公司 | Method for tracking target, device and system |
CN111402286B (en) * | 2018-12-27 | 2024-04-02 | 杭州海康威视系统技术有限公司 | Target tracking method, device and system and electronic equipment |
CN111402286A (en) * | 2018-12-27 | 2020-07-10 | 杭州海康威视系统技术有限公司 | Target tracking method, device and system and electronic equipment |
CN109816700B (en) * | 2019-01-11 | 2023-02-24 | 佰路得信息技术(上海)有限公司 | Information statistical method based on target identification |
CN109816700A (en) * | 2019-01-11 | 2019-05-28 | 佰路得信息技术(上海)有限公司 | A kind of information statistical method based on target identification |
CN111599018A (en) * | 2019-02-21 | 2020-08-28 | 浙江宇视科技有限公司 | Target tracking method and system, electronic equipment and storage medium |
CN111599018B (en) * | 2019-02-21 | 2024-05-28 | 浙江宇视科技有限公司 | Target tracking method and system, electronic equipment and storage medium |
CN113711583A (en) * | 2019-04-25 | 2021-11-26 | 日本电信电话株式会社 | Object information processing device, object information processing method, and object information processing program |
CN110443228A (en) * | 2019-08-20 | 2019-11-12 | 图谱未来(南京)人工智能研究院有限公司 | A kind of method for pedestrian matching, device, electronic equipment and storage medium |
CN111984904A (en) * | 2020-07-15 | 2020-11-24 | 鹏城实验室 | Distributed cooperative monitoring method, monitoring platform and storage medium |
CN113436279A (en) * | 2021-07-23 | 2021-09-24 | 杭州海康威视数字技术股份有限公司 | Image processing method, device and equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107580199A (en) | The target positioning of overlapping ken multiple-camera collaboration and tracking system | |
CN104751486B (en) | A kind of moving target relay tracking algorithm of many ptz cameras | |
CN110175576B (en) | Driving vehicle visual detection method combining laser point cloud data | |
CN108875911A (en) | One kind is parked position detecting method | |
CN100390811C (en) | Method for tracking multiple human faces from video in real time | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
Shan et al. | Robust place recognition using an imaging lidar | |
CN105046206B (en) | Based on the pedestrian detection method and device for moving prior information in video | |
Breitenmoser et al. | A monocular vision-based system for 6D relative robot localization | |
GB2586766A (en) | Ship identity recognition method based on fusion of AIS data and video data | |
CN111126304A (en) | Augmented reality navigation method based on indoor natural scene image deep learning | |
CN102496016B (en) | Infrared target detection method based on space-time cooperation framework | |
CN109190508A (en) | A kind of multi-cam data fusion method based on space coordinates | |
CN108830213A (en) | Car plate detection and recognition methods and device based on deep learning | |
CN108197604A (en) | Fast face positioning and tracing method based on embedded device | |
CN111080679A (en) | Method for dynamically tracking and positioning indoor personnel in large-scale place | |
CN102930251B (en) | Bidimensional collectibles data acquisition and the apparatus and method of examination | |
CN101847206A (en) | Pedestrian traffic statistical method and system based on traffic monitoring facilities | |
CN102289822A (en) | Method for tracking moving target collaboratively by multiple cameras | |
CN113033315A (en) | Rare earth mining high-resolution image identification and positioning method | |
CN110378292A (en) | Three dimension location system and method | |
CN112465854A (en) | Unmanned aerial vehicle tracking method based on anchor-free detection algorithm | |
CN109344792A (en) | A kind of Motion parameters tracking | |
Liang et al. | Methods of moving target detection and behavior recognition in intelligent vision monitoring. | |
Lauziere et al. | A model-based road sign identification system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180112 |