CN106599776B - People-counting method based on trajectory analysis - Google Patents
People-counting method based on trajectory analysis
- Publication number
- CN106599776B CN106599776B CN201610938573.8A CN201610938573A CN106599776B CN 106599776 B CN106599776 B CN 106599776B CN 201610938573 A CN201610938573 A CN 201610938573A CN 106599776 B CN106599776 B CN 106599776B
- Authority
- CN
- China
- Prior art keywords
- track
- rectangle frame
- pixel
- point
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention is a people-counting method based on trajectory analysis. A depth map of the scene is obtained with a binocular or RGB-D camera; the camera is calibrated; the depth map is converted into a three-dimensional point cloud using the calibrated camera parameters; the point cloud is projected onto the X-Y plane to obtain a top view of the scene; head targets in the top view are locked using the head-locking method proposed by the present invention; tracks are formed and judged by detection planes, deciding whether each track is an entry or an exit. The present invention counts pedestrians in scenes such as buses, escalators and passageways, accurately obtaining the numbers entering and leaving.
Description
Technical field
The present invention relates to a people-counting method based on trajectory analysis.
Background art
People counting has long been a hot topic in surveillance systems, and many methods exist. For example, laser-beam methods infer that someone has passed when the beam is blocked, and ultrasonic methods measure distance to detect pedestrians. However, these methods cannot determine a pedestrian's direction of travel, so they cannot be widely used for people counting. Counting the pedestrians in a scene with a monocular camera also faces many technical difficulties: it can count well in simple scenes, but in sufficiently complex scenes occlusion and false targets prevent an accurate count. Moreover, no highly stable pedestrian feature is available under a monocular camera, and factors such as lighting changes keep the counting accuracy relatively low. Given these defects of the monocular approach, a binocular or RGB-D camera can be used for counting instead. Even in a crowded environment a pedestrian has an obvious feature: the head is necessarily higher than the shoulders. Based on this feature it is easy to lock the head; through a matching stage, tracks are formed. These tracks exhibit certain patterns, which can be extracted by machine-learning methods so that pedestrian behaviour can be judged and, finally, the number of people in the passage can be counted.
Summary of the invention
In view of the above problems in the prior art, the object of the present invention is to provide a people-counting method based on trajectory analysis that judges pedestrian behaviour from 3D tracks under a binocular or RGB-D camera and can count people accurately.
To achieve the goals above, the present invention adopts the following technical scheme:
A people-counting method based on trajectory analysis, comprising the following steps:
Step 1: set up an RGB-D camera in the passage scene, calibrate the camera, and compute the camera's parameter matrix; the passage has an A direction and a B direction, which are opposite;
Step 2: continuously shoot the passage containing human targets with the camera to obtain N depth maps; compute the top view of each depth map; compute the background I_b from all the computed top views;
Step 3: shoot the passage containing human targets with the camera to obtain the depth map of some moment m; compute its corresponding top view; perform background subtraction on the top view to obtain a foreground picture; block the foreground picture to obtain a blocked picture; search the blocked picture for local maximum regions to obtain a local-maximum-region set; extend the local maximum regions of this set to obtain the extended local-maximum-region set; filter the rectangular boxes of the extended set to obtain a rectangular-box set S_Fm containing multiple elements;
Step 4: if step 3 produced the rectangular-box set S_Fm of the initial moment, generate a number of tracks from S_Fm; these tracks form a track set T_m. If step 3 produced the set S_Fm of a non-initial moment, use it to update the track set formed at the previous moment, obtaining the updated track set T1_m;
Step 5: if some track in the track set T_m or the updated track set T1_m has not been updated for several consecutive times, mark that track and delete it from T_m or T1_m, obtaining a new track set T2_m;
Step 6: select tracks from the set T2_m obtained in step 5 as samples, forming the set Ts_m; if the number of samples reaches a set value, execute step 7; otherwise, execute step 3;
Step 7: for every track in the set Ts_m, extract the track's attributes and record the track's label value L together with the track's attributes; the label value L and attributes of every track form a set D_l;
Step 8: from the set obtained in step 7, set the upper detection plane δ_u and lower detection plane δ_d of the passage in the A direction, and the upper detection plane δ_u′ and lower detection plane δ_d′ of the passage in the B direction;
Step 9: repeat the process of steps 3 to 5; in the process, for every track removed from the track set T_m because it went δ_dis consecutive times without an update, extract its attributes using the method of step 7; these attributes include the Y value F_9 of the track's start point and the Y value F_10 of its end point.
If the track's attributes satisfy F_9 − F_10 > 0, the track is in the A direction; if it also satisfies F_9 ≥ δ_d and F_10 ≤ δ_u, the A-direction count is incremented by 1. If F_9 − F_10 < 0, the track is in the B direction; if it also satisfies F_9 ≤ δ_u′ and F_10 ≥ δ_d′, the B-direction count is incremented by 1.
Step 10: repeat step 9 until the camera stops shooting, obtaining the passage's people-count in the A direction and its people-count in the B direction.
Further, the top view of each depth map in steps 2 and 3 is computed using the relation:
Len = m*r
where P = (p11, p12, p13, p14; p21, p22, p23, p24; p31, p32, p33, p34) is the camera's extrinsic matrix; θ is the angle between the ground plane and the ray through the point P(x_p, y_p, z_p) of the depth map; G(x_G, y_G, 0) is the intersection of the line through the point P with the ground plane; H_C is the camera height; m is the depth value of the point P in the depth map, with 0 < m < D, where D is the maximum pixel value set by the user; and r is the distance in world space corresponding to one unit of depth value.
The top view I is obtained by mapping each point of the depth map to a top-view pixel: (u, v) denotes the pixel of the top view I corresponding to the point P of the depth map, I(u, v) denotes the pixel value at (u, v), (r_x, r_y) is the zoom factor applied to (x_p, y_p), and (d_x, d_y) is the translation coefficient applied to (x_p, y_p).
For each point of the depth map, the corresponding top-view pixel and its pixel value are obtained; all these pixel values form the top view I.
Further, in step 3 the background subtraction on the top view yields the foreground picture, where δ_F is a user-set threshold for extracting the foreground, I_F(u, v) denotes the pixel value of the foreground picture I_F at pixel (u, v), I_b(u, v) is the pixel value of the background I_b at pixel (u, v), and I(u, v) denotes the pixel value of the top view I at pixel (u, v).
Further, in step 3 the foreground picture is blocked to obtain the blocked picture, where I_B is the picture obtained by blocking the foreground picture, I_F(u, v) is the pixel value of the foreground picture I_F at coordinate (u, v), I_B(x, y) is the pixel value of the picture I_B at pixel (x, y), and each delimited block has size w_b × w_b.
Further, in step 3 the search of the blocked picture for local maximum regions specifically comprises the following steps:
For a pixel (x, y) of the picture I_B, examine the eight pixels surrounding it; if the pixel's value is greater than the values of all eight surrounding pixels, put the pixel into the local-maximum-region set S_L. S_L^(i) denotes an element of S_L, with S_L^(i) = (u_i, v_i, d_i), where (u_i, v_i) denotes the pixel and d_i is the value of pixel (u_i, v_i) in the picture I_B.
Further, in step 3 the extension of the local maximum regions specifically comprises the following steps:
For each element S_L^(i) of the local-maximum-region set S_L, find the pixel position in the foreground picture I_F corresponding to S_L^(i), where (x_i, y_i) is the position in I_F corresponding to S_L^(i). Let S_S^(i) = (x_i, y_i, z_i); these form the set S_S, with S_S^(i) an element of S_S.
For each member S_S^(i) = (x_i, y_i, z_i) of S_S, take S_S^(i) as a seed and extend outward with a seed-filling algorithm. The extension condition is: if |I_F(x_i, y_i) − z_i| ≤ δ_E, a rectangular box S_E^(i) = (u_i, v_i, H_i, W_i, z_i) is used to enclose all pixels that satisfy the condition, where (u_i, v_i) is the box's top-left corner, (H_i, W_i) are the box's height and width, z_i is the original pixel value, and δ_E is a defined threshold. This forms the set S_E of extended regions, with S_E^(i) an element of S_E.
Further, in step 3 the filtering of the rectangular boxes of the extended local-maximum-region set, to obtain the rectangular-box set containing multiple elements, comprises applying two filter conditions to the elements of S_E:
(1) if an element S_E^(i) has height below the minimum-height threshold δ_H or width below the minimum-width threshold δ_W, the element is deleted;
(2) if two rectangular boxes S_E^(i) = (u_i, v_i, H_i, W_i, z_i) and S_E^(j) = (u_j, v_j, H_j, W_j, z_j) satisfy the overlap condition, S_E^(i) and S_E^(j) are judged to overlap, and of an overlapping pair the box with the larger of z_i and z_j is retained.
The retained boxes form the rectangular-box set S_Fm, whose elements are S_Fm^(i), where m denotes the moment.
Further, in step 4, generating tracks from the rectangular-box set of the initial moment to form a track set, or using the rectangular-box set of a non-initial moment to update the track set formed at the previous moment to obtain the updated track set, specifically comprises the following steps:
If the m of the set S_Fm obtained in step 3 equals 1, then with each rectangular box S_Fm^(i) of S_Fm as a starting point a new track T_m^(i) is created, i.e. T_m^(i) = {S_Fm^(i)}; the track T_m^(i) is an element of the track set T_m, i.e. T_m = {T_m^(i) | i = 1, …, N_Tm}, where N_Tm is the number of tracks formed from the rectangular-box set S_Fm of moment m.
If the m of the set S_Fm obtained in step 3 is not equal to 1, then each element S_Fm^(i) of S_Fm is matched against every track of the track set T_(m−1) = {T_(m−1)^(i)} formed from the rectangular-box set S_F(m−1) of moment m−1. The matching process is as follows:
Denote the centre point of the element S_Fm^(i) by its abscissa, its ordinate, its height-direction coordinate, and the pixel value at that coordinate. The centre point of the last rectangular box of track T_(m−1)^(i) is (x_(m−1), y_(m−1), I_F(x_(m−1), y_(m−1))).
If the distance between the centre point of S_Fm^(i) and this point does not exceed δ_match, the maximum threshold for matching two rectangular boxes, then the box S_Fm^(i) matches the track T_(m−1)^(i). If T_(m−1)^(i) matches no other box of S_Fm, S_Fm^(i) is added to T_(m−1)^(i). If T_(m−1)^(i) also matches another box S_Fm^(j) of S_Fm, then the following judgment is made: if S_Fm^(i) is the closer of the two, the box S_Fm^(j) is removed from track T_(m−1)^(i) and the box S_Fm^(i) is added to it; if this condition is not satisfied, S_Fm^(j) is retained in track T_(m−1)^(i).
In the above process, after all elements S_Fm^(i) of S_Fm have gone through the above matching, for every rectangular box that matches no track a new track is generated with that box as its first point, and the generated track is added to the track set T1_m.
Further, the attributes of a track in step 7 include: the number of track points, denoted characteristic variable F_1; the span of the track in the Y direction, denoted F_2; the span of the track in the Z direction, denoted F_3; the average head size of the track, denoted F_4; the Euclidean distance between the track and the fitted standard track, denoted F_5; the slope of the track, denoted F_6; the average width of the locked regions on the track, denoted F_7; the average length of the locked regions on the track, denoted F_8; the start-point Y value of the track, denoted F_9; and the end-point Y value of the track, denoted F_10.
Further, in step 8:
Upper detection plane in the A direction: δ_u = min({F_10^(i)}), where {F_10^(i)} denotes the attribute F_10 of all A-direction tracks;
Lower detection plane in the A direction: δ_d = max({F_9^(i)}), where {F_9^(i)} denotes the attribute F_9 of all A-direction tracks;
Upper detection plane in the B direction: δ_u′ = min({F_9^(i)}), where {F_9^(i)} denotes the attribute F_9 of all B-direction tracks;
Lower detection plane in the B direction: δ_d′ = max({F_10^(i)}), where {F_10^(i)} denotes the attribute F_10 of all B-direction tracks.
Compared with the prior art, the present invention has the following technical effect: the depth map of the scene is obtained by a binocular or RGB-D camera; the camera is calibrated; the depth map is converted into a three-dimensional point cloud using the calibrated camera parameters; the point cloud is projected onto the X-Y plane to obtain the top view of the scene; the head targets in the top view are locked with the head-locking method proposed by the present invention; and the tracks are judged by detection planes, or the entries and exits of the tracks are judged by a trained classifier (AdaBoost, SVM, Bayes, etc.). The present invention counts pedestrians in a scene (a bus, an escalator, a passageway, etc.), accurately obtaining the numbers entering and leaving.
Brief description of the drawings
Fig. 1 is a schematic diagram of the camera installation;
Fig. 2 is a schematic diagram of establishing the world coordinate system;
The solution of the present invention is explained and illustrated in further detail below in combination with the accompanying drawings and specific embodiments.
Detailed description of the embodiments
The people-counting method based on trajectory analysis of the present invention comprises the following steps:
Step 1: set up a camera in the passage scene, calibrate the camera, and compute the camera's parameter matrix P; specifically, this comprises the following steps:
Step 1.1: choose a passage as the people-counting scene. Referring to Fig. 1, the camera is mounted directly above the passage; multiple human targets walk along the passage in the A direction or the B direction, the A and B directions being opposite.
Step 1.2: establish the world coordinate system. Referring to Fig. 2, the camera lies on the Z axis of the world coordinate system; the direction along the passage is the Y axis of the world coordinate system, and the direction perpendicular to the passage is its X axis; the camera's position in world coordinates is (0, 0, H), where H is the camera's distance from the world-coordinate origin.
Step 1.3: calibrate the camera. During calibration, N (N ≥ 6) groups of image coordinates and their corresponding world coordinates are selected, and the camera's parameter matrix P is calculated from them.
Step 2: continuously shoot the passage containing human targets with the camera to obtain N (N ≥ 50) depth maps; compute the top view of each depth map; compute the background I_b from the top views.
Wherein, the top view of every width depth map is sought, comprising the following steps:
What the depth value in depth map represented is the point in world coordinate space, such as the distance of point such as point P to video camera
Len schemes the length of medium and small hypotenuse, we can obtain public as follows according to the geometrical relationship of object under world coordinate system
Formula:
Len=m*r (4)
Wherein, θ is the angle on depth map by the correspondence ray of P point and ground level;G(xG,yG, 0) and it was the oblique of P point
The intersection point of line and ground level;HCFor camera heights;M (0 < m < D) is depth value of the P point in depth map, and wherein D is set by user
Max pixel value;R is the distance in world space corresponding to unit depth value.
After obtaining the coordinate of P point, zooming and panning are carried out to P point, are located at the center of top view I, then:
Wherein, (u, v) indicates that the pixel in the corresponding top view I of point P, I (u, v) indicate the picture at pixel (u, v)
Element value, wherein (rx,ry) it is to (the x of point Pp,yp) zoom factor, (dx,dy) it is to (the x of point Pp,yp) translation coefficient.
For each of depth map point, the pixel of the pixel and the pixel in the corresponding top view of point is obtained
Value, all pixel values form top view I.N width top view I can get using the above method for N width depth mapi(i=
1,...N)。
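The projection formulas themselves appear only as image placeholders in this text, so the following is a minimal sketch under stated assumptions: an overhead camera at height h_c, Len = m·r from formula (4), and a height value stored per top-view cell. All function and parameter names (`topview_pixel`, `depth_to_topview`, `samples`) are illustrative, not from the patent.

```python
import math

def topview_pixel(m, r, theta, h_c, x_p, y_p, rx, ry, dx, dy):
    # Len = m * r: distance from the camera to the point P (formula (4))
    length = m * r
    # assumed geometry: camera mounted overhead at height h_c, the ray to P
    # making angle theta with the ground plane, so the height of P is:
    z = h_c - length * math.sin(theta)
    # zoom (rx, ry) and translation (dx, dy) map ground coordinates
    # (x_p, y_p) into top-view pixel coordinates (u, v)
    u = int(rx * x_p + dx)
    v = int(ry * y_p + dy)
    return u, v, z

def depth_to_topview(samples, h, w, r, h_c, rx, ry, dx, dy):
    # samples: one (depth value m, ray angle theta, x_p, y_p) per depth-map
    # point; the returned top view stores a height above ground per pixel
    top = [[0.0] * w for _ in range(h)]
    for m, theta, x_p, y_p in samples:
        u, v, z = topview_pixel(m, r, theta, h_c, x_p, y_p, rx, ry, dx, dy)
        if 0 <= v < h and 0 <= u < w and z > top[v][u]:
            top[v][u] = z  # keep the highest point that projects to this cell
    return top
```

Storing the height in each cell is what later makes the head (the highest local surface) stand out in the top view.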
The background I_b is computed from the top views, where H is the length of the top view, W is its width, and I_b(x, y) is the pixel value of the background I_b at pixel (x, y); this yields the background I_b.
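The formula for I_b is likewise an image in this text; a per-pixel average over the N top views is one common choice and is what this sketch assumes (`background_from_topviews` is an illustrative name):

```python
def background_from_topviews(topviews):
    # per-pixel mean over N top views of the empty passage; each top view is
    # a list of rows of height values
    n = len(topviews)
    h, w = len(topviews[0]), len(topviews[0][0])
    return [[sum(tv[y][x] for tv in topviews) / n for x in range(w)]
            for y in range(h)]
```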
Step 3: shoot the passage containing human targets with the camera to obtain the depth map of some moment; compute its corresponding top view; apply background removal, blocking, local-maximum search, local-maximum extension and rectangle filtering to the top view, obtaining a rectangular-box set S_Fm. Specifically, this comprises the following steps:
Step 3.1: shoot the passage with the camera. With an RGB-D camera, shooting directly yields a depth map, i.e. the depth map of some moment m (m = 1, 2, …). With a binocular camera, shooting yields a left image and a right image; after image rectification and stereo matching, a frame of depth map is obtained, i.e. the depth map of some moment m (m = 1, 2, …).
Step 3.2: for the captured depth map, obtain its corresponding top view I_m; the method used is the same as the method for obtaining the top view in step 2.
Step 3.3: for the top view I_m obtained in step 3.2, apply background removal, blocking, local-maximum search, local-maximum extension and rectangle filtering, obtaining a rectangular-box set S_Fm. The concrete processing is as follows:
Background removal: for the top view I, the foreground picture I_F is obtained with formula (8), where δ_F is a user-set threshold for extracting the foreground and I_F(u, v) denotes the pixel value of the foreground picture I_F at pixel (u, v).
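Formula (8) is not reproduced in this text; a plausible reading, keeping a top-view pixel only where it differs from the background by more than δ_F, can be sketched as:

```python
def extract_foreground(top, bg, delta_f):
    # keep a top-view height value where it differs from the background by
    # more than delta_f; everything else is treated as background (0)
    h, w = len(top), len(top[0])
    return [[top[y][x] if abs(top[y][x] - bg[y][x]) > delta_f else 0
             for x in range(w)] for y in range(h)]
```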
Block operation: the foreground picture I_F is blocked using blocks of size w_b × w_b, yielding the picture I_B, where I_F(u, v) is the pixel value of I_F at coordinate (u, v) and I_B(x, y) is the pixel value of the picture I_B at pixel (x, y).
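The block formula is also lost to an image here; since local maxima of I_B are searched next, taking the maximum of each w_b × w_b block is a natural assumption for this sketch:

```python
def block_image(fg, wb):
    # downsample the foreground into wb x wb blocks, keeping each block's
    # maximum height (assumed; consistent with the later local-maximum search)
    h, w = len(fg), len(fg[0])
    bh, bw = h // wb, w // wb
    return [[max(fg[y * wb + i][x * wb + j]
                 for i in range(wb) for j in range(wb))
             for x in range(bw)] for y in range(bh)]
```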
Local-maximum search: for a pixel (x, y) of the picture I_B, examine the eight pixels surrounding it; if the pixel's value is greater than the values of all eight surrounding pixels, put the pixel into the local-maximum-region set S_L. S_L^(i) denotes an element of S_L, with S_L^(i) = (u_i, v_i, d_i), where (u_i, v_i) denotes the pixel and d_i is the value of pixel (u_i, v_i) in the picture I_B.
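The eight-neighbour test above can be sketched directly (pure Python; border pixels are skipped for brevity):

```python
def local_maxima(blk):
    # return (u, v, d) for every pixel strictly greater than all eight
    # neighbours, matching the elements of the set S_L
    h, w = len(blk), len(blk[0])
    out = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            d = blk[y][x]
            neigh = [blk[y + i][x + j]
                     for i in (-1, 0, 1) for j in (-1, 0, 1)
                     if (i, j) != (0, 0)]
            if all(d > n for n in neigh):
                out.append((x, y, d))
    return out
```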
Local-maximum extension: for each element S_L^(i) of the local-maximum-region set S_L, find its corresponding pixel position in the foreground picture I_F, where (x_i, y_i) is the position in I_F corresponding to S_L^(i). Let S_S^(i) = (x_i, y_i, z_i); this yields the set S_S, with S_S^(i) an element of S_S.
For each member S_S^(i) = (x_i, y_i, z_i) of S_S, take S_S^(i) as a seed and extend outward with a seed-filling algorithm. The extension condition is: if |I_F(x_i, y_i) − z_i| ≤ δ_E, where δ_E is a defined threshold (here 10), then a rectangular box S_E^(i) = (u_i, v_i, H_i, W_i, z_i) encloses all pixels satisfying the condition, where (u_i, v_i) is the box's top-left corner, (H_i, W_i) are the box's height and width, and z_i is the original pixel value (i.e. the spatial height of the box). This eventually forms the set S_E of extended regions, with S_E^(i) an element of S_E.
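The seed filling above can be sketched as a flood fill followed by a bounding box; 4-connectivity is an assumption, since the patent gives only the |I_F(x_i, y_i) − z_i| ≤ δ_E growth condition:

```python
from collections import deque

def extend_region(fg, seed_x, seed_y, delta_e):
    # grow from the seed over 4-connected pixels whose foreground value is
    # within delta_e of the seed value, then return the bounding box
    # (u, v, H, W, z) in the shape of the patent's S_E elements
    h, w = len(fg), len(fg[0])
    z = fg[seed_y][seed_x]
    seen = {(seed_x, seed_y)}
    q = deque(seen)
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen \
                    and abs(fg[ny][nx] - z) <= delta_e:
                seen.add((nx, ny))
                q.append((nx, ny))
    xs = [p[0] for p in seen]
    ys = [p[1] for p in seen]
    u, v = min(xs), min(ys)
    return (u, v, max(ys) - v + 1, max(xs) - u + 1, z)
```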
Rectangle filtering: after the regions have been extended, overlapping and improper regions must be filtered out, using two filter conditions: (1) if a rectangular box S_E^(i) is below the minimum height or width, it is not retained; (2) if two rectangular boxes S_E^(i) = (u_i, v_i, H_i, W_i, z_i) and S_E^(j) = (u_j, v_j, H_j, W_j, z_j) satisfy the overlap condition, S_E^(i) and S_E^(j) are judged to overlap, and of an overlapping pair the box with the larger of z_i and z_j is retained.
The retained boxes form the rectangular-box set S_Fm, whose elements are S_Fm^(i); this completes the target-locking task.
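A sketch of the two filter conditions; the exact overlap test is an image in this text, so a centre-distance threshold (`overlap_thr`, an assumed name) stands in for it:

```python
def filter_boxes(boxes, min_h, min_w, overlap_thr):
    # boxes: (u, v, H, W, z) tuples; drop undersized boxes, then among boxes
    # whose centres lie within overlap_thr of each other keep the one with
    # the larger height value z (the head is the highest surface)
    kept = [b for b in boxes if b[2] >= min_h and b[3] >= min_w]
    out = []
    for b in sorted(kept, key=lambda b: -b[4]):  # highest z first
        cx, cy = b[0] + b[3] / 2, b[1] + b[2] / 2
        if all(abs(cx - (o[0] + o[3] / 2)) > overlap_thr or
               abs(cy - (o[1] + o[2] / 2)) > overlap_thr for o in out):
            out.append(b)
    return out
```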
Step 4: if step 3 produced the rectangular-box set of the initial moment, generate tracks from it, forming a track set; if step 3 produced the set of a non-initial moment, use it to update the track set formed at the previous moment, obtaining the updated track set. Specifically, this comprises the following steps:
If the m of the set S_Fm obtained in step 3 equals 1, then for each element S_Fm^(i) of S_Fm a track T_m^(i) is created with S_Fm^(i) as its first point, i.e. T_m^(i) = {S_Fm^(i)}; the track T_m^(i) is an element of the track set T_m, i.e. T_m = {T_m^(i) | i = 1, …, N_Tm}, where N_Tm is the number of tracks formed from the rectangular-box set S_Fm of moment m.
If the m of the set S_Fm obtained in step 3 is not equal to 1, then each element S_Fm^(i) of S_Fm is matched against every track of the track set T_(m−1) = {T_(m−1)^(i)} formed from the rectangular-box set S_F(m−1) of moment m−1. The matching process is as follows:
Denote the centre point of the element S_Fm^(i) by its abscissa, its ordinate, and its height-direction coordinate; the centre point of the last rectangular box of track T_(m−1)^(i) is (x_(m−1), y_(m−1), I_F(x_(m−1), y_(m−1))).
If the centre point of S_Fm^(i) lies within the matching threshold of this point, then the box S_Fm^(i) matches the track T_(m−1)^(i). If T_(m−1)^(i) matches no other box of S_Fm, S_Fm^(i) is added to T_(m−1)^(i). If T_(m−1)^(i) also matches another box S_Fm^(j) of S_Fm, then the following judgment is made: if S_Fm^(i) is the closer of the two, the box S_Fm^(j) is removed from track T_(m−1)^(i) and the box S_Fm^(i) is added to it; if this condition is not satisfied, S_Fm^(j) is retained in track T_(m−1)^(i).
In the above process, after all elements S_Fm^(i) of S_Fm have gone through the above matching, for every rectangular box that matches no track a new track is generated with that box as its first point, and the generated track is added to the track set T1_m.
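The matching above can be sketched as greedy nearest-centre assignment. The distance formula itself survives here only as an image, so plain Euclidean distance against δ_match is assumed, and a box displaced by a closer competitor is simply dropped rather than re-matched:

```python
import math

def update_tracks(tracks, boxes, delta_match):
    # one frame of matching: tracks is a list of point lists, boxes a list of
    # (cx, cy) centres; each box joins the nearest track within delta_match,
    # the closer of two competing boxes wins, unmatched boxes start new tracks
    assigned = {}  # track index -> (distance, box)
    new_tracks = []
    for box in boxes:
        best_i, best_d = None, delta_match
        for i, t in enumerate(tracks):
            d = math.hypot(box[0] - t[-1][0], box[1] - t[-1][1])
            if d <= best_d:
                best_i, best_d = i, d
        if best_i is None:
            new_tracks.append([box])          # unmatched box starts a track
        elif best_i not in assigned or best_d < assigned[best_i][0]:
            assigned[best_i] = (best_d, box)  # closer box displaces the other
    for i, (_, box) in assigned.items():
        tracks[i].append(box)
    tracks.extend(new_tracks)
    return tracks
```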
Step 5: if some track in the track set T_m or the updated track set T1_m has not been updated for several consecutive times, mark that track and delete it from T_m or T1_m, obtaining a new track set T2_m.
Step 6: manually inspect each track of the track set T2_m processed in step 5. If a track is determined to be a head track, label it a positive sample; if a track is determined not to be a head track (for example, the track of a hand, a shoulder or a backpack), label it a negative sample; if it cannot be determined whether a track is a head track, do not label it. The tracks labelled as positive and negative samples are put into the set Ts_m = {<L, T2_m^(i)> | T2_m^(i) is a labelled track}, with L ∈ {−1, 1}, where −1 denotes a negative sample and 1 a positive sample. If there are enough labelled samples, i.e. the sample count M > 1000, execute step 7; otherwise execute step 3.
Step 7: for every track in the set Ts_m, extract the track's attributes and record the track's label value L together with the track's attributes.
The attributes of a track include:
Track point count: the number of rectangular boxes the track contains, denoted characteristic variable F_1;
Span of the track in the Y direction: the difference between the Y values of the track's start and end points, denoted characteristic variable F_2;
Span of the track in the Z direction: the difference between the Z values of the track's highest and lowest points in the Z direction, denoted characteristic variable F_3;
Average head size of the track: the average area of the locked regions on the track, i.e. of all rectangular boxes on the track, denoted characteristic variable F_4;
Euclidean distance between the track and the fitted standard track, computed as follows:
First the centre point of each rectangular box of the fitted standard track is computed by averaging: the j-th centre is the mean of the centres S_F^(i)(j) of the j-th rectangular boxes over the tracks of the set Ts_m, where N_T^(j) is the number of tracks that have a rectangular box at position j. The fitted standard track is represented by the sequence of these centres.
The standard track is projected onto the Y-Z plane; during implementation, the projected track is linearly interpolated to obtain the interpolated standard track. Linear interpolation here means that a point on the Y axis with no corresponding value is assigned the average of the values on its two sides. A track T^(i) = {S_F^(1), S_F^(2), …, S_F^(j), …, S_F^(N(i))} of the set Ts_m is likewise projected onto the Y-Z plane, where S_F^(j) = (u_j, v_j, H_j, W_j, d_j) denotes the centre of the j-th rectangular box on track T^(i) and N(i) is the number of rectangular boxes on T^(i); T^(i) is the i-th track of Ts_m.
The Euclidean distance between the track T^(i) and the fitted standard track is then computed and serves as the feature F_5.
Slope of the track: the track is fitted to a straight line by least squares, and the slope of this line is computed, denoted characteristic variable F_6.
Average width of the locked regions on the track: the average width of the locked regions on the track, i.e. of all rectangular boxes, denoted feature F_7.
Average length of the locked regions on the track: the average length of the locked regions on the track, i.e. of all rectangular boxes, denoted feature F_8.
The start-point Y value of the track is denoted feature F_9, and the end-point Y value of the track is denoted feature F_10.
The label value L and the attributes of each track are recorded as <L, F_1, F_2, F_3, F_4, F_5, F_6, F_7, F_8, F_9, F_10>.
The label value L and attributes of every track form a set D_l = {D_l^(i) | i = 1, …, N_l} = {<L^(i), F_1^(i), F_2^(i), F_3^(i), F_4^(i), F_5^(i), F_6^(i), F_7^(i), F_8^(i), F_9^(i), F_10^(i)> | i = 1, …, N_l}, where D_l^(i) denotes the i-th element of D_l and N_l is the number of elements of D_l.
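A few of the simpler attributes (F_1, F_2, F_3, F_9, F_10) can be sketched for a track given as locked boxes; the (x, y, z, w, h) tuple layout and the function name `track_features` are illustrative, and F_2 is taken as start Y minus end Y, consistent with the F_9 − F_10 direction test used later:

```python
def track_features(points):
    # points: locked boxes of one track as (x, y, z, w, h) tuples in
    # temporal order; returns a subset of the step-7 attributes
    zs = [p[2] for p in points]
    return {
        "F1": len(points),                    # number of track points
        "F2": points[0][1] - points[-1][1],   # span in Y (start Y - end Y)
        "F3": max(zs) - min(zs),              # span in Z
        "F9": points[0][1],                   # start-point Y value
        "F10": points[-1][1],                 # end-point Y value
    }
```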
Step 8: from the set obtained in step 7, set the upper and lower detection planes of the passage in the A direction and the upper and lower detection planes of the passage in the B direction. The setting method is as follows:
For a track of the set Ts_m, if its attributes satisfy F_9 − F_10 > 0, the track is in the A direction; if they satisfy F_9 − F_10 < 0, the track is in the B direction.
Upper detection plane in the A direction: δ_u = min({F_10^(i)}), where {F_10^(i)} denotes the attribute F_10 of all A-direction tracks;
Lower detection plane in the A direction: δ_d = max({F_9^(i)}), where {F_9^(i)} denotes the attribute F_9 of all A-direction tracks;
Upper detection plane in the B direction: δ_u′ = min({F_9^(i)}), where {F_9^(i)} denotes the attribute F_9 of all B-direction tracks;
Lower detection plane in the B direction: δ_d′ = max({F_10^(i)}), where {F_10^(i)} denotes the attribute F_10 of all B-direction tracks.
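The four plane definitions can be written down directly for labelled tracks given as (F_9, F_10) pairs (`detection_planes` is an illustrative name):

```python
def detection_planes(a_tracks, b_tracks):
    # each track is an (F9, F10) pair, split by direction as in step 8;
    # returns (delta_u, delta_d, delta_u', delta_d')
    du = min(f10 for _, f10 in a_tracks)   # upper plane, A direction
    dd = max(f9 for f9, _ in a_tracks)     # lower plane, A direction
    du_b = min(f9 for f9, _ in b_tracks)   # upper plane, B direction
    dd_b = max(f10 for _, f10 in b_tracks) # lower plane, B direction
    return du, dd, du_b, dd_b
```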
Step 9: repeat the process of steps 3 to 5; in the process, for every track removed from the track set T1_m because it went δ_dis consecutive times without an update, extract its attributes using the method of step 7. If the track's attributes satisfy F_9 − F_10 > 0, the track is in the A direction; if it also satisfies F_9 ≥ δ_d and F_10 ≤ δ_u, the A-direction count is incremented by 1. If F_9 − F_10 < 0, the track is in the B direction; if it also satisfies F_9 ≤ δ_u′ and F_10 ≥ δ_d′, the B-direction count is incremented by 1.
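The step-9 decision for one finished track can be sketched as:

```python
def count_crossing(f9, f10, du, dd, du_b, dd_b):
    # f9/f10: start and end Y of a removed track; du/dd and du_b/dd_b are
    # the A- and B-direction detection planes; returns which direction's
    # counter to increment, or None if the track crossed neither pair
    if f9 - f10 > 0:                      # A direction
        if f9 >= dd and f10 <= du:
            return "A"
    elif f9 - f10 < 0:                    # B direction
        if f9 <= du_b and f10 >= dd_b:
            return "B"
    return None
```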
Step 10: repeat step 9 until the camera stops shooting, obtaining the passage's people-count in the A direction and its people-count in the B direction.
Claims (10)
1. A people counting method based on trajectory analysis, characterized by comprising the following steps:
Step 1: set up an RGB-D camera in the scene of a channel, calibrate the camera, and compute the parameter matrix of the camera; the channel comprises a direction A and a direction B, the two being opposite in direction;
Step 2: continuously shoot the channel containing human targets with the camera to obtain N depth maps; compute the top view of each depth map; compute the background I_b from all the top views obtained;
Step 3: shoot the channel containing human targets with the camera to obtain the depth map of a certain moment m; compute the corresponding top view of this depth map; perform a background subtraction operation on the top view to obtain a foreground image; perform a blocking operation on the foreground image to obtain a blocked picture; perform a local maximum region search on the blocked picture to obtain a set of local maximum regions; perform an extension operation on the set of local maximum regions to obtain an extended set of local maximum regions; perform a rectangle frame filtering operation on the extended set of local maximum regions to obtain a rectangle frame set S_Fm containing multiple elements;
Step 4: if step 3 yields the rectangle frame set S_Fm of the initial moment, generate a plurality of tracks from the rectangle frame set S_Fm of the initial moment, the plurality of tracks forming a track set T_m; if step 3 yields the rectangle frame set S_Fm of a non-initial moment, update the track set formed at the previous moment with the rectangle frame set S_Fm of the non-initial moment, obtaining an updated track set T1_m;
Step 5: if a track in the track set T_m or the updated track set T1_m has not been updated for several consecutive times, mark the track and delete it from the track set T_m or the updated track set T1_m, obtaining a new track set T2_m;
Step 6: from the track set T2_m obtained in step 5, select tracks as samples to form a set Ts_m; if the number of samples reaches a preset value, execute step 7; otherwise, execute step 3;
Step 7: for every track in the set Ts_m, extract the attributes of the track, and record the label value L of each track together with the corresponding attributes of that track; the label values L of all tracks together with the corresponding attributes form a set D_l;
Step 8: according to the set obtained in step 7, set the upper detection face δ_u and the lower detection face δ_d of the channel in direction A, and the upper detection face δ_u' and the lower detection face δ_d' of the channel in direction B;
Step 9: repeat the process of step 3 to step 5; during this process, for every track removed from the track set T_m because it has not been updated for δ_dis consecutive times, extract the attributes of the track using the method described in step 7; the attributes of the track include the Y value F_9 of the starting point of the track and the Y value F_10 of the end point of the track;
If the attributes of the track satisfy F_9 − F_10 > 0, the track is in direction A, and if it further satisfies F_9 ≥ δ_d and F_10 ≤ δ_u, the count for direction A is incremented by 1; if F_9 − F_10 < 0, the track is in direction B, and if it further satisfies F_9 ≤ δ_u' and F_10 ≥ δ_d', the count for direction B is incremented by 1;
Step 10: repeat step 9 until the camera stops shooting, obtaining the people counting result of the channel in direction A and the people counting result of the channel in direction B.
2. The people counting method based on trajectory analysis according to claim 1, characterized in that the top view of each depth map in step 2 and step 3 is computed using the following formula:
Len = m · r
where [p_11 p_12 p_13 p_14; p_21 p_22 p_23 p_24; p_31 p_32 p_33 p_34] is the extrinsic matrix of the camera, p_11, p_12, p_13, p_14, p_21, p_22, p_23, p_24, p_31, p_32, p_33, p_34 being the elements of the extrinsic matrix; θ is the angle between the ray passing through the point P(x_p, y_p, z_p) of the depth map and the ground plane; G(x_G, y_G, 0) is the intersection of the line through the point P with the ground plane; H_C is the camera height; m is the depth value of the point P in the depth map, with 0 < m < D, where D is the maximum pixel value set by the user; r is the distance in world space corresponding to one unit of depth value;
The top view I is obtained using the following formula:
u = r_x · x_p + d_x, v = r_y · y_p + d_y, I(u, v) = z_p
where (u, v) denotes the pixel of the top view I corresponding to the point P of the depth map, I(u, v) denotes the pixel value of the top view I at pixel (u, v), (r_x, r_y) are the zoom factors applied to the point (x_p, y_p), and (d_x, d_y) are the translation coefficients applied to the point (x_p, y_p);
For every point of the depth map, the corresponding pixel of the top view and the pixel value at that pixel are obtained; all the pixels form the top view I.
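The projection of claim 2 can be sketched as follows. This is an illustrative sketch under the assumption that each world point (x_p, y_p, z_p) maps to the top-view pixel (r_x·x_p + d_x, r_y·y_p + d_y) and the pixel stores the point's height; the exact formulas are given as images in the original patent and do not survive extraction.

```python
# Sketch (assumed mapping): project 3-D points onto an X-Y top view.
# When several points fall into the same cell, the highest point wins,
# which is what makes head tops appear as local maxima later.

def to_top_view(points, rx, ry, dx, dy, shape):
    """points: iterable of (x_p, y_p, z_p) world coordinates.
    shape: (height, width) of the output top view."""
    h, w = shape
    top = [[0.0] * w for _ in range(h)]
    for xp, yp, zp in points:
        u, v = int(rx * xp + dx), int(ry * yp + dy)
        if 0 <= v < h and 0 <= u < w:          # discard out-of-view points
            top[v][u] = max(top[v][u], zp)     # keep the highest point per cell
    return top
```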
3. The people counting method based on trajectory analysis according to claim 2, characterized in that in step 3 the background subtraction operation on the top view yields the foreground image using the following formula:
I_F(u, v) = I(u, v), if |I(u, v) − I_b(u, v)| > δ_F; I_F(u, v) = 0, otherwise
where δ_F is a threshold set by the user for extracting the foreground, I_F(u, v) denotes the pixel value of the foreground image I_F at pixel (u, v), I_b(u, v) is the pixel value of the background I_b at pixel (u, v), and I(u, v) denotes the pixel value of the top view I at pixel (u, v).
4. The people counting method based on trajectory analysis according to claim 3, characterized in that in step 3 the blocking operation on the foreground image yields the blocked picture using the following formula:
I_B(x, y) = max of I_F(u, v) over the pixels (u, v) of the block (x, y)
where I_B is the picture obtained by blocking the foreground image, I_F(u, v) is the pixel value of the foreground image I_F at coordinate (u, v), I_B(x, y) is the pixel value of the picture I_B at pixel (x, y), and the size of each delimited block is w_b × w_b.
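The blocking operation of claim 4 can be sketched as follows. The aggregation rule is an assumption (the formula is an image in the original patent); max pooling is used here because it is consistent with the local maximum search that follows.

```python
# Sketch (assumed aggregation): reduce each w_b x w_b block of the
# foreground image to its maximum value, shrinking the search space for
# the later local-maximum scan.

def block_image(fg, wb):
    h, w = len(fg), len(fg[0])
    return [[max(fg[y][x]
                 for y in range(by, min(by + wb, h))
                 for x in range(bx, min(bx + wb, w)))
             for bx in range(0, w, wb)]
            for by in range(0, h, wb)]
```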
5. The people counting method based on trajectory analysis according to claim 4, characterized in that in step 3 the local maximum region search on the blocked picture yields the set of local maximum regions, specifically comprising the following steps:
For a pixel (x, y) of the picture I_B, examine the eight pixels surrounding the pixel; if the pixel value of the pixel is larger than the pixel values of all eight surrounding pixels, put the pixel into the local maximum region set S_L, with S_L^(i) denoting a member of S_L and S_L^(i) = (u_i, v_i, d_i), where (u_i, v_i) denotes the pixel and d_i is the pixel value of the pixel (u_i, v_i) in the picture I_B.
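The eight-neighbour test of claim 5 can be sketched as follows (a minimal illustration; out-of-range neighbours are simply treated as absent, which the claim does not specify).

```python
# Sketch of the local-maximum search: a pixel enters S_L when its value
# strictly exceeds the values of all eight in-range neighbours.

def local_maxima(ib):
    h, w = len(ib), len(ib[0])
    sl = []
    for y in range(h):
        for x in range(w):
            neigh = [ib[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)
                     and 0 <= y + dy < h and 0 <= x + dx < w]
            if ib[y][x] > max(neigh):
                sl.append((x, y, ib[y][x]))   # (u_i, v_i, d_i)
    return sl
```

In the top-view height image, each such maximum is a candidate head top, since the head is the highest point of a standing person.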
6. The people counting method based on trajectory analysis according to claim 5, characterized in that in step 3 the extension operation on the set of local maximum regions yields the extended set of local maximum regions, specifically comprising the following steps:
For each element S_L^(i) of the local maximum region set S_L, find the pixel position (x_i, y_i) in the foreground image I_F corresponding to S_L^(i), each block pixel (u_i, v_i) of I_B covering a w_b × w_b block of I_F; let S_S^(i) = (x_i, y_i, z_i), obtaining the set S_S, S_S^(i) being an element of the set S_S;
For each member S_S^(i) = (x_i, y_i, z_i) of S_S, take S_S^(i) as a seed and extend outward using the seed filling algorithm; the condition for extension is: |I_F(x_i, y_i) − z_i| ≤ δ_E, where I_F(x_i, y_i) denotes the pixel value of the foreground image I_F at coordinate (x_i, y_i); a rectangle frame S_E^(i) = (u_i, v_i, H_i, W_i, z_i) is then used to enclose all pixels satisfying the condition, where (u_i, v_i) is the upper-left corner point of the rectangle frame, (H_i, W_i) are the height and width of the rectangle frame, z_i is the spatial height of the rectangle frame, and δ_E is a preset threshold; the enclosed regions form the set S_E of extended regions, S_E^(i) being an element of the set S_E.
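The seed-fill extension of claim 6 can be sketched as follows. The sketch assumes 4-connected growth (the claim does not fix the connectivity) and returns the enclosing rectangle frame (u, v, H, W, z) with (u, v) the upper-left corner.

```python
# Sketch of the seed-fill extension: grow a region from the seed over
# foreground pixels whose value stays within delta_e of the seed height
# z_i, then enclose the region in a rectangle frame.

def extend_region(fg, seed, delta_e):
    xi, yi, zi = seed
    h, w = len(fg), len(fg[0])
    stack, seen = [(xi, yi)], {(xi, yi)}
    region = []
    while stack:
        x, y = stack.pop()
        if abs(fg[y][x] - zi) <= delta_e:      # extension condition
            region.append((x, y))
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen:
                    seen.add((nx, ny))
                    stack.append((nx, ny))
    xs = [x for x, _ in region]
    ys = [y for _, y in region]
    u, v = min(xs), min(ys)
    return (u, v, max(ys) - v + 1, max(xs) - u + 1, zi)  # (u, v, H, W, z)
```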
7. The people counting method based on trajectory analysis according to claim 6, characterized in that in step 3 the rectangle frame filtering operation on the extended set of local maximum regions yields a rectangle frame set containing multiple elements, comprising the following steps:
The elements of the set S_E are filtered using two filter conditions:
(1) if an element S_E^(i) satisfies H_i < δ_H or W_i < δ_W, the element is deleted, where δ_H is the minimum height threshold and δ_W is the minimum width threshold;
(2) if two rectangle frames S_E^(i) = (u_i, v_i, H_i, W_i, z_i) and S_E^(j) = (u_j, v_j, H_j, W_j, z_j) satisfy the overlap condition, S_E^(i) and S_E^(j) are determined to be overlapped, and if they are overlapped, the rectangle frame with the larger of z_i and z_j is retained;
The retained rectangle frames form the rectangle frame set S_Fm, the elements of the rectangle frame set S_Fm being S_Fm^(i), where m denotes the moment.
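The two-stage filter of claim 7 can be sketched as follows. The size test and the axis-aligned overlap test are reconstructions (the conditions are images in the original patent): a frame smaller than the minimum head size is dropped, and of two overlapping frames the one with the larger spatial height z is kept.

```python
# Sketch of the rectangle-frame filter.  Frames are (u, v, H, W, z) with
# (u, v) the upper-left corner.

def overlaps(a, b):
    """Assumed axis-aligned rectangle intersection test."""
    ua, va, ha, wa, _ = a
    ub, vb, hb, wb, _ = b
    return ua < ub + wb and ub < ua + wa and va < vb + hb and vb < va + ha

def filter_frames(frames, delta_h, delta_w):
    kept = [f for f in frames if f[2] >= delta_h and f[3] >= delta_w]
    kept.sort(key=lambda f: -f[4])       # highest spatial height first
    out = []
    for f in kept:
        if all(not overlaps(f, g) for g in out):
            out.append(f)                # lower overlapping frames are dropped
    return out
```

Keeping the taller of two overlapping frames matches the head-locking idea: the head top is the highest region of a person, and shoulder-level detections are suppressed.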
8. The people counting method based on trajectory analysis according to claim 1, characterized in that, in step 4, if step 3 yields the rectangle frame set of the initial moment, a plurality of tracks are generated from the rectangle frame set of the initial moment, the plurality of tracks forming a track set; if step 3 yields the rectangle frame set of a non-initial moment, the track set formed at the previous moment is updated with the rectangle frame set of the non-initial moment, obtaining an updated track set; specifically comprising the following steps:
If m in the rectangle frame set S_Fm obtained in step 3 equals 1, create a new track T_m^(i) with each rectangle frame S_Fm^(i) of the rectangle frame set S_Fm as its starting point, i.e. T_m^(i) = {S_Fm^(i)}; each track T_m^(i) is an element of the track set T_m, i.e. T_m = {T_m^(i) | i = 1, ..., N_Tm}, where N_Tm is the number of tracks formed from the rectangle frame set S_Fm of moment m;
If m in the rectangle frame set S_Fm obtained in step 3 is not equal to 1, match each element S_Fm^(i) of the rectangle frame set S_Fm with each track of the track set T_{m-1} = {T_{m-1}^(i)} formed from the rectangle frame set S_F(m-1) of moment m−1; the specific matching process is as follows:
Denote the center point of the element S_Fm^(i) as (x_m^(i), y_m^(i), I_F(x_m^(i), y_m^(i))), where x_m^(i) and y_m^(i) are respectively the abscissa and the ordinate of the center point, and I_F(x_m^(i), y_m^(i)) is the pixel value at the coordinate (x_m^(i), y_m^(i)); the center point of the last rectangle frame of the track T_{m-1}^(i) is (x_{m-1}, y_{m-1}, I_F(x_{m-1}, y_{m-1}));
If the distance between the two center points is no greater than δ_match, where δ_match is the maximum threshold for two rectangle frames to match, the rectangle frame S_Fm^(i) matches the track T_{m-1}^(i); if the track T_{m-1}^(i) matches no other rectangle frame of the rectangle frame set S_Fm, the rectangle frame S_Fm^(i) is added to the track T_{m-1}^(i); if the track T_{m-1}^(i) matches another rectangle frame S_Fm^(j) of the set S_Fm, with center point (x_m^(j), y_m^(j), I_F(x_m^(j), y_m^(j))), the following judgement is made:
If the center point of S_Fm^(i) is closer to the center point of the last rectangle frame of the track T_{m-1}^(i) than the center point of S_Fm^(j), the rectangle frame S_Fm^(j) is removed from the track T_{m-1}^(i) and the rectangle frame S_Fm^(i) is added to the track T_{m-1}^(i); if the above condition is not satisfied, the rectangle frame S_Fm^(j) remains in the track T_{m-1}^(i);
In the above process, if, after the matching described above, a rectangle frame of the rectangle frame set S_Fm matches no track, a new track is generated with that rectangle frame as its first point, and the generated track is added to the track set T1_m.
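The track update of claim 8 can be sketched as follows. The matching metric is a reconstruction (planar distance between a frame centre and the last centre of a track, thresholded by δ_match), and the sketch simplifies one detail: a frame displaced by a closer competitor is dropped rather than re-queued.

```python
import math

# Sketch of the nearest-frame track update.  tracks: list of lists of
# (cx, cy) centres; frames: list of (cx, cy) centres detected at moment m.

def update_tracks(tracks, frames, delta_match):
    matched = {}                         # track index -> (distance, frame)
    new_tracks = []
    for c in frames:
        best, best_d = None, delta_match
        for i, t in enumerate(tracks):
            d = math.dist(c, t[-1])      # distance to the track's last centre
            if d <= best_d:
                best, best_d = i, d
        if best is None:
            new_tracks.append([c])       # unmatched frame starts a new track
        elif best not in matched or best_d < matched[best][0]:
            matched[best] = (best_d, c)  # the closer frame wins the track
    for i, (_, c) in matched.items():
        tracks[i].append(c)
    return tracks + new_tracks
```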
9. The people counting method based on trajectory analysis according to claim 1, characterized in that the attributes of a track in step 7 include: the number of track points, denoted as feature variable F_1; the span of the track in the Y direction, denoted as feature variable F_2; the span of the track in the Z direction, denoted as feature variable F_3; the average head size of the track, denoted as feature variable F_4; the Euclidean distance between the track and the fitted standard track, denoted as feature variable F_5; the slope of the track, denoted as feature variable F_6; the average width of the locked regions on the track, denoted as feature variable F_7; the average length of the locked regions on the track, denoted as feature variable F_8; the Y value of the starting point of the track, denoted as feature variable F_9; and the Y value of the end point of the track, denoted as feature variable F_10.
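The ten-attribute record of claim 9 can be sketched as a plain data structure (the field names are illustrative translations of F_1 to F_10, not names from the patent).

```python
from typing import NamedTuple

# Sketch of the per-track attribute record D_l^(i) of Step 7 / claim 9.

class TrackAttributes(NamedTuple):
    n_points: int           # F1: number of track points
    span_y: float           # F2: span of the track in the Y direction
    span_z: float           # F3: span of the track in the Z direction
    avg_head_size: float    # F4: average head size along the track
    fit_distance: float     # F5: Euclidean distance to the fitted standard track
    slope: float            # F6: slope of the track
    avg_lock_width: float   # F7: average width of locked regions on the track
    avg_lock_length: float  # F8: average length of locked regions on the track
    start_y: float          # F9: Y value of the track's starting point
    end_y: float            # F10: Y value of the track's end point

    def direction(self):
        """Direction rule of Step 8: F9 - F10 > 0 means direction A."""
        return 'A' if self.start_y - self.end_y > 0 else 'B'
```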
10. The people counting method based on trajectory analysis according to claim 9, characterized in that, in step 8:
The upper detection face in direction A: δ_u = min({F_10^(i)}), where {F_10^(i)} denotes the attribute F_10 of all tracks in direction A;
The lower detection face in direction A: δ_d = max({F_9^(i)}), where {F_9^(i)} denotes the attribute F_9 of all tracks in direction A;
The upper detection face in direction B: δ_u' = min({F_9^(i)}), where {F_9^(i)} denotes the attribute F_9 of all tracks in direction B;
The lower detection face in direction B: δ_d' = max({F_10^(i)}), where {F_10^(i)} denotes the attribute F_10 of all tracks in direction B.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610938573.8A CN106599776B (en) | 2016-10-25 | 2016-10-25 | A kind of demographic method based on trajectory analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106599776A CN106599776A (en) | 2017-04-26 |
CN106599776B true CN106599776B (en) | 2019-06-28 |
Family
ID=58589685
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610938573.8A Active CN106599776B (en) | 2016-10-25 | 2016-10-25 | A kind of demographic method based on trajectory analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106599776B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110490030B (en) * | 2018-05-15 | 2023-07-14 | 保定市天河电子技术有限公司 | Method and system for counting number of people in channel based on radar |
RU2696548C1 (en) | 2018-08-29 | 2019-08-02 | Александр Владимирович Абрамов | Method of constructing a video surveillance system for searching and tracking objects |
CN110223404A (en) * | 2019-04-15 | 2019-09-10 | 杭州丰锐智能电气研究院有限公司 | A kind of time counting method and its embedded device of embedded device |
JP7409163B2 (en) | 2020-03-06 | 2024-01-09 | 株式会社豊田中央研究所 | Stationary sensor calibration device and stationary sensor calibration method |
CN113570755A (en) * | 2021-07-20 | 2021-10-29 | 菲特(天津)检测技术有限公司 | System, method, medium and application for monitoring and alarming personnel entering and exiting production line workshop |
CN114511592B (en) * | 2022-01-21 | 2024-07-05 | 海纳云物联科技有限公司 | Personnel track tracking method and system based on RGBD camera and BIM system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103871082A (en) * | 2014-03-31 | 2014-06-18 | 百年金海科技有限公司 | Method for counting people stream based on security and protection video image |
CN104217208A (en) * | 2013-06-03 | 2014-12-17 | 株式会社理光 | Target detection method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105654021B (en) * | 2014-11-12 | 2019-02-01 | 株式会社理光 | Method and apparatus of the detection crowd to target position attention rate |
- 2016-10-25 CN CN201610938573.8A patent/CN106599776B/en active Active
Non-Patent Citations (2)
Title |
---|
Vision-Based Obstacle Avoidance in Sidewalk Environment Using Top-View Transform and Optical-Flow; Qing Lin et al.; Journal of Measurement Science and Instrumentation; 31 Dec. 2011; Vol. 2, No. 4; full text
A Fast Top-View Pedestrian Detection Method; Tang Chunhui et al.; Journal of System Simulation; 30 Sep. 2012; Vol. 24, No. 9; full text
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||