A traffic flow statistics method oriented to optimization control of urban traffic signals
Technical field
The invention belongs to the field of intelligent transportation, and in particular relates to a traffic flow statistics method oriented to optimization control of urban traffic signals.
Background technique
Estimating the number of vehicles in a traffic video sequence is a vital task in intelligent transportation systems, as it provides reliable information for traffic management and control. In traditional intelligent transportation systems, vehicle counting is performed by dedicated sensors such as magnetic loops, microwave detectors, or ultrasonic detectors. These sensors, however, have limitations: the data they acquire are overly simple, and their installation cost is high. With the development of image processing techniques, video-based vehicle counting methods have begun to attract attention as an alternative to traditional sensors.
Vehicle counting with machine vision comprises detection, tracking, and trajectory processing. Existing counting methods fall mainly into three classes: regression-based methods, clustering-based methods, and detection (matching)-based methods. Regression-based methods learn a regression function from features of the detection region; clustering-based methods track target features to obtain trajectories and count objects by clustering those trajectories. These counting methods share some common problems: the video angle is restricted, vehicle trajectories are complex and uncertain, and complex scenes cannot be handled.
Summary of the invention
To address the problems of restricted video angle, slow computation speed, and inability to handle complex scenes in the prior art, the present invention provides a traffic flow statistics method oriented to optimization control of urban traffic signals, comprising the following steps:
Step 1: acquire a video of the traffic scene, obtain video screenshots, perform classification annotation on the screenshots, and use the annotated screenshots as the sample set;
Step 2: train on the sample set obtained in step 1 with the YOLOv3 algorithm to obtain a detection model; feed the traffic-scene video into the detection model to obtain, for each frame, the pixel information of the image and the detection result information of the targets, where the t-th frame of the video is denoted Frame_t and t is a positive integer frame index;
Step 3: create a temporary trajectory list Ts, which is empty at this point; read the first frame Frame_1 of the traffic-scene video obtained in step 2 as the current frame, create a new trajectory for each target detected in Frame_1, and add all new trajectories to Ts; then update Frame_2 as the current frame, with the detection result information of each target in Frame_1 serving as the corresponding trajectory endpoint information of Frame_2, and go to step 4;
Step 4: let the current frame be Frame_t and the next frame be Frame_{t+1}; match each trajectory endpoint in Frame_t against the detection result information of the targets in Frame_t. Take the detection result information of each successfully matched target in Frame_t as the corresponding trajectory endpoint information in Frame_{t+1}, continuing the trajectory. Take the detection result information of each unmatched detection in Frame_t as the starting point of a new trajectory, create the new trajectory, and add it to Ts; the starting point of this new trajectory in Frame_t then becomes its trajectory endpoint information in Frame_{t+1}. For each target whose trajectory endpoint fails to match in Frame_t, use the KCF algorithm to obtain the predicted position of the corresponding target in Frame_{t+1}, continue the trajectory with that prediction, and increment the trajectory confidence counter Timer by 1. If Frame_t is not the last frame of the video, update Frame_{t+1} as the current frame and repeat step 4; otherwise go to step 5;
Step 5: screen the trajectories in Ts to obtain the complete trajectory list TA; set the cluster number according to the number of intersection arms and cluster the start and end points of every trajectory in TA to obtain the cluster center point set and the road center point;
Step 6: partition the intersection according to the cluster center point set and road center point obtained in step 5; then calculate the angle of each trajectory and encode each trajectory according to the intersection partition, obtaining the complete trajectory list TB with direction information; perform counting statistics on TB;
Step 7: apply the Webster timing method to the counting statistics obtained in step 6 to calculate the total cycle time and the timing of each signal, thereby obtaining the traffic flow information of the traffic-scene video.
Further, step 1 comprises the following sub-steps:
Step 1.1: acquire a video of the traffic scene and obtain 5000 video screenshots containing sample images of targets such as buses, trucks, cars, motorcycles, bicycles, and pedestrians;
Step 1.2: annotate the video screenshots with an image labeling tool, the annotation comprising labeling the category of each target in the image and its position in the image; the annotated screenshots form the sample set.
Further, step 2 comprises the following sub-steps: train on the sample set obtained in step 1 with the YOLOv3 algorithm to obtain a detection model; feed the traffic-scene video into the detection model to obtain, for each frame, the pixel information of the image and the detection result information of the targets. The t-th frame of the video is denoted Frame_t, where t is a positive integer frame index; I_t denotes the pixel information of the image of frame t, including the width, height, area, and pixel data of the picture; DB_t denotes the detection result of frame t, with DB_t = {BB_i, i = 1, 2, …, n}, where BB_i denotes the i-th target detected in frame t. The detection result information of a target comprises the midpoint coordinates, width, height, and area of its detection envelope (bounding) box.
Further, the matching process in step 4 is as follows: compute the overlap ratio Overlap between the endpoint information B_last of each trajectory T_i and the detection result information BB_i of the corresponding target in the current frame, where Overlap is the ratio of the area of the overlapping region of the two rectangular boxes of B_last and BB_i to their total occupied area; then compute the pixel distance Dis between the center points of the two bounding boxes; finally, compute from the weighted combination of Overlap and Dis the matching degree MatchValue that B_last and BB_i are the same target. If the matching degree is greater than or equal to the threshold, the match succeeds; otherwise it fails. The value range of MatchValue is [0, 1].
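The matching score above can be sketched as follows. The patent only states that MatchValue is a weighted combination of the overlap ratio and the center-point distance scaled into [0, 1]; the weight `alpha` and the distance normalization `dist_scale` below are assumptions, not values from the patent.

```python
import math

def overlap_ratio(a, b):
    """Overlap of two boxes given as (cx, cy, w, h): the area of the
    overlapping region divided by the total occupied (union) area."""
    ax1, ay1 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax2, ay2 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx2, by2 = b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def match_value(b_last, bb, alpha=0.5, dist_scale=100.0):
    """Weighted score in [0, 1]; higher means B_last and BB_i are more
    likely the same target. alpha and dist_scale are assumed parameters."""
    ov = overlap_ratio(b_last, bb)
    dis = math.hypot(b_last[0] - bb[0], b_last[1] - bb[1])
    # Map the pixel distance into [0, 1] (assumed normalization).
    closeness = max(0.0, 1.0 - dis / dist_scale)
    return alpha * ov + (1 - alpha) * closeness
```

A match is then accepted when `match_value(...) >= 0.7`, per the threshold given below.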
Further, the threshold of MatchValue in step 4 is set to 0.7.
Further, when trajectory endpoint matching fails in step 4, using the KCF algorithm to obtain the predicted position in Frame_{t+1} of the corresponding target in Frame_t covers the following two situations:
if a predicted position of the target in Frame_{t+1} is obtained, update the trajectory endpoint information in Frame_{t+1} with the predicted position, continue the trajectory, and increment Timer by 1;
if no predicted position of the target in Frame_{t+1} is obtained, copy the trajectory endpoint information of Frame_t as the trajectory endpoint information of Frame_{t+1}, continue the trajectory, and increment Timer by 1.
Further, step 5 specifically comprises the following sub-steps:
Step 5.1: screen the trajectories in Ts under the following conditions: when the Timer of the selected trajectory exceeds 30, or when the midpoint coordinates of the detection envelope box of the trajectory endpoint lie on the video boundary, delete the selected trajectory from the temporary trajectory list Ts and save it to the complete trajectory list TA, thereby obtaining the complete trajectory list TA;
Step 5.2: set the cluster number k according to the number of intersection arms, feed the start and end points of every trajectory in TA into the K-means algorithm for clustering, and output the cluster center point set PA = {P_w, w = 1, …, k}, where P_w is the w-th cluster center; take the central point of the cluster center set PA as the road center point PCent.
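Step 5.2 can be sketched with a plain K-means over the trajectory start/end points. This is a minimal stdlib implementation for illustration (a library such as scikit-learn would normally be used); the iteration count and seeding are assumptions.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means over 2-D points; returns the k cluster center points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each endpoint to its nearest center (squared distance).
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            groups[j].append(p)
        # Recompute each center as the mean of its group.
        centers = [(sum(q[0] for q in g) / len(g),
                    sum(q[1] for q in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def road_center(centers):
    """PCent: the central point (mean) of the cluster center set PA."""
    n = len(centers)
    return (sum(c[0] for c in centers) / n, sum(c[1] for c in centers) / n)
```

For a crossroad, k = 4 and the four centers roughly mark the four approach arms; PCent then approximates the middle of the intersection.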
Further, step 6 comprises the following sub-steps:
Step 6.1: establish a polar coordinate system from the cluster center point set PA and the road center point PCent obtained in step 5, with PCent = (x1, y1) as the pole and a ray drawn horizontally to the right as the polar axis; the counter-clockwise direction is taken as positive, the polar angle θ is measured in degrees, and its range is (0, 360). For another point P = (x2, y2) (in image coordinates, whose y axis points downward), the θ value of P is calculated by the following formulas:
when x2 > x1 and y2 > y1, θ = 360 - 180/pi*arctan((y2-y1)/(x2-x1));
when x2 = x1 and y2 > y1, θ = 270;
when x2 < x1 and y2 > y1, θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
when x2 < x1 and y2 = y1, θ = 180;
when x2 < x1 and y2 < y1, θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
when x2 = x1 and y2 < y1, θ = 90;
when x2 > x1 and y2 < y1, θ = -180/pi*arctan((y2-y1)/(x2-x1));
when x2 > x1 and y2 = y1, θ = 0;
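The eight cases above collapse into a single `atan2` call once the downward image y axis is negated; the following sketch reproduces the same θ values:

```python
import math

def polar_angle(pole, p):
    """Polar angle θ in degrees, measured counter-clockwise from the
    horizontal-right polar axis, for image coordinates whose y axis
    points downward (hence the negated y difference)."""
    x1, y1 = pole
    x2, y2 = p
    theta = math.degrees(math.atan2(-(y2 - y1), x2 - x1))
    return theta % 360.0
```

For example, a point directly below the pole in the image (x2 = x1, y2 > y1) yields θ = 270, matching the second case above.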
Step 6.2: for each P ∈ PA, use the formulas in step 6.1 to obtain the θ value of each cluster center point in PA; sort the θ values of the cluster center points to complete the partition of the intersection;
Step 6.3: take the start and end point information of every trajectory, calculate the angle of each trajectory using the formulas in step 6.1, and encode each trajectory according to the intersection partition, obtaining the complete trajectory list TB with direction information; perform counting statistics on TB for the three movements of left turn, right turn, and straight through.
Further, when the traffic scene in step 6.2 is a crossroad, the cluster center number is k = 4. Calculate the angles θ_1, θ_2, θ_3, θ_4 corresponding to the four cluster center points and sort them in ascending order: 0 <= θ_1 < θ_2 < θ_3 < θ_4 <= 360; then calculate the partition boundary angles θ'_1, θ'_2, θ'_3, θ'_4, which are the parameters for partitioning the current scene, and sort them in ascending order: 0 <= θ'_1 < θ'_2 < θ'_3 < θ'_4 <= 360; divide (θ'_1, θ'_2) into region A, (θ'_2, θ'_3) into region B, (θ'_3, θ'_4) into region C, and (θ'_4, 360) together with (0, θ'_1) into region D, completing the partition of the intersection.
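The interval-to-region mapping and the direction encoding of steps 6.2–6.3 can be sketched as follows. The boundary angles θ'_1–θ'_4 are taken as given (the patent derives them from the sorted cluster-center angles; the formula is not reproduced here), and the two-letter entry/exit encoding is an assumed representation.

```python
import math

def polar_angle(pole, p):
    """θ in degrees, CCW from horizontal right, image y axis downward."""
    return math.degrees(math.atan2(-(p[1] - pole[1]), p[0] - pole[0])) % 360.0

def region_of(theta, bounds):
    """Map an angle to a region label given sorted boundaries
    bounds = [t1, t2, t3, t4] (the θ' values of the text):
    (t1, t2) -> 'A', (t2, t3) -> 'B', (t3, t4) -> 'C',
    (t4, 360) and (0, t1) -> 'D'."""
    t1, t2, t3, t4 = bounds
    if t1 < theta <= t2:
        return "A"
    if t2 < theta <= t3:
        return "B"
    if t3 < theta <= t4:
        return "C"
    return "D"

def encode_track(start, end, pole, bounds):
    """Label a trajectory by its entry and exit regions, e.g. 'AC'."""
    return (region_of(polar_angle(pole, start), bounds)
            + region_of(polar_angle(pole, end), bounds))
```

A trajectory encoded "AC" crosses the intersection from region A to region C; counting the codes per pair yields the left-turn, right-turn, and straight-through totals.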
The present invention brings the following beneficial effects:
The present invention offers better precision and richer data, providing richer traffic parameter information such as detected vehicle type, density, speed, and traffic accidents; its implementation cost is low, and installation and maintenance are simple. The present invention can be used for accident early warning, congestion prevention, and automatic path planning, and the proposed method still performs well in scenes with heavy traffic flow and high complexity. Meanwhile, signal timing can also be performed from the traffic flow information of a crossroad obtained over different time periods, bringing significant economic benefits and improving traffic efficiency.
Description of the drawings
Fig. 1 is a region-coding sample image of a traffic scene;
Fig. 2 is a traffic scene sample image;
Fig. 3 is a sample annotation example image;
Fig. 4 is the loss curve of the deep learning training process;
Fig. 5 is a deep learning detection result image;
Fig. 6 shows the trajectories of the target detection and tracking results;
Fig. 7 (a) is a region division example for a crossroad;
Fig. 7 (b) is a region division example for a T-junction;
Fig. 8 is the actual traffic flow scene model diagram;
Fig. 9 is the actual traffic scene parameter input diagram;
Figure 10 is the timing scheme that does not distinguish the vehicle flow driving directions at the intersection;
Figure 11 is the timing scheme that distinguishes the vehicle flow driving directions at the intersection;
Figure 12 is evaluation result 1 of the timing scheme that does not distinguish the vehicle flow driving directions at the intersection;
Figure 13 is evaluation result 2 of the timing scheme that does not distinguish the vehicle flow driving directions at the intersection;
Figure 14 is evaluation result 1 of the timing scheme that distinguishes the vehicle flow driving directions at the intersection;
Figure 15 is evaluation result 2 of the timing scheme that distinguishes the vehicle flow driving directions at the intersection.
Specific embodiment
A specific embodiment of the present invention is provided below. It should be noted that the invention is not limited to the following embodiment; all equivalent transformations made on the basis of the technical solutions of the present application fall within the protection scope of the present invention.
A traffic flow statistics method oriented to optimization control of urban traffic signals comprises the following steps:
Step 1: acquire a video of the traffic scene, obtain video screenshots, perform classification annotation on the screenshots, and use the annotated screenshots as the sample set;
Step 2: train on the sample set obtained in step 1 with the YOLOv3 algorithm to obtain a detection model; feed the traffic-scene video into the detection model to obtain, for each frame, the pixel information of the image and the detection result information of the targets, where the t-th frame of the video is denoted Frame_t and t is a positive integer frame index;
Step 3: create a temporary trajectory list Ts, which is empty at this point; read the first frame Frame_1 of the traffic-scene video obtained in step 2 as the current frame, create a new trajectory for each target detected in Frame_1, and add all new trajectories to Ts; then update Frame_2 as the current frame, with the detection result information of each target in Frame_1 serving as the corresponding trajectory endpoint information of Frame_2, and go to step 4;
Step 4: let the current frame be Frame_t and the next frame be Frame_{t+1}; match each trajectory endpoint in Frame_t against the detection result information of the targets in Frame_t. Take the detection result information of each successfully matched target in Frame_t as the corresponding trajectory endpoint information in Frame_{t+1}, continuing the trajectory. Take the detection result information of each unmatched detection in Frame_t as the starting point of a new trajectory, create the new trajectory, and add it to Ts; the starting point of this new trajectory in Frame_t then becomes its trajectory endpoint information in Frame_{t+1}. For each target whose trajectory endpoint fails to match in Frame_t, use the KCF algorithm to obtain the predicted position of the corresponding target in Frame_{t+1}, continue the trajectory with that prediction, and increment the trajectory confidence counter Timer by 1. If Frame_t is not the last frame of the video, update Frame_{t+1} as the current frame and repeat step 4; otherwise go to step 5;
Step 5: screen the trajectories in Ts to obtain the complete trajectory list TA; set the cluster number according to the number of intersection arms and cluster the start and end points of every trajectory in TA to obtain the cluster center point set and the road center point;
Step 6: partition the intersection according to the cluster center point set and road center point obtained in step 5; then calculate the angle of each trajectory and encode each trajectory according to the intersection partition, obtaining the complete trajectory list TB with direction information; perform counting statistics on TB;
Step 7: apply the Webster timing method to the counting statistics obtained in step 6 to calculate the total cycle time and the timing of each signal, thereby obtaining the traffic flow information of the traffic-scene video.
Specifically, step 1 comprises the following sub-steps:
Step 1.1: as shown in Figures 2 and 3, acquire a video of the traffic scene and obtain 5000 video screenshots containing sample images of targets such as buses, trucks, cars, motorcycles, bicycles, and pedestrians;
Step 1.2: annotate the video screenshots with an image labeling tool; the annotation comprises labeling the category and position of each target in the image, and the annotated screenshots form the sample set.
Preferably, the annotated video screenshots are scaled to a size of 720 × 480 for convenient processing.
Specifically, step 2 comprises the following sub-steps:
As shown in Fig. 4 and Fig. 5, train on the sample set obtained in step 1 with the YOLOv3 algorithm to obtain a detection model; feed the traffic-scene video into the detection model to obtain, for each frame, the pixel information of the image and the detection result information of the targets. The t-th frame of the video is denoted Frame_t, where t is a positive integer frame index; I_t denotes the pixel information of the image of frame t, including the width, height, area, and pixel data of the picture, which provides the basis for the target features; DB_t denotes the detection result of frame t, with DB_t = {BB_i, i = 1, 2, …, n}, where BB_i denotes the i-th target detected in frame t. The detection result information of a target comprises the midpoint coordinates (Centx, Centy), width, height, and area of its detection envelope box.
DB_t may be empty, indicating that no target is detected in the current image frame.
Finally, I_t and DB_t are bound to Frame_t and output as the result of the detection stage for subsequent processing.
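The per-frame bundle of I_t and DB_t can be sketched as a pair of small records. The field names and the class label on each box are assumptions for illustration (YOLOv3 outputs a class per box, but the patent does not fix a data structure):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BB:
    """One detection BB_i: envelope-box midpoint (Centx, Centy), size,
    and a class label (label field is an assumption)."""
    cx: float
    cy: float
    w: float
    h: float
    label: str = "car"

    @property
    def area(self) -> float:
        return self.w * self.h

@dataclass
class FrameRecord:
    """Frame_t bound together with its pixel info I_t (width, height,
    area) and its detection result DB_t (boxes may be empty)."""
    t: int
    width: int
    height: int
    boxes: List[BB] = field(default_factory=list)

    @property
    def area(self) -> int:
        return self.width * self.height
```

An empty `boxes` list corresponds to DB_t being empty, i.e. no target detected in the frame.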
Specifically, the matching process in step 4 is as follows: compute the overlap ratio Overlap between the endpoint information B_last of each trajectory T_i and the detection result information BB_i of the corresponding target in the current frame, where Overlap is the ratio of the area of the overlapping region of the two rectangular boxes of B_last and BB_i to their total occupied area; then compute the pixel distance Dis between the center points of the two bounding boxes; finally, compute from the weighted combination of Overlap and Dis the matching degree MatchValue that B_last and BB_i are the same target. If the matching degree is greater than or equal to the threshold, the match succeeds; otherwise it fails. The value range of MatchValue is [0, 1].
Preferably, the threshold of MatchValue in step 4 is set to 0.7.
Specifically, when trajectory endpoint matching fails in step 4, using the KCF algorithm to obtain the predicted position in Frame_{t+1} of the corresponding target in Frame_t covers the following two situations:
if a predicted position of the target in Frame_{t+1} is obtained, update the trajectory endpoint information in Frame_{t+1} with the predicted position, continue the trajectory, and increment Timer by 1;
if no predicted position of the target in Frame_{t+1} is obtained, copy the trajectory endpoint information of Frame_t as the trajectory endpoint information of Frame_{t+1}, continue the trajectory, and increment Timer by 1.
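The two fallback cases reduce to one update rule: keep the KCF prediction if there is one, otherwise duplicate the last endpoint, and increment Timer either way. A minimal sketch, with the dictionary track representation an assumption (the patent does not specify a data structure):

```python
def continue_track(track, prediction):
    """Continue a trajectory after a failed detection match.

    `prediction` is the KCF-predicted endpoint in Frame_{t+1}, or None
    when the tracker produced no prediction; in that case the endpoint
    of Frame_t is simply carried forward. Timer always increments."""
    if prediction is not None:
        track["end"] = prediction
    track["timer"] += 1
    return track
```

Because Timer only grows while detection matching keeps failing, the screening rule Timer > 30 in step 5.1 retires trajectories that have been coasting on predictions for too long.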
Specifically, step 5 comprises the following sub-steps:
Step 5.1: screen the trajectories in Ts under the following conditions: when the Timer of the selected trajectory exceeds 30, or when the midpoint coordinates of the detection envelope box of the trajectory endpoint lie on the video boundary, delete the selected trajectory from the temporary trajectory list Ts and save it to the complete trajectory list TA, obtaining the complete trajectory list TA and the vehicle trajectories shown in Fig. 1;
Step 5.2: set the cluster number k according to the number of intersection arms, feed the start and end points of every trajectory in TA into the K-means algorithm for clustering, and output the cluster center point set PA = {P_w, w = 1, …, k}, where P_w is the w-th cluster center; take the central point of the cluster center set PA as the road center point PCent.
Preferably, as shown in Fig. 7, when the traffic scene is a crossroad, a T-junction, or a road segment, k is set to 4, 3, or 2, respectively; the center point of the road in the video scene is then obtained from the k cluster centers PA according to the three cases: for a crossroad, take the intersection of the diagonals of the quadrilateral formed by the four cluster center points; for a T-junction, take the geometric center of the triangle formed by the three cluster center points; for a road segment, take the midpoint of the line segment joining the two cluster center points.
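The three center-point constructions above can be sketched as follows. For k = 4 the cluster centers are assumed to be ordered around the intersection so that (c0, c2) and (c1, c3) are the diagonals; the triangle centroid and segment midpoint both reduce to a coordinate mean.

```python
def seg_intersection(p1, p2, p3, p4):
    """Intersection of the lines p1-p2 and p3-p4 (assumed non-parallel)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def road_center_point(centers):
    """PCent per the three cases: k = 4 -> diagonal intersection of the
    quadrilateral; k = 3 -> triangle centroid; k = 2 -> segment midpoint."""
    k = len(centers)
    if k == 4:
        return seg_intersection(centers[0], centers[2],
                                centers[1], centers[3])
    xs = [c[0] for c in centers]
    ys = [c[1] for c in centers]
    return (sum(xs) / k, sum(ys) / k)
```

For a symmetric crossroad the diagonal intersection and the mean coincide; they differ when the four arms are unevenly placed, which is why the patent distinguishes the cases.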
Specifically, step 6 comprises the following sub-steps:
Step 6.1: establish a polar coordinate system from the cluster center point set PA and the road center point PCent obtained in step 5, with PCent = (x1, y1) as the pole and a ray drawn horizontally to the right as the polar axis; the counter-clockwise direction is taken as positive, the polar angle θ is measured in degrees, and its range is (0, 360). For another point P = (x2, y2) (in image coordinates, whose y axis points downward), the θ value of P is calculated by the following formulas:
when x2 > x1 and y2 > y1, θ = 360 - 180/pi*arctan((y2-y1)/(x2-x1));
when x2 = x1 and y2 > y1, θ = 270;
when x2 < x1 and y2 > y1, θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
when x2 < x1 and y2 = y1, θ = 180;
when x2 < x1 and y2 < y1, θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
when x2 = x1 and y2 < y1, θ = 90;
when x2 > x1 and y2 < y1, θ = -180/pi*arctan((y2-y1)/(x2-x1));
when x2 > x1 and y2 = y1, θ = 0;
Step 6.2: for each P ∈ PA, use the formulas in step 6.1 to obtain the θ value of each cluster center point in PA; sort the θ values of the cluster center points to complete the partition of the intersection;
Step 6.3: take the start and end point information of every trajectory, calculate the angle of each trajectory using the formulas in step 6.1, and encode each trajectory according to the intersection partition, obtaining the complete trajectory list TB with direction information; perform counting statistics on TB for the three movements of left turn, right turn, and straight through.
Preferably, when the traffic scene in step 6.2 is a crossroad, the cluster center number is k = 4. Calculate the angles θ_1, θ_2, θ_3, θ_4 corresponding to the four cluster center points and sort them in ascending order: 0 <= θ_1 < θ_2 < θ_3 < θ_4 <= 360; then calculate the partition boundary angles θ'_1, θ'_2, θ'_3, θ'_4, which are the parameters for partitioning the current scene, and sort them in ascending order: 0 <= θ'_1 < θ'_2 < θ'_3 < θ'_4 <= 360; divide (θ'_1, θ'_2) into region A, (θ'_2, θ'_3) into region B, (θ'_3, θ'_4) into region C, and (θ'_4, 360) together with (0, θ'_1) into region D, completing the partition of the intersection. Preferably, as shown in Fig. 1, the same procedure applies by analogy: a T-junction is divided into three regions (A, B, C) and a road segment into two regions (A, B).
Table 1 shows a sample of the detailed traffic flow statistics results obtained from one hour of traffic video.
Embodiment:
Fig. 8 shows the creation and simulation of the actual-scene traffic flow model in Synchro. The vehicle flow in each lane of the crossroad was obtained by manual counting; according to the road conditions in the actual traffic scene, the saturated traffic flow, the road channelization scheme, and the vehicle flow of each direction at each approach are entered into the system, as shown in Fig. 9.
The hourly vehicle flow of each phase lane at the crossroad, combined with the actual conditions of the approaches, is applied to the design of the signal timing scheme. The signal timing is calculated by the Webster method, which is effective and widely used in the current signal timing field; using the Webster method requires knowing the relevant parameters of each phase signal. Since right-turning vehicles are not controlled by the signal lamp, and traditional counting methods cannot clearly distinguish the flow of each movement at an approach but only obtain the total lane flow, it is first assumed that only the total flow of each phase lane is known; the signal timing scheme is then designed by the Webster method, with the design result shown in Fig. 10. Next, using the counting results of the present patent, the signal timing scheme is designed by the Webster method after ignoring right-turning vehicles; the design result is shown in Fig. 11.
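The Webster calculation used here can be sketched from its standard form: the optimal cycle is C0 = (1.5L + 5) / (1 - Y), where L is the total lost time per cycle and Y the sum of the critical flow ratios of the phases, with the effective green time split in proportion to each phase's critical flow ratio. The patent does not give its phase parameters, so the inputs below are illustrative only.

```python
def webster_timing(critical_ratios, lost_time):
    """Webster's method: optimal cycle C0 = (1.5*L + 5) / (1 - Y),
    effective green g_i = (y_i / Y) * (C0 - L), all times in seconds.

    critical_ratios: list of critical flow ratios y_i, one per phase.
    lost_time: total lost time L per cycle."""
    Y = sum(critical_ratios)
    assert Y < 1.0, "oversaturated intersection; Webster's formula does not apply"
    c0 = (1.5 * lost_time + 5) / (1 - Y)
    greens = [(y / Y) * (c0 - lost_time) for y in critical_ratios]
    return c0, greens
```

With movement-resolved counts from the present method, each phase's critical flow ratio reflects only the signal-controlled movements, which is what allows the shorter cycle and lower delay reported below.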
Finally, to illustrate the advantages of the present patent scheme, the simulation results of the models in the above two situations are systematically evaluated and assessment reports are exported; partial reports are shown in Figures 12, 13, 14, and 15.
Figures 14 and 15 are the evaluation results obtained in the case where the vehicle driving directions at the intersection are distinguished by the present patent model, i.e., the vehicle flow of each driving direction is known. According to the vehicle driving conditions at the intersection, the U.S. traffic capacity manual divides LOS (level of service) into 8 grades, A through H.
Comparing Fig. 12 and Fig. 14 shows that the LOS grade of the lanes in the timing scheme that distinguishes the vehicle driving directions at the intersection is significantly improved. The comparison also clearly shows that the Total Delay (vehicle delay) of each lane in that timing scheme is significantly reduced, with the mean delay reduced by 2.2 s. Through the optimization of the patent model, the overall grade of the intersection also improves by one grade, rising from grade B to grade A, as can be seen from the comparison of the Intersection LOS grades in Fig. 13 and Fig. 15.