CN109584558A - Traffic flow statistics method for urban traffic signal timing optimization - Google Patents


Info

Publication number
CN109584558A
CN109584558A
Authority
CN
China
Prior art keywords
frame
information
target
track
video
Prior art date
Legal status
Pending
Application number
CN201811540864.7A
Other languages
Chinese (zh)
Inventor
宋焕生
戴喆
贾金明
张朝阳
侯景严
云旭
李润青
王璇
武非凡
梁浩翔
孙士杰
刘莅辰
唐心瑶
Current Assignee
Changan University
Original Assignee
Changan University
Priority date
Filing date
Publication date
Application filed by Changan University
Priority to CN201811540864.7A
Publication of CN109584558A
Legal status: Pending

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0145 Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract


The invention belongs to the field of intelligent transportation and specifically relates to a traffic flow statistics method for urban traffic signal timing. Image processing techniques are used to detect and track the traffic targets in a video and obtain their trajectory information; the trajectory information and the video scene information are then analyzed, the start- and end-point coordinates of each trajectory are extracted and clustered to obtain the partition information of the scene, and finally detailed traffic flow information is obtained. The invention offers better accuracy and richer data, provides more abundant traffic parameter information, and can be used for accident early warning, congestion prevention, and automatic path planning; in particular, the method still performs well in complex scenes with heavy traffic. Moreover, by obtaining the traffic flow information of an intersection for different time periods, signal timing can also be performed, which brings significant economic benefit and improves traffic efficiency.

Description

Traffic flow statistics method for urban traffic signal timing optimization
Technical field
The invention belongs to the field of intelligent transportation, and in particular relates to a traffic flow statistics method for urban traffic signal timing optimization.
Background technique
Estimating the number of vehicles in a traffic video sequence is a vital task in intelligent transportation systems and can provide reliable information for traffic management and control. In traditional intelligent transportation systems, vehicle counting is completed by dedicated sensors such as magnetic loops, microwave detectors, or ultrasonic detectors. However, these sensors have limitations: the collected data are overly simple, and the installation cost is high. With the development of image processing techniques, video-based vehicle counting methods have begun to receive attention as an alternative to traditional sensors.
Vehicle counting methods based on machine vision comprise detection, tracking, and trajectory processing. Existing counting methods can be divided into three classes: regression-based methods, clustering-based methods, and detection (matching) based methods. Regression-based methods learn a regression function from the features of the detection region; clustering-based methods track target features to obtain trajectories and cluster the trajectories to count the objects. These counting methods share some common problems: the video angle is restricted, vehicle trajectories are complex and uncertain, and complex scenes cannot be handled.
Summary of the invention
Aiming at the problems existing in the prior art that the video angle is restricted, the calculation speed is slow, and complex scenes cannot be handled, the present invention provides a traffic flow statistics method for urban traffic signal timing optimization, comprising the following steps:
Step 1: Collect video of a traffic scene, obtain video screenshots, classify and annotate the screenshots, and take the annotated screenshots as the sample set;
Step 2: Train on the sample set obtained in step 1 using the YOLO v3 algorithm to obtain a detection model; input the video of the traffic scene into the detection model to obtain the pixel information of the image and the detection result information of the targets in each frame, where the t-th frame of the video is denoted Frame_t, the frame number t being a positive integer;
Step 3: Create a temporary trajectory list Ts, which is empty at this point. Read Frame_1 of the traffic scene video obtained in step 2 as the current frame, establish a new trajectory for each target detected in Frame_1, and add all new trajectories to Ts. Update Frame_2 as the current frame, take the detection result information of each target in Frame_1 as the corresponding trajectory end-point information of the current frame Frame_2, and proceed to step 4;
Step 4: Let the current frame be Frame_t, so the next frame is Frame_t+1. Match the end-point information of every trajectory in Frame_t against the detection result information of the targets in Frame_t: take the detection result information of each successfully matched target in Frame_t as the corresponding trajectory end-point information in Frame_t+1, continuing the trajectory; take the detection result information of each target in Frame_t that fails to match as the starting point of a new trajectory, create the new trajectory, and add it to Ts, the starting point of the new trajectory in Frame_t then being the trajectory end-point information of Frame_t+1; for each trajectory whose end-point information in Frame_t fails to match, use the KCF algorithm to obtain the predicted position information in Frame_t+1 of the corresponding target in Frame_t, continue the trajectory, and increment the trajectory confidence Timer by 1. When Frame_t is not the last frame of the video, update Frame_t+1 as the current frame and execute step 4; otherwise execute step 5;
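As a compact illustration of steps 3 and 4, the per-frame track update can be sketched as follows. This is not the patent's implementation: the matching rule is reduced to a nearest-center gate (the actual method weights box overlap and center distance), and the KCF prediction for unmatched tracks is replaced by carrying the last position forward.

```python
import math

def update_tracks(tracks, detections, max_dist=30.0):
    """One frame of the track-list update sketched in steps 3 and 4.
    A track is a dict with a list of (cx, cy, w, h) boxes and a
    confidence counter Timer; detections is a list of such boxes."""
    used = set()
    for tr in tracks:
        cx, cy = tr["points"][-1][0], tr["points"][-1][1]
        best_i, best_d = None, max_dist
        for i, det in enumerate(detections):
            if i in used:
                continue
            d = math.hypot(det[0] - cx, det[1] - cy)
            if d < best_d:
                best_i, best_d = i, d
        if best_i is not None:
            used.add(best_i)
            tr["points"].append(detections[best_i])   # continue the track
        else:
            tr["points"].append(tr["points"][-1])     # stand-in for the KCF prediction
            tr["timer"] += 1
    for i, det in enumerate(detections):
        if i not in used:
            tracks.append({"points": [det], "timer": 0})  # start a new track
    return tracks
```

Tracks whose Timer exceeds the screening threshold, or whose end point reaches the video boundary, would then be moved to the complete list TA as in step 5.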
Step 5: Screen the trajectories in Ts to obtain the complete trajectory list TA; set the number of intersection arms, and cluster the start and end points of every trajectory in TA to obtain the cluster center point set and the road center point;
Step 6: Partition the intersection according to the cluster center point set and the road center point obtained in step 5, then calculate the angle of every trajectory and encode every trajectory according to the intersection partitions to obtain the complete trajectory list TB with direction information, and perform count statistics on TB;
Step 7: Apply the Webster timing method to the count statistics obtained in step 6 to calculate the total cycle time and the signal light time for each direction, thereby obtaining the traffic flow information of the traffic scene video.
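Step 7's Webster computation can be illustrated with the classical Webster formulas (a sketch under textbook assumptions; the lost time and flow ratios used below are invented example values, not data from the patent):

```python
def webster_cycle(lost_time, flow_ratios):
    """Webster optimal cycle length C0 = (1.5*L + 5) / (1 - Y), where
    L is the total lost time per cycle in seconds and Y is the sum of
    the critical flow ratios y_i = q_i / s_i over the phases."""
    Y = sum(flow_ratios)
    if Y >= 1.0:
        raise ValueError("intersection oversaturated (Y >= 1)")
    return (1.5 * lost_time + 5.0) / (1.0 - Y)

def green_splits(cycle, lost_time, flow_ratios):
    """Split the effective green time (C - L) among the phases in
    proportion to their flow ratios."""
    Y = sum(flow_ratios)
    return [(cycle - lost_time) * y / Y for y in flow_ratios]
```

With the per-direction counts from step 6, each phase's flow ratio is its counted volume divided by the saturation flow, which is exactly the input that distinguishing left, straight, and right movements makes available.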
Further, step 1 comprises the following sub-steps:
Step 1.1: Collect video of traffic scenes and obtain 5000 video screenshots containing sample images of targets such as buses, trucks, cars, motorcycles, bicycles, and pedestrians;
Step 1.2: Annotate the video screenshots using an image annotation tool, the annotation comprising labeling the category of each target in the image and its position in the image; the annotated screenshots form the sample set.
Further, step 2 comprises the following sub-steps:
Train on the sample set obtained in step 1 using the YOLO v3 algorithm to obtain a detection model; input the video of the traffic scene into the detection model to obtain the pixel information of the image and the detection result information of the targets in each frame. The t-th frame of the video is denoted Frame_t, the frame number t being a positive integer. I_t denotes the pixel information of the image of the t-th frame, comprising the width, height, and area of the picture together with its pixel data. DB_t denotes the detection result of the t-th frame, DB_t = {BB_i, i = 1, 2, ..., n}, where BB_i denotes the i-th target information detected in the t-th frame. The detection result information of each target comprises the midpoint coordinates of the target's bounding box and the width, height, and area of the bounding box.
Further, the matching process in step 4 is as follows: calculate the overlap ratio Overlap between the end-point information B_last of every trajectory T_i and the detection result information BB_i of the corresponding target in the current frame, where Overlap is the ratio of the area of the overlapping region of the two rectangular boxes corresponding to B_last and BB_i to their total occupied area; then calculate the pixel distance Dis between the center points of the two bounding rectangles; finally, from the weighted combination of Overlap and Dis, calculate the matching degree MatchValue that B_last and BB_i are regarded as the same target. If the matching degree is greater than or equal to a threshold the match succeeds; otherwise the match fails. The value range of MatchValue is [0, 1].
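A minimal sketch of this matching degree, with boxes given as (cx, cy, w, h); the weights and the distance normalization max_dist are assumptions, since the patent states only that MatchValue is a weighted combination of Overlap and Dis with range [0, 1]:

```python
import math

def match_value(b_last, bb, w_overlap=0.7, w_dist=0.3, max_dist=100.0):
    """Matching degree in [0, 1] between a track-end box B_last and a
    detection box BB_i.  Overlap is intersection area over union area;
    Dis is the center distance, mapped into [0, 1] via max_dist."""
    ax1, ay1 = b_last[0] - b_last[2] / 2, b_last[1] - b_last[3] / 2
    ax2, ay2 = b_last[0] + b_last[2] / 2, b_last[1] + b_last[3] / 2
    bx1, by1 = bb[0] - bb[2] / 2, bb[1] - bb[3] / 2
    bx2, by2 = bb[0] + bb[2] / 2, bb[1] + bb[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = b_last[2] * b_last[3] + bb[2] * bb[3] - inter
    overlap = inter / union if union > 0 else 0.0
    dis = math.hypot(b_last[0] - bb[0], b_last[1] - bb[1])
    return w_overlap * overlap + w_dist * max(0.0, 1.0 - dis / max_dist)
```

A pair scoring at or above the 0.7 threshold would then be treated as the same target.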
Further, the threshold of MatchValue described in step 4 is set to 0.7.
Further, when trajectory end-point information fails to match in step 4, using the KCF algorithm to obtain the predicted position information in Frame_t+1 of the corresponding target in Frame_t covers the following two situations:
If the predicted position information of the target in Frame_t+1 is obtained, update the trajectory end-point information in Frame_t+1 with the predicted position information, continue the trajectory, and increment Timer by 1;
If the predicted position information of the target in Frame_t+1 is not obtained, copy the trajectory end-point information of Frame_t as the trajectory end-point information of Frame_t+1, continue the trajectory, and increment Timer by 1.
Further, step 5 specifically comprises the following sub-steps:
Step 5.1: Screen the trajectories in Ts under the following conditions: when Timer > 30 for the selected trajectory, or when the midpoint coordinates of the bounding box of the trajectory end-point information lie on the video boundary, delete the selected trajectory from the temporary trajectory list Ts and save it into the complete trajectory list TA, obtaining the complete trajectory list TA;
Step 5.2: Set the number of intersection arms as the cluster number k, input the start and end points of every trajectory in TA into the K-means algorithm for clustering, and output the cluster center point set PA = {P_w, w = 1, ..., k}, where P_w is the w-th cluster center point; take the central point of the cluster center set PA as the road center point PCent.
Further, step 6 comprises the following sub-steps:
Step 6.1: According to the cluster center point set PA and the road center point PCent obtained in step 5, establish a polar coordinate system with PCent = (x1, y1) as the pole and a ray drawn horizontally to the right as the polar axis; the counterclockwise direction is taken as the positive direction of the angle, and the polar angle coordinate θ is in degrees with range (0, 360). For another point P = (x2, y2) in the polar coordinate system, the θ value of P is calculated by the following formulas:
When x2 > x1 and y2 > y1: θ = 360 - 180/pi*arctan((y2-y1)/(x2-x1));
When x2 = x1 and y2 > y1: θ = 270;
When x2 < x1 and y2 > y1: θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
When x2 < x1 and y2 = y1: θ = 180;
When x2 < x1 and y2 < y1: θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
When x2 = x1 and y2 < y1: θ = 90;
When x2 > x1 and y2 < y1: θ = -180/pi*arctan((y2-y1)/(x2-x1));
When x2 > x1 and y2 = y1: θ = 0;
Step 6.2: For each P ∈ PA, use the formulas in step 6.1 to obtain the θ value of each cluster center point in PA; by sorting the θ values of the cluster center points, the partitioning of the intersection is completed;
Step 6.3: Take the start- and end-point information of every trajectory, calculate the angle of every trajectory using the formulas in step 6.1, and encode every trajectory according to the intersection partitions, obtaining the complete trajectory list TB with direction information; perform count statistics on TB for the three directions of turning left, turning right, and going straight.
Further, when the traffic scene in step 6.2 is a crossroad, the number of cluster center points is k = 4. Calculate the angles θ1, θ2, θ3, θ4 corresponding to the four cluster center points and sort them in ascending order: 0 <= θ1 < θ2 < θ3 < θ4 <= 360. Then calculate the partition boundary angles θ1', θ2', θ3', θ4', which are the primary parameters for partitioning the current scene, and sort them in ascending order: 0 <= θ1' < θ2' < θ3' < θ4' <= 360. Divide (θ1', θ2') into area A, (θ2', θ3') into area B, (θ3', θ4') into area C, and (θ4', 360) together with (0, θ1') into area D, completing the partitioning of the intersection.
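Assuming the partition boundaries θ' are taken midway between adjacent sorted cluster-center angles (the patent's exact boundary formula is not reproduced in the text, so the midpoint rule here is an assumption), the area coding can be sketched as:

```python
def partition_boundaries(center_angles):
    """Boundary angles between adjacent sorted cluster-center angles,
    assumed here to be the midpoints (the last one wraps around 360)."""
    a = sorted(center_angles)
    bounds = [(a[i] + a[i + 1]) / 2 for i in range(len(a) - 1)]
    bounds.append(((a[-1] + a[0] + 360.0) / 2) % 360.0)
    return sorted(bounds)

def area_of(theta, bounds):
    """Map an angle to an area label A, B, C, ... according to the
    intervals (b0, b1], (b1, b2], ...; angles outside all intervals
    fall into the wraparound area, as area D does for a crossroad."""
    for i in range(len(bounds) - 1):
        if bounds[i] < theta <= bounds[i + 1]:
            return chr(ord("A") + i)
    return chr(ord("A") + len(bounds) - 1)   # wraparound interval
```

The start-area/end-area pair of a trajectory can then be looked up in a small table to classify the movement as a left turn, right turn, or straight pass.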
The present invention can bring the following beneficial effects:
The present invention has better accuracy and richer data, and provides more abundant traffic parameter information, such as detected vehicle type, density, speed, and traffic accidents; its implementation cost is low and its installation and maintenance are simple. The present invention can be used for accident early warning, congestion prevention, and automatic path planning; in particular, for complex scenes with heavy traffic, the proposed method still performs well. Meanwhile, by obtaining the traffic flow information of a crossroad for different time periods, signal timing can also be performed, which brings significant economic benefit and can improve traffic efficiency.
Brief description of the drawings
Fig. 1 is a regional coding sample image of a traffic scene;
Fig. 2 is a traffic scene sample image;
Fig. 3 is a sample annotation example image;
Fig. 4 is the loss curve of the deep learning training process;
Fig. 5 is a deep learning detection result image;
Fig. 6 is a trajectory display image of the target detection and tracking results;
Fig. 7(a) is a region division example for a crossroad;
Fig. 7(b) is a region division example for a T-junction;
Fig. 8 is the actual traffic flow scene model diagram;
Fig. 9 is the actual traffic scene parameter input diagram;
Fig. 10 is the timing scheme that does not distinguish the driving directions of traffic at the junction;
Fig. 11 is the timing scheme that distinguishes the driving directions of traffic at the junction;
Fig. 12 is evaluation result 1 of the timing scheme that does not distinguish the driving directions at the junction;
Fig. 13 is evaluation result 2 of the timing scheme that does not distinguish the driving directions at the junction;
Fig. 14 is evaluation result 1 of the timing scheme that distinguishes the driving directions at the junction;
Fig. 15 is evaluation result 2 of the timing scheme that distinguishes the driving directions at the junction.
Specific embodiments
The following provides specific embodiments of the present invention. It should be noted that the invention is not limited to the following embodiments; all equivalent transformations made on the basis of the technical solutions of the present application fall within the protection scope of the present invention.
A traffic flow statistics method for urban traffic signal timing optimization comprises the following steps:
Step 1: Collect video of a traffic scene, obtain video screenshots, classify and annotate the screenshots, and take the annotated screenshots as the sample set;
Step 2: Train on the sample set obtained in step 1 using the YOLO v3 algorithm to obtain a detection model; input the video of the traffic scene into the detection model to obtain the pixel information of the image and the detection result information of the targets in each frame, where the t-th frame of the video is denoted Frame_t, the frame number t being a positive integer;
Step 3: Create a temporary trajectory list Ts, which is empty at this point. Read Frame_1 of the traffic scene video obtained in step 2 as the current frame, establish a new trajectory for each target detected in Frame_1, and add all new trajectories to Ts. Update Frame_2 as the current frame, take the detection result information of each target in Frame_1 as the corresponding trajectory end-point information of the current frame Frame_2, and proceed to step 4;
Step 4: Let the current frame be Frame_t, so the next frame is Frame_t+1. Match the end-point information of every trajectory in Frame_t against the detection result information of the targets in Frame_t: take the detection result information of each successfully matched target in Frame_t as the corresponding trajectory end-point information in Frame_t+1, continuing the trajectory; take the detection result information of each target in Frame_t that fails to match as the starting point of a new trajectory, create the new trajectory, and add it to Ts, the starting point of the new trajectory in Frame_t then being the trajectory end-point information of Frame_t+1; for each trajectory whose end-point information in Frame_t fails to match, use the KCF algorithm to obtain the predicted position information in Frame_t+1 of the corresponding target in Frame_t, continue the trajectory, and increment the trajectory confidence Timer by 1. When Frame_t is not the last frame of the video, update Frame_t+1 as the current frame and execute step 4; otherwise execute step 5;
Step 5: Screen the trajectories in Ts to obtain the complete trajectory list TA; set the number of intersection arms, and cluster the start and end points of every trajectory in TA to obtain the cluster center point set and the road center point;
Step 6: Partition the intersection according to the cluster center point set and the road center point obtained in step 5, then calculate the angle of every trajectory and encode every trajectory according to the intersection partitions to obtain the complete trajectory list TB with direction information, and perform count statistics on TB;
Step 7: Apply the Webster timing method to the count statistics obtained in step 6 to calculate the total cycle time and the signal light time for each direction, thereby obtaining the traffic flow information of the traffic scene video.
Specifically, step 1 comprises the following sub-steps:
Step 1.1: As shown in Fig. 2 and Fig. 3, collect video of traffic scenes and obtain 5000 video screenshots containing sample images of targets such as buses, trucks, cars, motorcycles, bicycles, and pedestrians;
Step 1.2: Annotate the video screenshots using an image annotation tool, the annotation comprising labeling the category of each target in the image and its position in the image; the annotated screenshots form the sample set.
Preferably, the annotated video screenshots are scaled to a size of 720 × 480 for convenient processing.
Specifically, step 2 includes following sub-step:
As shown in Figure 4 and shown in Fig. 5, the sample set that step 1 obtains is trained using YOLOV3 algorithm, is detected The video input detection model of traffic scene is obtained the testing result of the Pixel Information and target of image in each frame by model Information, wherein the t frame of video is expressed as Framet, t expression frame number value is positive integer, ItIndicate the pixel of the image of t frame Information, the ItWidth, height and area and Pixel Information including picture, provide basis, DB for clarification of objectivet Indicate the testing result of t frame, and DBt={ BBi, i=1,2 ..., n }, wherein BBiIndicate that t frame detects i-th of target information, The testing result information of target in each frame is obtained, the testing result information of the target includes, in target detection envelope frame Point coordinate (Centx, Centy), width, height, the area of target detection envelope frame;
DBtIt can be sky, representative does not detect target in current image frame.
Finally we are by ItWith DBtIt binds to FrametResult as detection-phase exports, and continues to locate for follow-up phase Reason, obtains detection model.
Specifically, the matching process in step 4 is as follows: calculate the overlap ratio Overlap between the end-point information B_last of every trajectory T_i and the detection result information BB_i of the corresponding target in the current frame, where Overlap is the ratio of the area of the overlapping region of the two rectangular boxes corresponding to B_last and BB_i to their total occupied area; then calculate the pixel distance Dis between the center points of the two bounding rectangles; finally, from the weighted combination of Overlap and Dis, calculate the matching degree MatchValue that B_last and BB_i are regarded as the same target. If the matching degree is greater than or equal to a threshold the match succeeds; otherwise the match fails. The value range of MatchValue is [0, 1].
Preferably, the threshold of MatchValue in step 4 is set to 0.7.
Specifically, when trajectory end-point information fails to match in step 4, using the KCF algorithm to obtain the predicted position information in Frame_t+1 of the corresponding target in Frame_t covers the following two situations:
If the predicted position information of the target in Frame_t+1 is obtained, update the trajectory end-point information in Frame_t+1 with the predicted position information, continue the trajectory, and increment Timer by 1;
If the predicted position information of the target in Frame_t+1 is not obtained, copy the trajectory end-point information of Frame_t as the trajectory end-point information of Frame_t+1, continue the trajectory, and increment Timer by 1.
Specifically, step 5 comprises the following sub-steps:
Step 5.1: Screen the trajectories in Ts under the following conditions: when Timer > 30 for the selected trajectory, or when the midpoint coordinates of the bounding box of the trajectory end-point information lie on the video boundary, delete the selected trajectory from the temporary trajectory list Ts and save it into the complete trajectory list TA, obtaining the complete trajectory list TA and the vehicle trajectories shown in Fig. 1;
Step 5.2: Set the number of intersection arms as the cluster number k, input the start and end points of every trajectory in TA into the K-means algorithm for clustering, and output the cluster center point set PA = {P_w, w = 1, ..., k}, where P_w is the w-th cluster center point; take the central point of the cluster center set PA as the road center point PCent.
Preferably, as shown in Fig. 7, when the traffic scene is a crossroad, a T-junction, or a road section, k is set to 4, 3, and 2 respectively. The road center point in the video scene is then obtained from the k cluster centers PA according to the three cases: for a crossroad, take the diagonal intersection point of the quadrilateral formed by the four cluster center points; for a T-junction, take the geometric center of the triangle formed by the three cluster center points; for a road section, take the midpoint of the line segment connecting the two cluster center points.
Specifically, step 6 comprises the following sub-steps:
Step 6.1: According to the cluster center point set PA and the road center point PCent obtained in step 5, establish a polar coordinate system with PCent = (x1, y1) as the pole and a ray drawn horizontally to the right as the polar axis; the counterclockwise direction is taken as the positive direction of the angle, and the polar angle coordinate θ is in degrees with range (0, 360). For another point P = (x2, y2) in the polar coordinate system, the θ value of P is calculated by the following formulas:
When x2 > x1 and y2 > y1: θ = 360 - 180/pi*arctan((y2-y1)/(x2-x1));
When x2 = x1 and y2 > y1: θ = 270;
When x2 < x1 and y2 > y1: θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
When x2 < x1 and y2 = y1: θ = 180;
When x2 < x1 and y2 < y1: θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
When x2 = x1 and y2 < y1: θ = 90;
When x2 > x1 and y2 < y1: θ = -180/pi*arctan((y2-y1)/(x2-x1));
When x2 > x1 and y2 = y1: θ = 0;
Step 6.2: For each P ∈ PA, use the formulas in step 6.1 to obtain the θ value of each cluster center point in PA; by sorting the θ values of the cluster center points, the partitioning of the intersection is completed;
Step 6.3: Take the start- and end-point information of every trajectory, calculate the angle of every trajectory using the formulas in step 6.1, and encode every trajectory according to the intersection partitions, obtaining the complete trajectory list TB with direction information; perform count statistics on TB for the three directions of turning left, turning right, and going straight.
Preferably, when the traffic scene in step 6.2 is a crossroad, the number of cluster center points is k = 4. Calculate the angles θ1, θ2, θ3, θ4 corresponding to the four cluster center points and sort them in ascending order: 0 <= θ1 < θ2 < θ3 < θ4 <= 360. Then calculate the partition boundary angles θ1', θ2', θ3', θ4', which are the primary parameters for partitioning the current scene, and sort them in ascending order: 0 <= θ1' < θ2' < θ3' < θ4' <= 360. Divide (θ1', θ2') into area A, (θ2', θ3') into area B, (θ3', θ4') into area C, and (θ4', 360) together with (0, θ1') into area D, completing the partitioning of the intersection. Preferably, as shown in Fig. 1 and by analogy, a T-junction is divided into three areas (A, B, C) and a road section into two areas (A, B).
Table 1 is a sample of the detailed traffic flow statistics results obtained from one hour of traffic video.
Embodiment:
Fig. 8 shows the creation and simulation of the actual scene traffic flow model through Synchro. The traffic volume of each lane at the crossroad is obtained by manual counting; according to the road conditions in the actual traffic scene, the saturation traffic volume, the road channelization scheme, and the traffic volume in each direction at each junction are input into the system, as shown in Fig. 9.
The hourly traffic volume of each phase lane at the crossroad, combined with the actual conditions of the junction, is applied to the design of the signal timing scheme; the signal timing is calculated by the Webster method, which is effective in the current signal timing field and requires the relevant parameters of each phase signal to be known. Since right-turning vehicles are not controlled by the signal lights, and traditional counting methods cannot clearly distinguish the traffic volume of each driving direction at a junction but give only the total lane volume, it is first assumed that only the total traffic volume of each phase lane is known, and the signal timing scheme is designed by the Webster method; the design result is shown in Fig. 10. Then, using the counting results of this patent while ignoring right-turning vehicles, the signal timing scheme is designed by the Webster method; the design result is shown in Fig. 11.
Finally, to illustrate the advantages of the present scheme, the simulation results of the model in the above two situations are systematically evaluated and evaluation reports are output; partial reports are shown in Figs. 12, 13, 14, and 15.
Figs. 14 and 15 show the evaluation results obtained when the driving directions at the junction are distinguished by the model of this patent, that is, when the traffic volume of each driving direction is known. According to the U.S. Highway Capacity Manual, LOS (level of service) is divided into eight grades, A to H, according to the vehicle driving conditions at the junction.
Comparison of Fig. 12 and Fig. 14 shows that the LOS grade of the lanes in the timing scheme results is significantly improved after the driving directions at the junction are distinguished. The comparison also clearly shows that the Total Delay (vehicle delay) of each lane is significantly reduced in the timing scheme that distinguishes the driving directions, with the average delay reduced by 2.2 s. Through the optimization of the patent model, the overall grade of the entire junction is also improved by one grade, from grade B to grade A, as can be seen from the comparison of the Intersection LOS grades in Fig. 13 and Fig. 15.

Claims (9)

1. A traffic flow statistics method for urban traffic signal timing, comprising the following steps:

Step 1: collect video of a traffic scene, capture video screenshots, classify and annotate the screenshots, and use the annotated screenshots as a sample set.

Step 2: train a detection model on the sample set from Step 1 using the YOLO V3 algorithm; feed the traffic-scene video into the detection model to obtain, for every frame, the pixel information of the image and the detection result information of the targets, where the t-th frame of the video is denoted Frame_t and the frame number t is a positive integer.

The method is characterized in that it further comprises the following steps:

Step 3: create a temporary trajectory list Ts, initially empty; read Frame_1 of the video obtained in Step 2 as the current frame, create a new trajectory for every target detected in Frame_1, and add all new trajectories to Ts; set Frame_2 as the current frame, take the detection result information of each target in Frame_1 as the corresponding trajectory endpoint information for Frame_2, and go to Step 4.

Step 4: let the current frame be Frame_t and the next frame Frame_{t+1}. Match the endpoint information of every trajectory against the target detection results of Frame_t:
(a) for targets matched successfully, take their detection result information as the corresponding trajectory endpoint information in Frame_{t+1}, continuing the trajectory;
(b) for detection results in Frame_t that fail to match any trajectory, take them as the starting points of new trajectories and add those trajectories to Ts; the starting point of such a new trajectory in Frame_t becomes its trajectory endpoint information for Frame_{t+1};
(c) for trajectories whose endpoint information fails to match any detection, use the KCF algorithm to predict, from the target's position in Frame_t, the corresponding target's position in Frame_{t+1}; continue the trajectory with this predicted position and increment the trajectory confidence counter Timer by 1.
When Frame_t is not the last frame of the video, set Frame_{t+1} as the current frame and repeat Step 4; otherwise go to Step 5.

Step 5: filter the trajectories in Ts to obtain the complete trajectory list TA; set the number of road approaches of the intersection and cluster the start and end points of every trajectory in TA to obtain the set of cluster centre points and the road centre point.

Step 6: partition the intersection according to the cluster centre points and the road centre point obtained in Step 5; compute the angle of every trajectory, encode each trajectory according to the intersection partitions, obtain the complete trajectory list TB with direction information, and run counting statistics on TB.

Step 7: from the counting statistics of Step 6, compute the total cycle time and the signal time for each direction with the Webster timing method, thereby obtaining the traffic flow information of the traffic-scene video.

2. The traffic flow statistics method for urban traffic signal timing according to claim 1, characterized in that Step 1 comprises the following sub-steps:
Step 1.1: collect video of a traffic scene and obtain 5000 video screenshots containing targets such as buses, trucks, cars, motorcycles, bicycles and pedestrians;
Step 1.2: annotate the video screenshots with an image annotation tool, the annotation marking the class and the position of every target in the image; the annotated screenshots form the sample set.

3. The traffic flow statistics method for urban traffic signal timing according to claim 1, characterized in that Step 2 comprises the following sub-step: train a detection model on the sample set from Step 1 using the YOLO V3 algorithm and feed the traffic-scene video into the detection model to obtain, for every frame, the pixel information of the image and the detection result information of the targets, where the t-th frame is denoted Frame_t with the frame number t a positive integer; I_t denotes the pixel information of frame t, comprising the width, height, area and pixel data of the image; DB_t denotes the detection result of frame t, with DB_t = {BB_i, i = 1, 2, ..., n}, where BB_i is the information of the i-th target detected in frame t; the detection result information of each target comprises the centre coordinates, width, height and area of its detection bounding box.

4. The traffic flow statistics method for urban traffic signal timing according to claim 1, characterized in that the matching in Step 4 proceeds as follows: compute the overlap rate Overlap between the endpoint information B_last of each trajectory T_i and the detection result information BB_i of the corresponding target in the current frame, where the overlap rate is the ratio of the area of the overlapping region of the two rectangular boxes corresponding to B_last and BB_i to their total occupied area; then compute the pixel distance Dis between the centre points of the two bounding boxes; finally compute, from a weighted combination of Overlap and Dis, the matching score MatchValue that B_last and BB_i belong to the same target, where MatchValue lies in the range [0, 1]; if MatchValue is greater than or equal to a threshold the match succeeds, otherwise it fails.

5. The traffic flow statistics method for urban traffic signal timing according to claim 4, characterized in that the threshold on MatchValue in Step 4 is set to 0.7.

6. The traffic flow statistics method for urban traffic signal timing according to claim 1, characterized in that when the trajectory endpoint matching in Step 4 fails, obtaining with the KCF algorithm the predicted position in Frame_{t+1} of the target in Frame_t covers the following two cases:
if the predicted position of the target in Frame_{t+1} is obtained, update the trajectory endpoint information for Frame_{t+1} with this predicted position, continue the trajectory, and increment Timer by 1;
if the predicted position of the target in Frame_{t+1} is not obtained, copy the trajectory endpoint information of Frame_t as the trajectory endpoint information for Frame_{t+1}, continue the trajectory, and increment Timer by 1.

7. The traffic flow statistics method for urban traffic signal timing according to claim 1, characterized in that Step 5 comprises the following sub-steps:
Step 7.1 (Step 5.1): filter the trajectories in Ts under the condition that when a trajectory has Timer > 30, or the centre of the detection bounding box in its endpoint information lies on the video boundary, the trajectory is deleted from the temporary list Ts and saved into the complete trajectory list TA, thereby obtaining TA;
Step 7.2 (Step 5.2): set the number of road approaches as the cluster count k, feed the start and end points of every trajectory in TA into the K-means algorithm, and output the set of cluster centre points PA = {P_w, w = 1, ..., k}, where P_w is the w-th cluster centre; take the centre of the set PA as the road centre point PCent.

8. The traffic flow statistics method for urban traffic signal timing according to claim 1, characterized in that Step 6 comprises the following sub-steps:
Step 6.1: using the cluster centre set PA and the road centre point PCent from Step 5, establish a polar coordinate system with PCent = (x1, y1) as the pole, a horizontal ray pointing right as the polar axis, and the counter-clockwise direction as the positive direction of the angle; the polar angle θ is measured in degrees over the range (0, 360). For another point P = (x2, y2), θ is computed as:
when x2 > x1 and y2 > y1: θ = 360 - 180/pi*arctan((y2-y1)/(x2-x1));
when x2 = x1 and y2 > y1: θ = 270;
when x2 < x1 and y2 > y1: θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
when x2 < x1 and y2 = y1: θ = 180;
when x2 < x1 and y2 < y1: θ = 180 - 180/pi*arctan((y2-y1)/(x2-x1));
when x2 = x1 and y2 < y1: θ = 90;
when x2 > x1 and y2 < y1: θ = -180/pi*arctan((y2-y1)/(x2-x1));
when x2 > x1 and y2 = y1: θ = 0.
Step 6.2: for each P ∈ PA, compute its θ value with the formula of Step 6.1, and partition the intersection by sorting the θ values of the cluster centres.
Step 6.3: for the start and end points of each trajectory, compute the trajectory's angle with the formula of Step 6.1, encode the trajectory according to the intersection partitions, obtain the complete trajectory list TB with direction information, and count the trajectories in TB in the three movement classes: left turn, right turn and straight through.

9. The traffic flow statistics method for urban traffic signal timing according to claim 8, characterized in that the traffic scene of Step 6.2 is a crossroads with cluster count k = 4: compute the angles θ1, θ2, θ3, θ4 of the four cluster centres and sort them in ascending order, 0 <= θ1 < θ2 < θ3 < θ4 <= 360; then compute the partition parameters θ1', θ2', θ3', θ4' for the current scene environment and sort them in ascending order, 0 <= θ1' < θ2' < θ3' < θ4' <= 360; partition (θ1', θ2') as zone A, (θ2', θ3') as zone B, (θ3', θ4') as zone C, and (θ4', 360) together with (0, θ1') as zone D, completing the partitioning of the intersection.
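Claims 4 and 5 describe the trajectory-detection matching score. A minimal Python sketch of that score follows; the equal 0.5/0.5 weighting of Overlap and Dis and the 100-pixel distance normalisation are assumptions, since the claims only fix the 0.7 threshold. Boxes are (cx, cy, w, h) as in claim 3.

```python
def overlap(box_a, box_b):
    """Overlap rate: intersection area over total occupied (union) area."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw / 2, bx + bw / 2) - max(ax - aw / 2, bx - bw / 2))
    iy = max(0.0, min(ay + ah / 2, by + bh / 2) - max(ay - ah / 2, by - bh / 2))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def match_value(box_a, box_b, dis_norm=100.0, w_overlap=0.5, w_dis=0.5):
    """Weighted score in [0, 1]: high overlap and small centre distance.

    dis_norm and the 0.5/0.5 weights are illustrative assumptions."""
    dis = ((box_a[0] - box_b[0]) ** 2 + (box_a[1] - box_b[1]) ** 2) ** 0.5
    dis_score = max(0.0, 1.0 - dis / dis_norm)
    return w_overlap * overlap(box_a, box_b) + w_dis * dis_score

def is_match(box_a, box_b, threshold=0.7):
    """Match succeeds when the score reaches the threshold (0.7 per claim 5)."""
    return match_value(box_a, box_b) >= threshold
```

With this weighting, an identical box pair scores 1.0 and clearly succeeds, while widely separated boxes score 0 and fail.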
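Step 5.2 of claim 7 clusters the trajectory start and end points with K-means (k = number of road approaches) and takes the centre of the cluster-centre set PA as the road centre PCent. A self-contained sketch follows; the naive "first k points" initialisation and the Lloyd iteration count are assumptions, as the claims do not fix a particular K-means implementation.

```python
def kmeans(points, k, iters=50):
    """Plain Lloyd iteration over 2-D points; returns k cluster centres."""
    centers = list(points[:k])  # naive deterministic init, fine for a sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest current centre
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                  + (p[1] - centers[i][1]) ** 2)
            clusters[j].append(p)
        # recompute centres; keep the old centre if a cluster empties
        centers = [(sum(x for x, _ in c) / len(c),
                    sum(y for _, y in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def road_center(centers):
    """PCent: the centre of the cluster-centre set PA."""
    return (sum(x for x, _ in centers) / len(centers),
            sum(y for _, y in centers) / len(centers))
```

For a crossroads, feeding the start/end points of all complete trajectories with k = 4 yields one centre per approach, and PCent lands near the middle of the junction.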
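The piecewise angle formula of step 6.1 (claim 8) translates directly to code. The sketch below reproduces each branch of the claim; the 180/pi factor converts arctan from radians to degrees, and the y2 > y1 branches land in (180, 360) because image coordinates grow downward.

```python
import math

def polar_angle(pcent, p):
    """Polar angle θ in degrees, per the piecewise formula of step 6.1.

    pcent is the road centre (the pole); the polar axis points right and
    the positive direction is counter-clockwise."""
    x1, y1 = pcent
    x2, y2 = p
    if x2 == x1:
        if y2 > y1:
            return 270.0
        if y2 < y1:
            return 90.0
        raise ValueError("P coincides with the pole")
    a = math.degrees(math.atan((y2 - y1) / (x2 - x1)))  # 180/pi * arctan(...)
    if x2 > x1 and y2 > y1:
        return 360.0 - a
    if x2 < x1:  # covers y2 > y1, y2 == y1 and y2 < y1 in one expression
        return 180.0 - a
    if y2 < y1:  # here x2 > x1
        return -a
    return 0.0   # x2 > x1 and y2 == y1
```

Step 6.2 then sorts the θ values of the cluster centres to order the approaches, and step 6.3 encodes each trajectory by the partitions containing its start and end angles.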
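Step 7 applies the Webster timing method to the counted movements. A sketch using the standard Webster formulas, C0 = (1.5L + 5)/(1 - Y) for the optimal cycle and g_i = (y_i/Y)(C0 - L) for the effective greens, where y_i = q_i/s_i are the critical flow ratios and L is the total lost time; the flows and saturation flows in the usage example are illustrative assumptions, not values from the patent.

```python
def webster(q, s, lost):
    """Webster signal timing.

    q    -- counted flow per phase (veh/h), from the trajectory statistics
    s    -- saturation flow per phase (veh/h)
    lost -- total lost time per cycle (s)
    Returns (optimal cycle time C0 in seconds, effective green per phase)."""
    y = [qi / si for qi, si in zip(q, s)]   # critical flow ratios
    Y = sum(y)
    if Y >= 1.0:
        raise ValueError("intersection oversaturated (Y >= 1)")
    c0 = (1.5 * lost + 5.0) / (1.0 - Y)     # Webster optimal cycle
    greens = [yi / Y * (c0 - lost) for yi in y]
    return c0, greens

# Illustrative two-phase example: 600 and 400 veh/h against a saturation
# flow of 1800 veh/h per phase and 10 s of lost time per cycle.
c0, greens = webster([600, 400], [1800, 1800], 10.0)
```

With these assumed inputs, Y = 1/3 + 2/9 = 5/9, so C0 = 20/(4/9) = 45 s, and the 35 s of usable green splits 21 s / 14 s in proportion to the flow ratios.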
CN201811540864.7A 2018-12-17 2018-12-17 A kind of traffic flow statistics method towards Optimization Control for Urban Traffic Signals Pending CN109584558A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811540864.7A CN109584558A (en) 2018-12-17 2018-12-17 A kind of traffic flow statistics method towards Optimization Control for Urban Traffic Signals

Publications (1)

Publication Number Publication Date
CN109584558A true CN109584558A (en) 2019-04-05

Family

ID=65929718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811540864.7A Pending CN109584558A (en) 2018-12-17 2018-12-17 A kind of traffic flow statistics method towards Optimization Control for Urban Traffic Signals

Country Status (1)

Country Link
CN (1) CN109584558A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049751A (en) * 2013-01-24 2013-04-17 苏州大学 Improved weighting region matching high-altitude video pedestrian recognizing method
CN104966045A (en) * 2015-04-02 2015-10-07 北京天睿空间科技有限公司 Video-based airplane entry-departure parking lot automatic detection method
AU2011352412B2 (en) * 2010-12-30 2016-07-07 Pelco Inc. Scene activity analysis using statistical and semantic feature learnt from object trajectory data
CN108320510A (en) * 2018-04-03 2018-07-24 深圳市智绘科技有限公司 One kind being based on unmanned plane video traffic information statistical method and system
CN108846854A (en) * 2018-05-07 2018-11-20 中国科学院声学研究所 A kind of wireless vehicle tracking based on motion prediction and multiple features fusion
CN108960286A (en) * 2018-06-01 2018-12-07 深圳市茁壮网络股份有限公司 A kind of target following localization method and device
CN109005409A (en) * 2018-07-27 2018-12-14 浙江工业大学 A kind of intelligent video coding method based on object detecting and tracking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN YA ET AL.: "Research on Intelligent Extraction Method of Traffic Parameters Based on Video" (基于视频的交通参数智能提取方法研究), Science and Technology Innovation Herald (科技创新导报) *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109935080A (en) * 2019-04-10 2019-06-25 武汉大学 A monitoring system and method for real-time calculation of traffic flow on a traffic line
CN109935080B (en) * 2019-04-10 2021-07-16 武汉大学 A monitoring system and method for real-time calculation of traffic flow on a traffic line
CN110033479A (en) * 2019-04-15 2019-07-19 四川九洲视讯科技有限责任公司 Traffic flow parameter real-time detection method based on Traffic Surveillance Video
CN110033479B (en) * 2019-04-15 2023-10-27 四川九洲视讯科技有限责任公司 Traffic flow parameter real-time detection method based on traffic monitoring video
CN111915904A (en) * 2019-05-07 2020-11-10 阿里巴巴集团控股有限公司 Track processing method and device and electronic equipment
CN110319844A (en) * 2019-06-14 2019-10-11 武汉理工大学 For the method for intersection expression and bus or train route object matching under bus or train route cooperative surroundings
CN110319844B (en) * 2019-06-14 2022-12-27 武汉理工大学 Method for intersection expression and vehicle road target matching under vehicle road cooperative environment
CN110633678A (en) * 2019-09-19 2019-12-31 北京同方软件有限公司 Rapid and efficient traffic flow calculation method based on video images
CN110633678B (en) * 2019-09-19 2023-12-22 北京同方软件有限公司 Quick and efficient vehicle flow calculation method based on video image
CN112652161A (en) * 2019-10-12 2021-04-13 阿里巴巴集团控股有限公司 Method and device for processing traffic flow path distribution information and electronic equipment
CN110728842A (en) * 2019-10-23 2020-01-24 江苏智通交通科技有限公司 Abnormal driving early warning method based on reasonable driving range of vehicles at intersection
CN110728842B (en) * 2019-10-23 2021-10-08 江苏智通交通科技有限公司 Abnormal driving early warning method based on reasonable driving range of vehicles at intersection
CN110706266A (en) * 2019-12-11 2020-01-17 北京中星时代科技有限公司 Aerial target tracking method based on YOLOv3
CN110706266B (en) * 2019-12-11 2020-09-15 北京中星时代科技有限公司 Aerial target tracking method based on YOLOv3
CN111223310A (en) * 2020-01-09 2020-06-02 阿里巴巴集团控股有限公司 Information processing method and device and electronic equipment
CN111223310B (en) * 2020-01-09 2022-07-15 阿里巴巴集团控股有限公司 Information processing method and device and electronic equipment
CN111738056B (en) * 2020-04-27 2023-11-03 浙江万里学院 A heavy truck blind spot target detection method based on improved YOLO v3
CN111738056A (en) * 2020-04-27 2020-10-02 浙江万里学院 A blind spot target detection method for heavy trucks based on improved YOLO v3
CN111554105B (en) * 2020-05-29 2021-08-03 浙江科技学院 Intelligent traffic identification and statistics method for complex traffic intersection
CN111554105A (en) * 2020-05-29 2020-08-18 浙江科技学院 An intelligent flow recognition and statistics method for complex traffic intersections
CN111882861A (en) * 2020-06-06 2020-11-03 浙江工业大学 An online traffic incident perception system based on edge-cloud fusion
CN112258745A (en) * 2020-12-21 2021-01-22 上海富欣智能交通控制有限公司 Mobile authorization endpoint determination method, device, vehicle and readable storage medium
CN113052118A (en) * 2021-04-07 2021-06-29 上海浩方信息技术有限公司 Method, system, device, processor and storage medium for realizing scene change video analysis and detection based on high-speed dome camera
CN113112827A (en) * 2021-04-14 2021-07-13 深圳市旗扬特种装备技术工程有限公司 Intelligent traffic control method and intelligent traffic control system
CN113593219A (en) * 2021-06-30 2021-11-02 北京百度网讯科技有限公司 Traffic flow statistical method and device, electronic equipment and storage medium
CN113593219B (en) * 2021-06-30 2023-02-28 北京百度网讯科技有限公司 Traffic flow statistical method and device, electronic equipment and storage medium
CN113327248A (en) * 2021-08-03 2021-08-31 四川九通智路科技有限公司 Tunnel traffic flow statistical method based on video

Similar Documents

Publication Publication Date Title
CN109584558A (en) A kind of traffic flow statistics method towards Optimization Control for Urban Traffic Signals
CN110717433A (en) A traffic violation analysis method and device based on deep learning
CN109859468A (en) Multilane traffic volume based on YOLOv3 counts and wireless vehicle tracking
CN113112830B (en) Signal-controlled intersection clearing method and system based on lidar and trajectory prediction
Song et al. Vehicle behavior analysis using target motion trajectories
US20240013553A1 (en) Infrastructure element state model and prediction
CN107316010A (en) A kind of method for recognizing preceding vehicle tail lights and judging its state
CN114333330B (en) Intersection event detection system based on road side edge holographic sensing
US12148219B2 (en) Method, apparatus, and computing device for lane recognition
CN102013159A (en) High-definition video detection data-based region dynamic origin and destination (OD) matrix acquiring method
CN105046985A (en) Traffic control system for whole segments of main street based on big data
CN106530794A (en) Automatic identification and calibration method of driving road and system thereof
Wang et al. A roadside camera-radar sensing fusion system for intelligent transportation
CN109712401B (en) A method for identifying bottleneck points in composite road network based on floating vehicle trajectory data
CN113011331B (en) Method and device for detecting whether motor vehicle gives way to pedestrians, electronic equipment and medium
CN103177585A (en) Road turning average travel speed calculating method based on floating car data
CN115523934A (en) A vehicle trajectory prediction method and system based on deep learning
CN109272482A (en) A kind of urban road crossing vehicle queue detection system based on sequence image
US20230033314A1 (en) Method and processor circuit for operating an automated driving function with object classifier in a motor vehicle, as well as the motor vehicle
CN110400461A (en) A road network change detection method
Minnikhanov et al. Detection of traffic anomalies for a safety system of smart city
CN114842660A (en) Unmanned lane track prediction method and device and electronic equipment
Suganuma et al. Current status and issues of traffic light recognition technology in autonomous driving system
CN110610118A (en) Traffic parameter acquisition method and device
US11566912B1 (en) Capturing features for determining routes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190405