CN116153086B - Multi-section traffic accident and congestion detection method and system based on deep learning - Google Patents
Multi-section traffic accident and congestion detection method and system based on deep learning
- Publication number
- CN116153086B (application number CN202310429538.3A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- congestion
- camera
- vehicles
- traffic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/065—Traffic control systems for road vehicles by counting the vehicles in a section of the road or in a parking area, i.e. comparing incoming count with outgoing count
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Analytical Chemistry (AREA)
- Chemical & Material Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
Technical Field
The present invention relates to the technical field of traffic control, and in particular to a multi-section traffic accident and congestion detection method and system based on deep learning.
Background Art
As travel demand keeps growing and more and more expressways are built, the workload and difficulty of identifying expressway congestion and accidents by traditional manual means keep increasing. Traditional approaches rely mainly on answering telephone reports or manually reviewing surveillance video; manual review is laborious, difficult and costly in manpower. Even where current schemes use automatic detection based on artificial intelligence, recognition efficiency is relatively low because of limits on computer performance and hardware; raising it requires deploying more GPU servers, which increases construction cost, and when facing a large number of surveillance cameras detection remains slow and real-time performance cannot be guaranteed.
Summary of the Invention
To solve the above problems, the present invention proposes a multi-section traffic accident and congestion detection method and system based on deep learning, which can process massive traffic monitoring data with a small amount of server resources and can accurately identify the direction of road congestion.
To achieve the above object, the present invention adopts the following technical solutions:
One or more embodiments provide a deep learning-based multi-section traffic accident and congestion detection method, comprising the following steps:
accessing the cameras in turn using a polling mechanism together with congestion pre-judgment, and acquiring the vehicle images captured by the cameras;
performing vehicle detection and recognition on the acquired vehicle images with a trained three-class traffic-event model to obtain the traffic congestion status;
acquiring vehicle images of congested road sections, performing recognition with a two-class congestion/accident direction model to identify the lane center line or median strip and the vehicle-head direction, and determining the direction of the congestion or accident;
the three-class traffic-event model sequentially identifies the vehicles in the image, the number of vehicles in the image and the vehicle movement distance, and fuses the vehicle count with the movement distance to obtain the traffic congestion status.
One or more embodiments provide a deep learning-based multi-section traffic accident and congestion detection system, comprising:
a camera polling control module, configured to access the cameras in turn using a polling mechanism together with congestion pre-judgment and to acquire the vehicle images captured by the cameras;
a congestion status recognition module, configured to perform vehicle detection and recognition on the acquired vehicle images with a trained three-class traffic-event model to obtain the traffic congestion status;
a congestion/accident direction recognition module, configured to acquire vehicle images of congested road sections, perform recognition with a two-class congestion/accident direction model to identify the lane center line or median strip and the vehicle-head direction, and determine the direction of the congestion or accident;
the three-class traffic-event model sequentially identifies the vehicles in the image, the number of vehicles in the image and the vehicle movement distance, and fuses the vehicle count with the movement distance to obtain the traffic congestion status.
An electronic device comprises a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are executed by the processor, the steps of the above method are completed.
A computer-readable storage medium is used to store computer instructions; when the computer instructions are executed by a processor, the steps of the above method are completed.
Compared with the prior art, the beneficial effects of the present invention are:
The present invention innovatively adopts a camera polling mechanism to rapidly identify congestion and accident events within massive monitoring data. Real-time detection of all cameras along an expressway section can be achieved with only a small number of GPU servers, guaranteeing both detection efficiency and detection accuracy and solving the problem that a congestion/accident detection service otherwise requires large amounts of server resources. In addition, the two conditions of vehicle heading (approaching or departing) and the left or right side of the median strip are innovatively combined, so that the road on which a congestion or accident event appears in the image can be determined accurately.
The advantages of the present invention, and those of additional aspects, will be described in detail in the following specific embodiments.
Description of the Drawings
The accompanying drawings, which constitute a part of the present invention, are provided for a further understanding of the present invention; the illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute a limitation of it.
Fig. 1 is a flow chart of the multi-section traffic accident and congestion detection method of Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of the multi-section traffic accident and congestion detection flow of Embodiment 1 of the present invention;
Fig. 3 is a road schematic diagram of a congestion-direction recognition example of Embodiment 1 of the present invention.
Detailed Description
The present invention will be further described below in conjunction with the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide a further explanation of the present invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It should also be noted that the terminology used here is only for describing specific embodiments and is not intended to limit exemplary embodiments according to the present invention. As used herein, unless the context clearly dictates otherwise, singular forms are intended to include plural forms as well; furthermore, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof. Where there is no conflict, the embodiments of the present invention and the features in the embodiments may be combined with one another. The embodiments are described in detail below in conjunction with the accompanying drawings.
Embodiment 1
In the technical solutions disclosed in one or more embodiments, as shown in Figs. 1 to 3, the deep learning-based multi-section traffic accident and congestion detection method comprises the following steps:
Step 1: access the cameras in turn using a polling mechanism together with congestion pre-judgment, and acquire the vehicle images captured by the cameras;
Step 2: perform vehicle detection and recognition on the acquired vehicle images with the trained three-class traffic-event model to obtain the traffic congestion status;
Step 3: acquire vehicle images of congested road sections, perform recognition with the two-class congestion/accident direction model to identify the lane center line or median strip and the vehicle-head direction, and determine the direction of the congestion or accident.
Vehicle detection by the three-class traffic-event model includes sequentially identifying the vehicles in the image, the number of vehicles in the image and the vehicle movement distance, and fusing the vehicle count with the movement distance to obtain the traffic congestion status.
In this embodiment, a camera polling mechanism is innovatively used to rapidly identify congestion and accident events within massive monitoring data; only a small number of GPU servers are needed to achieve real-time detection of all cameras along an expressway section, guaranteeing both detection efficiency and detection accuracy and solving the problem that a congestion/accident detection service otherwise requires large amounts of server resources. With 2 to 4 GPU servers, the number of monitored cameras can reach the order of one thousand. Furthermore, conditions such as the approaching direction, the departing direction and the left or right side of the median strip are innovatively combined to accurately determine which road in the image is congested or has an accident.
In step 1, the polling mechanism accesses the cameras one by one and uses a binary (clear/congested) decision to estimate the likelihood of road congestion in each image frame. If the imaging area of the current camera is predicted to be clear, the system switches to the next camera for image acquisition; otherwise, if the area is predicted to be congested, it continues to acquire vehicle image data from the current camera. In other words: poll a camera first, judge whether the area currently imaged by it is congested, switch to the next camera if it is not, and keep acquiring images from the current camera if it is.
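As an illustrative sketch (not part of the patent text), the polling loop of step 1 could look as follows; `get_frame` and `estimate_clear_confidence` are assumed placeholder callables supplied by the surrounding system:

```python
from typing import Any, Callable, List

def poll_cameras(camera_ids: List[str],
                 get_frame: Callable[[str], Any],
                 estimate_clear_confidence: Callable[[Any], float],
                 clear_threshold: float = 0.9) -> List[str]:
    """Visit each camera once; return IDs whose area is not clearly 'clear'.

    Cameras judged clear are skipped immediately; the rest are kept for the
    finer-grained congestion analysis described in the following steps.
    """
    suspicious = []
    for cam in camera_ids:
        frame = get_frame(cam)                      # one frame from the current camera
        sigma1 = estimate_clear_confidence(frame)   # clear-confidence sigma1 in [0, 1]
        if sigma1 >= clear_threshold:
            continue                                # clearly clear: switch to the next camera
        suspicious.append(cam)                      # possibly congested: keep for further analysis
    return suspicious
```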
Specifically, the likelihood of road congestion in each frame is predicted as follows: identify the number of vehicles in the picture, set the value of a clear-confidence coefficient according to that vehicle count, and judge whether the road is clear according to the magnitude of the clear-confidence coefficient.
Whether the road is clear is marked by the clear-confidence coefficient σ1. Since a binary decision is used, the corresponding congestion-confidence coefficient is σ2, with σ1 + σ2 = 1.
In one specific implementation, the clear-confidence coefficient σ1 can be computed as follows:
0 < sum(vehicles) ≤ A: σ1 = 0.9;
A < sum(vehicles) ≤ B: σ1 = 0.7;
B < sum(vehicles) ≤ C: σ1 = 0.4;
C < sum(vehicles): σ1 = 0.2;
where A < B < C are preset values.
In one feasible technical solution, if the vehicle count is at most 5, the clear-confidence coefficient is marked as 0.9; if it is greater than 5 and at most 15, it is marked as 0.7; if it is greater than 15 and at most 20, it is marked as 0.4; and if it is greater than 20, it is marked as 0.2.
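The piecewise rule above, with the example thresholds A = 5, B = 15, C = 20, can be written as a small helper; the function name and the handling of an empty frame are illustrative assumptions:

```python
def clear_confidence(vehicle_count: int, a: int = 5, b: int = 15, c: int = 20) -> float:
    """Map a vehicle count to the clear-confidence coefficient sigma1 (A < B < C)."""
    if vehicle_count <= 0:
        return 1.0          # the rule above starts at 1 vehicle; treat an empty frame as fully clear (assumption)
    if vehicle_count <= a:
        return 0.9
    if vehicle_count <= b:
        return 0.7
    if vehicle_count <= c:
        return 0.4
    return 0.2
```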
In some embodiments, a first threshold and a second threshold are set for the clear-confidence coefficient, with the first threshold greater than the second threshold.
If the computed clear-confidence coefficient is not less than the first threshold, the road in that camera's imaging area is directly marked as clear and the analysis switches directly to the next camera picture. If the clear-confidence coefficient lies between the first and second thresholds, the system keeps acquiring a first set number of frames from the current camera, computes the average clear-confidence coefficient over them, determines from that average whether the current road is congested, and then switches to the next camera for image acquisition. If the clear-confidence coefficient is not greater than the second threshold, the system keeps acquiring a second set number of frames from the current camera, computes the average clear-confidence coefficient, determines from that average whether the current road is congested, and then switches to the next camera for image acquisition;
where the first set number of frames is greater than the second set number of frames.
Specifically, in this embodiment the first threshold of the clear-confidence coefficient may be set to 0.9 and the second threshold to 0.5.
If the current camera's clear-confidence coefficient satisfies σ1 ≥ 0.9, the road covered by that camera is directly marked as clear and the analysis switches directly to the next camera picture.
If the camera's clear-confidence coefficient satisfies 0.9 > σ1 > 0.5, the confidence obtained from a single image is not high enough. In this case the system does not yet switch to the next camera; it acquires 20 consecutive frames from this camera, sums and averages the clear-confidence coefficients of the 20 pictures, and, if the average is not less than 0.5, defines the road as clear, otherwise as congested.
If the camera's clear-confidence coefficient satisfies σ1 ≤ 0.5, the confidence obtained from a single image is extremely low. In this case the system does not switch to the next camera; it acquires 10 consecutive frames from this camera, analyzes the clear-confidence coefficient of each, sums and averages the 10 values, and, if the average is not less than 0.5, defines the road as clear, otherwise as congested.
In this embodiment, two intervals are thus set for the clear-confidence coefficient: the interval not less than the first threshold is determined to be clear, while the interval between the first and second thresholds is the possibly-clear interval, which is monitored with additional frames to ensure detection accuracy.
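A sketch of this two-tier decision, assuming the example thresholds (0.9 / 0.5) and frame counts (20 / 10) and reusing the `clear_confidence` helper sketched above; `sample_counts` stands for whatever routine returns per-frame vehicle counts:

```python
from statistics import mean
from typing import Callable, List

def judge_camera(first_count: int,
                 sample_counts: Callable[[int], List[int]]) -> str:
    """Decide 'clear' or 'congested' for one camera.

    first_count   -- vehicle count of the initial polled frame
    sample_counts -- returns vehicle counts for n additional consecutive frames
    """
    sigma1 = clear_confidence(first_count)   # helper sketched above
    if sigma1 >= 0.9:                        # first threshold: mark clear immediately
        return "clear"
    n_frames = 20 if sigma1 > 0.5 else 10    # more frames for the borderline case
    avg = mean(clear_confidence(c) for c in sample_counts(n_frames))
    return "clear" if avg >= 0.5 else "congested"
```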
In step 2, the three-class traffic-event model is applied to the acquired vehicle images for vehicle detection and recognition to obtain the traffic congestion status.
Specifically, the three-class traffic-event model may be built on the general-purpose Caffe deep-learning framework and classifies each scene as clear, congested, or accident;
the three-class traffic-event model comprises, connected in sequence, a vehicle recognition network, a vehicle counting module, a vehicle movement recognition module and a fusion output module;
the vehicle recognition network is used to recognize the vehicles in the vehicle image and to select each vehicle target with a bounding box;
the vehicle counting module is used to count the vehicle bounding boxes selected by the vehicle recognition network;
the vehicle movement recognition module is used to identify, from the bounding boxes selected by the vehicle recognition network, the movement distance of the same vehicle across adjacent frames;
the fusion output module is used to judge congestion from the vehicle displacement, to judge whether an accident has occurred, and/or to identify the congestion mileage.
In some embodiments, the vehicle recognition network realizes the recognition and detection of vehicles of various types through a detection algorithm, and a vehicle recognition and detection model is established for identifying the number and positions of vehicles. The vehicle recognition network uses the general-purpose Caffe deep-learning framework, and its training process is as follows:
21) obtain the video data collected by the cameras and parse it into image data through a video codec module to form a data set;
the data set contains vehicles of different types.
22) label the vehicles of each type in the images of the data set;
Specifically, 10,000 images may be selected, and the vehicles of each type in these images are labeled.
23) extract the vehicle features from the labeled data and regularize the features;
24) divide the feature-regularized data into a training set and a validation set for the vehicle recognition network;
25) build the vehicle recognition network with the general-purpose Caffe deep-learning framework;
26) train the vehicle recognition network to recognize the vehicle information in each image, and obtain the network parameters after training;
27) test the trained vehicle recognition network on the images of the validation set until the accuracy requirement is met.
The vehicle counting module is configured to count the vehicles in the image. Specifically, for the recognition results output by the vehicle recognition network, a counter sum is set up with an initial value of 0; every time a vehicle is detected the counter is incremented by 1, i.e. sum = sum + 1. After recognition is completed, the value of sum is the number of vehicles in the image. The vehicle count output by this module can be used in the calculation of the clear-confidence coefficient.
The vehicle movement recognition module is used to identify whether a vehicle is moving and can identify the distance moved by the same vehicle across two adjacent frames; specifically, the identification method can be as follows:
2.1) for the current camera, acquire adjacent video frames at a set time interval and extract the video image data;
Optionally, the set time interval can be a few seconds; preferably, it can be set to 1 second.
2.2) compare the two adjacent frames: the pixel displacement of the rectangle with the same size as the vehicle's bounding box is the vehicle's displacement value, and the movement speed can be obtained from the time interval.
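As a hedged sketch, the per-vehicle displacement and speed could be computed from matched bounding-box centers in two adjacent frames; the box format, the ID-based matching and the function names are assumptions, since the patent does not fix them:

```python
import math
from typing import Dict, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

def centre(box: Box) -> Tuple[float, float]:
    x0, y0, x1, y1 = box
    return (x0 + x1) / 2.0, (y0 + y1) / 2.0

def displacements(prev: Dict[int, Box], curr: Dict[int, Box],
                  interval_s: float = 1.0) -> Dict[int, Tuple[float, float]]:
    """For each vehicle id present in both frames, return (pixel displacement, pixel speed)."""
    out = {}
    for vid, box in curr.items():
        if vid not in prev:
            continue
        cx0, cy0 = centre(prev[vid])
        cx1, cy1 = centre(box)
        d = math.hypot(cx1 - cx0, cy1 - cy0)   # pixel displacement between the two frames
        out[vid] = (d, d / interval_s)         # speed in pixels per second
    return out
```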
In some embodiments, the fusion output module realizes the identification of congestion and accidents through the following steps:
211) set a critical value x for the vehicle movement distance;
if the displacement is smaller than the set critical value x, the vehicle can be regarded as moving slowly; if the displacement is zero, it is stationary;
212) compute the displacement value for each vehicle; if more than a set proportion of the vehicles have displacements smaller than the critical value x, the scene is judged to be congested;
Optionally, the set proportion can be 70%: if more than 70% of the vehicles move less than x, the scene is congested.
213) according to the time series of the bounding boxes at the vehicle positions, if a set number of vehicles stop moving, it is judged that an accident may have occurred; if a set number of vehicles stop and the scene is also identified as congested, it is judged that an accident has occurred.
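A sketch of this fusion rule with the example proportion above; the concrete values of x and of the stopped-vehicle count are illustrative parameters, not values fixed by the patent:

```python
from typing import Dict, Tuple

def fuse_events(disp: Dict[int, Tuple[float, float]],
                x: float = 3.0,             # critical displacement in pixels (assumed value)
                slow_ratio: float = 0.7,    # proportion of slow vehicles that implies congestion
                stopped_needed: int = 3) -> str:
    """Return 'clear', 'congested' or 'accident' from per-vehicle (displacement, speed)."""
    if not disp:
        return "clear"
    slow = sum(1 for d, _ in disp.values() if d < x)
    stopped = sum(1 for d, _ in disp.values() if d == 0.0)
    congested = slow / len(disp) > slow_ratio
    if congested and stopped >= stopped_needed:
        return "accident"                   # stopped vehicles inside a congested scene
    return "congested" if congested else "clear"
```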
Further, congestion mileage is identified. Specifically, it is measured from the distances between the cameras that detect congestion, and the congestion length is calculated from the distances between those cameras; the steps are as follows:
21.1) for a camera that has detected congestion, obtain the camera's coordinate information;
21.2) traverse the cameras adjacent to the congested camera to obtain all consecutively congested cameras;
21.3) for the consecutively congested cameras, calculate the distance between each pair of adjacent congested cameras; the sum of these distances is the congestion mileage.
A congested camera is a camera that has detected congestion. The spacing between cameras is calculated from their coordinate information according to the rules by which camera coordinates are assigned; every camera has registered coordinate information in the expressway management system. For example, a camera numbered G35 TV99 K207+340 carries its position information in K207+340 (kilometer stake 207 plus 340 meters).
From the position information of two cameras, the spacing between them can be calculated; for example, if another camera is numbered G35 TV99 K209+342, the distance between the two cameras is (209 - 207) × 1000 + (342 - 340) = 2002 meters.
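A sketch of parsing such stake-mark camera IDs and computing the spacing; the regular expression and function names are assumptions based only on the ID format shown in the example:

```python
import re

_STAKE = re.compile(r"K(\d+)\+(\d+)")

def stake_to_metres(camera_id: str) -> int:
    """Extract the K<km>+<m> stake mark from a camera ID and convert it to meters."""
    m = _STAKE.search(camera_id)
    if m is None:
        raise ValueError(f"no stake mark in camera id: {camera_id!r}")
    km, metres = int(m.group(1)), int(m.group(2))
    return km * 1000 + metres

def camera_distance(cam_a: str, cam_b: str) -> int:
    """Distance in meters between two cameras on the same route."""
    return abs(stake_to_metres(cam_a) - stake_to_metres(cam_b))

# camera_distance("G35 TV99 K207+340", "G35 TV99 K209+342") == 2002
```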
Optionally, a camera that detects congestion is marked as BUSYCAM, and its neighboring cameras are traversed forwards and backwards; if a neighboring camera is also congested, the traversal continues until the next camera no longer detects congestion. For the group of cameras marked as congested by this traversal, the positions of all cameras in the group are recorded and the maximum and minimum positions are obtained; the difference between the maximum and the minimum is the distance spanned by the congested cameras, i.e. the length of the congestion.
Traditional AI congestion detection can only recognize that congestion exists somewhere in the picture and cannot judge the direction of the congestion: congestion on either side of the median strip is attributed to the road section where the camera is located, without determining which side of the median strip, i.e. which specific carriageway, is congested. In addition, existing surveillance cameras are dome (PTZ) cameras that may rotate at any time, which further increases the difficulty of detection.
In step 3, the two-class congestion/accident direction model is used to classify whether the left-side road or the right-side road of the congested section is congested. Specifically, it is configured to perform the following process:
31) use Faster R-CNN to recognize the median strip or the center line;
Faster R-CNN is an object detection algorithm; building on Fast R-CNN, it introduces the RPN candidate-box generation algorithm, which greatly improves detection speed.
32) split the image along the recognized median strip or center line to obtain two new images;
Optionally, OpenCV can be used to split the image along the median strip;
OpenCV (Open Source Computer Vision Library) is an open-source API library for computer vision.
33) for each split image, compute the clear-confidence coefficient from the number of vehicles in the image and judge whether the road corresponding to that image is congested;
34) recognize the vehicle-head direction in the image and determine the direction of the congested road.
The clear-confidence coefficient is computed in the same way as in step 1 and is not repeated here.
Optionally, whether a road is clear can be judged against the set thresholds, based on the magnitude of the clear-confidence coefficient, in the same way as described above. As a specific example: if σ1 ≥ 0.9, the area corresponding to the image is clear; if the confidence coefficient of either side satisfies 0.5 < σ1 < 0.9, 20 consecutive frames from that camera are further analyzed and averaged as above, and that side is defined as clear if the average is not less than 0.5 and as congested otherwise. If only the left-side average falls below 0.5, left-side congestion is determined; if only the right-side average falls below 0.5, right-side congestion is determined; if the averages of both sides fall below 0.5, two-way congestion is determined.
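A sketch of this per-side decision, reusing the `clear_confidence` helper above; the 0.5 cutoff mirrors the second threshold of step 1, and how the per-side vehicle counts are obtained is left to the caller:

```python
from statistics import mean
from typing import List

def side_status(counts: List[int]) -> str:
    """'clear' or 'congested' for one side of the median, from per-frame vehicle counts."""
    avg = mean(clear_confidence(c) for c in counts)
    return "clear" if avg >= 0.5 else "congested"

def congestion_sides(left_counts: List[int], right_counts: List[int]) -> str:
    left, right = side_status(left_counts), side_status(right_counts)
    if left == "congested" and right == "congested":
        return "both sides congested"
    if left == "congested":
        return "left side congested"
    if right == "congested":
        return "right side congested"
    return "clear"
```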
The two-class congestion/accident direction model comprises a center-line/median-strip recognition module, a vehicle-head recognition module and an accident-direction determination module.
This embodiment innovatively uses the center line or central median strip together with vehicle-head direction detection to determine which of the two monitored roads is congested.
The center-line/median-strip recognition module is specifically configured to recognize and extract the central median strip or center line with the Faster R-CNN algorithm, and then, with respect to the recognized median strip or center line, to identify whether the left side or the right side of the road is congested.
Taking the median strip as an example, the expressway median strip is recognized by image-feature recognition: the expressway road surface is gray-black, the central median strip is green, and the central guardrail is mostly silver or green, so there is a clear color boundary between the two sides and the middle; the median strip can therefore be recognized from this color change with the Faster R-CNN algorithm.
Vehicle-head recognition module: configured to recognize the vehicle-head orientation from adjacent frames captured by the camera with the Faster R-CNN algorithm, and thereby to identify whether the traffic is approaching or departing.
Optionally, the vehicle-head orientation can be judged as follows: in chronological order, if the outline of the same vehicle grows larger across two adjacent frames, the end of the vehicle visible in the image is the head; if the outline shrinks, it is the rear.
The accident-direction determination module is configured to determine the direction of the congestion or accident from the current orientation of the camera, from whether the left or the right side of the lane is recognized as congested, and from the vehicle-head direction.
A specific recognition example is shown in Fig. 3: if the left side of the median strip is congested and the traffic is approaching, Road 1 is congested; if the camera is rotated 180°, the camera recognizes congestion on the right side with departing traffic, and it can likewise be judged that Road 1 is congested.
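The Fig. 3 logic can be summarized as a small mapping; the function below is a hedged reading of that single example (left side + approaching traffic, or right side + departing traffic, both indicate Road 1), not a definitive specification for every camera orientation:

```python
def congested_road(side: str, heading: str) -> str:
    """Map (congested side of the median strip, traffic heading) to the physical road.

    side:    'left' or 'right' of the median strip as seen by the camera
    heading: 'approaching' or 'departing' relative to the camera
    """
    if (side, heading) in {("left", "approaching"), ("right", "departing")}:
        return "road 1"
    if (side, heading) in {("right", "approaching"), ("left", "departing")}:
        return "road 2"
    raise ValueError(f"unexpected combination: {side!r}, {heading!r}")
```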
A further technical solution also includes obtaining the operators' feedback on the traffic accident or congestion data reported by the system and correcting deviations in the accident or congestion judgments. Specifically, a deviation data set is built and used to further train the model so as to strengthen the accuracy of accident or congestion recognition. In the initial stage there is manual participation: the machine scoring results are verified, the congestion and accident probabilities are adjusted manually, and the model is re-corrected according to the adjusted scores to improve its accident-prediction performance.
Through the classification algorithm, this embodiment classifies road conditions into normal driving, accident and congestion, and also realizes congestion-mileage identification and congestion-direction identification, enabling finer-grained road-condition recognition; at the same time, the polling mechanism achieves efficient detection and improves the real-time performance of detection.
Embodiment 2
Based on Embodiment 1, this embodiment provides a deep learning-based multi-section traffic accident and congestion detection system, comprising:
a camera polling control module, configured to access the cameras in turn using a polling mechanism together with congestion pre-judgment and to acquire the vehicle images captured by the cameras;
a congestion status recognition module, configured to perform vehicle detection and recognition on the acquired vehicle images with the trained three-class traffic-event model to obtain the traffic congestion status;
a congestion/accident direction recognition module, configured to acquire vehicle images of congested road sections, perform recognition with the two-class congestion/accident direction model to identify the lane center line or median strip and the vehicle-head direction, and determine the direction of the congestion or accident;
the three-class traffic-event model sequentially identifies the vehicles in the image, the number of vehicles in the image and the vehicle movement distance, and fuses the vehicle count with the movement distance to obtain the traffic congestion status.
It should be noted that the modules in this embodiment correspond one-to-one with the steps in Embodiment 1 and their specific implementation processes are the same, which will not be repeated here.
Embodiment 3
This embodiment provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are executed by the processor, the steps of the method of Embodiment 1 are completed.
Embodiment 4
This embodiment provides a computer-readable storage medium for storing computer instructions; when the computer instructions are executed by a processor, the steps of the method of Embodiment 1 are completed.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within its protection scope.
Although the specific embodiments of the present invention have been described above in conjunction with the accompanying drawings, this does not limit the protection scope of the present invention. Those skilled in the art should understand that, on the basis of the technical solutions of the present invention, various modifications or variations that can be made without creative effort still fall within the protection scope of the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310429538.3A CN116153086B (en) | 2023-04-21 | 2023-04-21 | Multi-section traffic accident and congestion detection method and system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310429538.3A CN116153086B (en) | 2023-04-21 | 2023-04-21 | Multi-section traffic accident and congestion detection method and system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116153086A (en) | 2023-05-23 |
CN116153086B (en) | 2023-07-18 |
Family
ID=86354664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310429538.3A Expired - Fee Related CN116153086B (en) | 2023-04-21 | 2023-04-21 | Multi-section traffic accident and congestion detection method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116153086B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7458547B1 (en) | 2023-11-07 | 2024-03-29 | 株式会社インターネットイニシアティブ | Information processing device, system and method |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117275236B (en) * | 2023-10-11 | 2024-04-05 | 宁波宁工交通工程设计咨询有限公司 | Traffic jam management method and system based on multi-target recognition |
CN119091621A (en) * | 2024-08-23 | 2024-12-06 | 长沙数智科技集团有限公司 | A cloud platform-based intelligent traffic management system, method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106758615A (en) * | 2016-11-25 | 2017-05-31 | 上海市城市建设设计研究总院 | Improve the method to set up of high-density development section road network traffic efficiency |
CN113936458A (en) * | 2021-10-12 | 2022-01-14 | 中国联合网络通信集团有限公司 | Method, device, equipment and medium for judging congestion of expressway |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2442291B1 (en) * | 2010-10-13 | 2013-04-24 | Harman Becker Automotive Systems GmbH | Traffic event monitoring |
JP2018081504A (en) * | 2016-11-16 | 2018-05-24 | 富士通株式会社 | Traffic control device, traffic control method, and traffic control program |
CN107742418B (en) * | 2017-09-29 | 2020-04-24 | 东南大学 | Automatic identification method for traffic jam state and jam point position of urban expressway |
CN110688922A (en) * | 2019-09-18 | 2020-01-14 | 苏州奥易克斯汽车电子有限公司 | Deep learning-based traffic jam detection system and detection method |
CN111899514A (en) * | 2020-08-19 | 2020-11-06 | 陇东学院 | Artificial intelligence's detection system that blocks up |
CN112907981B (en) * | 2021-03-25 | 2022-03-29 | 东南大学 | Shunting device for shunting traffic jam vehicles at intersection and control method thereof |
- 2023-04-21 CN CN202310429538.3A patent/CN116153086B/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106758615A (en) * | 2016-11-25 | 2017-05-31 | 上海市城市建设设计研究总院 | Improve the method to set up of high-density development section road network traffic efficiency |
CN113936458A (en) * | 2021-10-12 | 2022-01-14 | 中国联合网络通信集团有限公司 | Method, device, equipment and medium for judging congestion of expressway |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7458547B1 (en) | 2023-11-07 | 2024-03-29 | 株式会社インターネットイニシアティブ | Information processing device, system and method |
Also Published As
Publication number | Publication date |
---|---|
CN116153086A (en) | 2023-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116153086B (en) | Multi-section traffic accident and congestion detection method and system based on deep learning | |
WO2021170030A1 (en) | Method, device, and system for target tracking | |
CN105139425B (en) | A kind of demographic method and device | |
CN104616502B (en) | Car license recognition and alignment system based on combination type bus or train route video network | |
WO2017156772A1 (en) | Method of computing passenger crowdedness and system applying same | |
CN114613143B (en) | Road vehicle counting method based on YOLOv3 model | |
CN104361332B (en) | A kind of face eye areas localization method for fatigue driving detection | |
CN103986910A (en) | A method and system for counting passenger flow based on intelligent analysis camera | |
CN105512640A (en) | Method for acquiring people flow on the basis of video sequence | |
CN109191830A (en) | A kind of congestion in road detection method based on video image processing | |
CN101196991A (en) | Method and system for counting dense passenger flow and automatic detection of pedestrian walking speed | |
CN103617410A (en) | Highway tunnel parking detection method based on video detection technology | |
CN109784254A (en) | A kind of method, apparatus and electronic equipment of rule-breaking vehicle event detection | |
CN105844229A (en) | Method and system for calculating passenger crowdedness degree | |
WO2023155482A1 (en) | Identification method and system for quick gathering behavior of crowd, and device and medium | |
CN114648748A (en) | Motor vehicle illegal parking intelligent identification method and system based on deep learning | |
CN118072451B (en) | Community security early warning method and system based on artificial intelligence | |
CN109166336B (en) | A real-time road condition information collection and push method based on blockchain technology | |
CN114299438A (en) | Tunnel parking event detection method integrating traditional parking detection and neural network | |
CN115909223B (en) | A method and system for matching WIM system information with surveillance video data | |
CN112766038B (en) | Vehicle tracking method based on image recognition | |
CN118781827A (en) | A statistical method, system and related equipment for calculating highway congestion level | |
CN116311166A (en) | Traffic obstacle recognition method and device and electronic equipment | |
CN115352454A (en) | An interactive auxiliary safety driving system | |
CN114973169A (en) | Vehicle classification and counting method and system based on multi-target detection and tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20230718 |