CN116153086A - Multi-path traffic accident and congestion detection method and system based on deep learning - Google Patents

Info

Publication number
CN116153086A
Authority
CN
China
Prior art keywords
vehicle
congestion
vehicles
image
camera
Prior art date
Legal status
Granted
Application number
CN202310429538.3A
Other languages
Chinese (zh)
Other versions
CN116153086B (en)
Inventor
巩华良 (Gong Hualiang)
赵珂 (Zhao Ke)
刘伟杰 (Liu Weijie)
Current Assignee
Qilu Expressway Co ltd
Original Assignee
Qilu Expressway Co ltd
Priority date
Filing date
Publication date
Application filed by Qilu Expressway Co ltd
Priority to CN202310429538.3A
Publication of CN116153086A
Application granted
Publication of CN116153086B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/065 Traffic control systems for road vehicles by counting the vehicles in a section of the road or in a parking area, i.e. comparing incoming count with outgoing count
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of traffic control and provides a deep-learning-based method and system for detecting traffic accidents and congestion on multiple road sections. The detection method comprises the following steps: accessing the cameras in sequence using a polling mechanism with congestion pre-judgment, and acquiring the vehicle images captured by the cameras; performing vehicle detection and recognition on the acquired images with a trained three-class traffic-event model to obtain the traffic congestion status; for a congested road section, acquiring its vehicle images and applying a two-class congestion/accident-direction model to recognize the lane center line or median strip and the vehicle head orientation, thereby determining the direction of the congestion or accident. The three-class traffic-event model recognizes, in turn, the vehicles in the image, the number of vehicles in the image, and the distance the vehicles have moved, and fuses the vehicle count with the movement distance to obtain the congestion status. The detection method can process massive traffic-monitoring data with a small amount of server resources and accurately determine the direction of road congestion.

Description

Multi-path traffic accident and congestion detection method and system based on deep learning
Technical Field
The invention relates to the technical field of traffic control, and in particular to a deep-learning-based method and system for detecting traffic accidents and congestion on multiple road sections.
Background
As people's travel demand keeps growing, more and more expressways are being built, and the workload and difficulty of identifying expressway congestion and accidents by traditional manual means grow accordingly. Traditional means rely mainly on answering phone calls or manually reviewing surveillance video; the workload and difficulty of manual review are enormous and consume a great deal of labor cost. Even where some current schemes use artificial-intelligence automated detection, they are limited by computer performance and hardware: recognition efficiency is low and needs improvement, and achieving it requires deploying more GPU servers, which raises construction costs. As the number of surveillance cameras grows to a large scale, detection becomes slow and real-time performance cannot be guaranteed.
Disclosure of Invention
To solve the above problems, the invention provides a deep-learning-based multi-road-section traffic accident and congestion detection method and system, which can process massive traffic-monitoring data with a small amount of server resources and accurately identify the direction of road congestion.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
one or more embodiments provide a multi-path traffic accident and congestion detection method based on deep learning, including the following steps:
sequentially accessing the cameras by adopting a polling mechanism and congestion pre-judgment, and acquiring vehicle images acquired by the cameras;
according to the acquired vehicle image, a trained traffic event three-classification model is adopted to carry out vehicle detection and identification, and a traffic jam condition is obtained;
acquiring a vehicle image of a traffic jam road section, identifying by adopting a second classification model of the jam accident direction, obtaining a lane middle line or a separation zone and a vehicle head direction by identification, and determining the jam accident direction;
and the traffic event three-classification model sequentially identifies vehicles in the image, the number of the vehicles in the image and the moving distance of the vehicles, and fuses the number of the vehicles and the moving distance of the vehicles to obtain traffic jam conditions.
One or more embodiments provide a deep-learning-based multi-road-section traffic accident and congestion detection system, comprising:
a camera polling control module, configured to access the cameras in sequence using a polling mechanism with congestion pre-judgment and to acquire the vehicle images captured by the cameras;
a congestion status recognition module, configured to perform vehicle detection and recognition on the acquired vehicle images with a trained three-class traffic-event model to obtain the traffic congestion status;
a congestion/accident direction recognition module, configured to acquire vehicle images of a congested road section and recognize them with the two-class congestion/accident-direction model, which identifies the lane center line or median strip and the vehicle head orientation and thereby determines the direction of the congestion or accident;
wherein the three-class traffic-event model recognizes, in turn, the vehicles in the image, the number of vehicles in the image, and the distance the vehicles have moved, and fuses the vehicle count with the movement distance to obtain the traffic congestion status.
An electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor; when executed by the processor, the computer instructions perform the steps of the method described above.
A computer readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the method described above.
Compared with the prior art, the invention has the following beneficial effects:
The invention innovatively adopts a camera polling mechanism to rapidly identify congestion and accident events in massive monitoring data; real-time detection over all cameras on an expressway section can be achieved with only a small number of GPU servers, ensuring both detection efficiency and detection accuracy and solving the problem that congestion/accident detection services otherwise require large amounts of server resources. The invention also innovatively uses the vehicle travel direction (oncoming or departing) together with the left/right side of the median strip as conditions, so that the road in the image on which a congestion or accident event occurs can be accurately determined.
The advantages of the present invention, as well as additional aspects of the invention, will be described in detail in the following detailed examples.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
Fig. 1 is a flowchart of a multi-path traffic accident and congestion detection method according to embodiment 1 of the present invention;
fig. 2 is a schematic diagram of a multi-path traffic accident and congestion detection flow according to embodiment 1 of the present invention;
fig. 3 is a schematic diagram of the road in the congestion-direction identification example of embodiment 1 of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the present invention. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof. It should be noted that, where no conflict arises, the embodiments of the present invention and the features of the embodiments may be combined with each other. The embodiments will be described in detail below with reference to the accompanying drawings.
Example 1
In the technical solutions disclosed in one or more embodiments, as shown in figs. 1 to 3, a deep-learning-based multi-road-section traffic accident and congestion detection method includes the following steps:
Step 1: access the cameras in sequence using a polling mechanism with congestion pre-judgment, and acquire the vehicle images captured by each camera.
Step 2: according to the acquired vehicle images, perform vehicle detection and recognition with a trained three-class traffic-event model to obtain the traffic congestion status.
Step 3: acquire vehicle images of the congested road section and recognize them with the two-class congestion/accident-direction model, which identifies the lane center line or median strip and the vehicle head orientation and thereby determines the direction of the congestion or accident.
Vehicle detection by the three-class traffic-event model comprises recognizing, in turn, the vehicles in the image, the number of vehicles in the image, and the distance the vehicles have moved, and fusing the vehicle count with the movement distance to obtain the traffic congestion status.
This embodiment innovatively adopts a camera polling mechanism to rapidly identify congestion and accident events in massive monitoring data; real-time detection over all cameras on an expressway section can be achieved with only a small number of GPU servers, ensuring both detection efficiency and detection accuracy and solving the problem that congestion/accident detection services otherwise require large amounts of server resources. With 2 to 4 GPU servers, the number of cameras handled can reach the order of thousands. In addition, conditions such as the oncoming/departing direction and the left/right side of the median strip are innovatively used, so that which road in the image is congested or has an accident event can be accurately determined.
In step 1, the method of accessing the cameras in sequence with a polling mechanism uses a binary (clear/congested) decision, predicting the likelihood of road congestion in each frame. If the current camera's coverage area is predicted to be clear, the system switches to the next camera for image acquisition; otherwise, if the coverage area is predicted to be congested, it continues to acquire vehicle image data from the current camera. That is: the system polls and accesses a camera, judges whether the image area captured by that camera is congested, switches to polling the next camera if it is not congested, and keeps acquiring images from the current camera if it is congested.
Specifically, the likelihood of road congestion in each frame is predicted by recognizing the number of vehicles in the image, setting the value of a smooth-traffic confidence coefficient according to that number, and judging whether the road is clear from the value of the coefficient.
Whether the road is clear is marked by the smooth-traffic confidence coefficient σ1; because a binary decision is used, the congestion confidence coefficient is σ2, and σ1 + σ2 = 1.
In a specific implementation, the smooth-traffic confidence coefficient σ1 may be computed from the vehicle count sum as follows:
σ1 = 0.9 if 0 < sum ≤ A;
σ1 = 0.7 if A < sum ≤ B;
σ1 = 0.4 if B < sum ≤ C;
σ1 = 0.2 if sum > C;
where A < B < C are set values.
In one technical scheme: if the number of vehicles is no more than 5, the smooth-traffic confidence coefficient is marked 0.9; if greater than 5 and no more than 15, it is marked 0.7; if greater than 15 and no more than 20, it is marked 0.4; and if greater than 20, it is marked 0.2.
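The vehicle-count-to-coefficient mapping above can be sketched as follows (a minimal sketch: the thresholds 5/15/20 and the coefficient values follow the example in the text, and the function name is illustrative; the congestion coefficient is derived from σ1 + σ2 = 1):

```python
def smooth_confidence(num_vehicles, a=5, b=15, c=20):
    """Map a vehicle count to (sigma1, sigma2), the smooth-traffic and
    congestion confidence coefficients, per the piecewise rule in the text."""
    if num_vehicles <= a:
        sigma1 = 0.9
    elif num_vehicles <= b:
        sigma1 = 0.7
    elif num_vehicles <= c:
        sigma1 = 0.4
    else:
        sigma1 = 0.2
    # sigma1 + sigma2 = 1 by the binary-decision rule
    return sigma1, round(1.0 - sigma1, 2)
```

The thresholds a, b, c are the set values A < B < C of the formula and would be tuned per camera viewpoint.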
In some embodiments, a first threshold and a second threshold of the smooth-traffic confidence coefficient are set, the first threshold being greater than the second.
If the computed coefficient is not less than the first threshold, the road in the camera's coverage area is directly marked as clear and the system switches directly to analyzing the next camera's picture. If the coefficient lies between the first and second thresholds, the system continues to acquire a first set number of frames from the current camera, computes the average of their smooth-traffic confidence coefficients, determines from that average whether the current road is congested, and then switches to the next camera for image acquisition. If the coefficient is not greater than the second threshold, the system continues to acquire a second set number of frames from the current camera, computes the average coefficient, determines from that average whether the current road is congested, and then switches to the next camera for image acquisition.
The first set number of frames is greater than the second set number of frames.
Specifically, in this embodiment the first threshold of the smooth-traffic confidence coefficient may be set to 0.9 and the second threshold to 0.5.
If the current camera's coefficient satisfies σ1 ≥ 0.9, the road covered by that camera is directly marked as clear and the system switches directly to the next camera's picture for analysis.
If the camera's coefficient satisfies 0.9 > σ1 > 0.5, the confidence that the image shows clear traffic is not high. In this case the system does not yet switch to the next camera's picture; it continues to acquire 20 consecutive frames from the current camera, sums the smooth-traffic confidence coefficients of the 20 images, and takes their average, denoted σ̄1. If σ̄1 is not less than the set threshold, the road is defined as clear; otherwise it is defined as congested.
If the camera's coefficient satisfies σ1 ≤ 0.5, the confidence that the image shows clear traffic is extremely low. In this case the system again does not switch to the next camera's picture; it continues to acquire 10 consecutive frames from the current camera, analyzes the smooth-traffic confidence coefficient of each, sums the coefficients of the 10 images, and takes their average, denoted σ̄1. If σ̄1 is not less than the set threshold, the road is defined as clear; otherwise it is defined as congested.
In this embodiment, two intervals are thus set for the smooth-traffic confidence coefficient: the interval not less than the first threshold is directly judged clear, while the interval between the first and second thresholds is only possibly clear and is therefore given focused monitoring with additional frames to ensure monitoring accuracy.
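One polling step for a single camera, combining the two thresholds and the frame-averaging rules above, might look like the following sketch (the cut-off `avg_threshold` applied to the averaged coefficient is an assumption, since the text leaves that value unspecified):

```python
def poll_decision(sigma1_first, frame_sigmas_20, frame_sigmas_10,
                  t1=0.9, t2=0.5, avg_threshold=0.5):
    """Decide clear/congested for the current camera, then the poller
    moves on to the next camera either way.

    sigma1_first   -- coefficient of the first analyzed frame
    frame_sigmas_20 -- coefficients of 20 follow-up frames (uncertain zone)
    frame_sigmas_10 -- coefficients of 10 follow-up frames (low-confidence zone)
    """
    if sigma1_first >= t1:            # >= first threshold: confidently clear
        return "clear"
    if sigma1_first > t2:             # between thresholds: average 20 frames
        avg = sum(frame_sigmas_20) / len(frame_sigmas_20)
    else:                             # <= second threshold: average 10 frames
        avg = sum(frame_sigmas_10) / len(frame_sigmas_10)
    return "clear" if avg >= avg_threshold else "congested"
```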
In step 2, the three-class traffic-event model performs vehicle detection and recognition on the acquired vehicle images to obtain the traffic congestion status.
Specifically, the three-class traffic-event model may be built on the Caffe general deep-learning framework and classifies scenes as clear, congested, or accident.
The model comprises, connected in sequence, a vehicle recognition network, a vehicle counting module, a vehicle movement recognition module, and a fusion output module:
the vehicle recognition network recognizes the vehicles in a vehicle image and selects each vehicle target with a bounding box;
the vehicle counting module counts the vehicle bounding boxes selected by the recognition network;
the vehicle movement recognition module identifies, from the bounding boxes selected by the recognition network, the distance the same vehicle has moved between adjacent frames;
the fusion output module judges the congestion status from the vehicle displacements, judges whether an accident has occurred, and/or identifies the congestion mileage.
In some embodiments, the vehicle recognition network performs recognition and detection of various vehicle types with a detection algorithm, establishing a vehicle recognition and detection model that identifies the number and positions of vehicles. The network is built on the Caffe general deep-learning framework, and its training process is as follows:
21) acquire video data from the cameras and parse it into image data via a video encoding/decoding module to serve as the dataset;
the dataset contains vehicles of different types.
22) label each type of vehicle in the dataset images;
specifically, 10000 images may be selected and the various vehicle types in them labeled.
23) extract vehicle features from the labeled data and regularize the features;
24) split the feature-regularized data into a training set and a validation set for the vehicle recognition network;
25) construct the vehicle recognition network on the Caffe general deep-learning framework;
26) train the network to recognize the vehicle information in each image, obtaining the trained network parameters;
27) test the trained network on the validation-set images until the accuracy requirement is met.
The vehicle counting module counts the vehicles in an image. Specifically, a counter sum is kept for the recognition results output by the vehicle recognition network; its initial value is 0, and each detected vehicle increments it by 1, i.e. sum = sum + 1. When recognition is complete, sum is the number of vehicles in the image. The count output by this module can be used to compute the smooth-traffic confidence coefficient.
The vehicle movement recognition module identifies whether a vehicle is moving and the distance the same vehicle moves between two adjacent frames. Specifically, the method may be as follows:
2.1) for the current camera, acquire adjacent video frames at a set time interval and extract the image data;
optionally, the interval may be a few seconds, and preferably 1 second.
2.2) compare the two adjacent frames: the pixel displacement of a vehicle's same-sized bounding-box rectangle is that vehicle's displacement value, and the moving speed can be obtained from the interval time.
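The displacement computation can be sketched as follows (a sketch under the assumption that boxes at the same list index in consecutive frames belong to the same vehicle; a real system would first match boxes across frames with a tracker, e.g. by IoU):

```python
import math

def box_center(box):
    """Center of an axis-aligned bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def vehicle_displacements(boxes_t, boxes_t1, interval_s=1.0):
    """Per-vehicle pixel displacement and speed between two adjacent frames.

    boxes_t / boxes_t1 -- lists of bounding boxes in frame t and t+1,
    index-aligned per vehicle; interval_s is the frame interval (1 s
    preferred in the text). Returns [(displacement_px, speed_px_per_s)].
    """
    results = []
    for b0, b1 in zip(boxes_t, boxes_t1):
        (cx0, cy0), (cx1, cy1) = box_center(b0), box_center(b1)
        d = math.hypot(cx1 - cx0, cy1 - cy0)  # pixel displacement of the box
        results.append((d, d / interval_s))   # speed from the interval time
    return results
```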
in some embodiments, the fusion output module implements congestion and accident identification, including the steps of:
211 Setting a critical value x of the moving distance of the vehicle;
if the displacement is smaller than the set critical value x, the vehicle can be considered to move slowly, and if the displacement is zero, the vehicle is stationary;
212 For each vehicle, calculating a displacement value, and if the vehicles exceeding a set proportion are smaller than a critical value x, judging that the vehicle is jammed;
alternatively, the set proportion may be 70%, and if more than 70% of the vehicle moving distance is less than x, the vehicle is jammed.
213 According to the time sequence, detecting a vehicle target frame of the vehicle position, and judging that accidents are possible if a set number of vehicles stop and do not move; the set number of vehicles stop and are identified as congestion, and the accident is determined to occur;
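A minimal sketch of the fusion rule above (the 70% proportion and the slow/stationary distinction follow the text; the critical value x = 5.0 px and the stationary count stop_count = 3 are illustrative assumptions):

```python
def fuse(displacements, x=5.0, proportion=0.7, stop_count=3):
    """Fuse per-vehicle displacement values into clear/congested/accident.

    displacements -- pixel displacements, one per detected vehicle
    x             -- critical displacement value (assumed 5.0 px here)
    proportion    -- fraction of slow vehicles that implies congestion (70%)
    stop_count    -- stationary vehicles required to raise an accident
                     (an assumed value; the text says "a set number")
    """
    if not displacements:
        return "clear"
    slow = sum(1 for d in displacements if d < x)          # slowly moving
    stationary = sum(1 for d in displacements if d == 0)   # fully stopped
    congested = slow / len(displacements) > proportion
    if congested and stationary >= stop_count:
        return "accident"   # stopped vehicles inside a congested scene
    return "congested" if congested else "clear"
```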
further, the congestion mileage is identified, specifically, the distance through the congestion cameras is measured and calculated, and the congestion length is calculated through the distance between the cameras, specifically, the steps are as follows:
21.1 Aiming at the camera which detects congestion, acquiring coordinate information of the camera;
21.2 Traversing cameras adjacent to the congestion camera to obtain all continuous congestion cameras;
21.3 For continuous congestion cameras, calculating the distance between every two congestion cameras, wherein the sum of the distances is the congestion mileage.
The congestion camera is the camera which detects congestion. According to the setting rule of the coordinates of the cameras, the distance between the cameras is calculated according to the coordinate information, and the cameras all have registered coordinate information in an expressway management system, for example, the number of one camera is G35 TV99 K207+340, wherein K207+340 is the represented position information.
And calculating the distance between the two cameras through the position information of the two cameras, wherein if the number of the other camera is G35T99K209+342, the distance between the two cameras is (209-207) 1K+ (342-340), and the distance is 2002 meters.
Optionally, the camera with detected congestion is marked as BUSYCAM, and the adjacent camera is traversed forwards and backwards, if the adjacent camera is also congested, the traversing is continued until the next camera does not detect congestion; and marking the group of traversal as the congested cameras, recording the positions of all the cameras in the group, and acquiring the maximum value and the minimum value of the positions, wherein the difference value of the maximum value and the minimum value is the distance between the congested cameras, namely the congested length.
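The kilometer-stake arithmetic above can be sketched as follows (the camera IDs follow the K&lt;km&gt;+&lt;m&gt; pattern of the example; the parsing details and helper names are assumptions):

```python
import re

def stake_to_meters(camera_id):
    """Parse a kilometer-stake marker like 'K207+340' out of a camera
    number (e.g. 'G35 TV99 K207+340') and convert it to meters."""
    m = re.search(r"K(\d+)\+(\d+)", camera_id)
    km, extra = int(m.group(1)), int(m.group(2))
    return km * 1000 + extra

def congestion_length(congested_camera_ids):
    """Congestion mileage over a run of consecutive congested cameras,
    computed as max(position) - min(position) per the optional method."""
    positions = [stake_to_meters(c) for c in congested_camera_ids]
    return max(positions) - min(positions)
```

With the two example cameras, K207+340 and K209+342 are 207340 m and 209342 m along the route, giving a congestion length of 2002 meters.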
Conventional AI congestion detection can only determine that congestion exists somewhere in the picture; it cannot judge the congestion direction. Congestion on either side of the median strip is attributed to the road section where the camera is located, but which side of the median strip, i.e. which specific road, is congested cannot be determined. In addition, existing monitoring cameras are dome (PTZ) cameras that can rotate at any time, which increases the detection difficulty.
In step 3, the two-class congestion/accident-direction model classifies whether the left or the right road of the congested section is congested, and is specifically configured to execute the following procedure:
31) identify the median strip or center line using Faster-RCNN;
Faster-RCNN is a target detection algorithm that adds an RPN (region proposal network) candidate-box generation network on top of Fast-RCNN, greatly improving target detection speed.
32) segment the image along the identified median strip or center line to obtain two new images;
optionally, OpenCV may be used to segment the image along the median strip;
OpenCV (Open Source Computer Vision Library) is an open-source library of computer-vision API functions.
33) for each segmented image, compute the smooth-traffic confidence coefficient from the number of vehicles in it and judge whether the road corresponding to that image is congested;
34) recognize the vehicle head orientation in the image and determine the direction of the congested road.
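The segmentation step can be sketched minimally as follows (a sketch that reduces the detected median strip or center line to a single vertical column index; a real implementation would cut along the region box returned by Faster-RCNN, e.g. with OpenCV/NumPy slicing):

```python
def split_at_divider(image_rows, divider_x):
    """Split a frame into left/right sub-images at a vertical divider.

    image_rows -- the frame as a list of pixel rows
    divider_x  -- assumed column index of the median strip / center line
    Each sub-image is then fed to the per-side vehicle count and
    smooth-traffic confidence computation.
    """
    left = [row[:divider_x] for row in image_rows]
    right = [row[divider_x:] for row in image_rows]
    return left, right
```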
The calculation method of the unblocked self-confidence coefficient is the same as the calculation method in the step 1, and is not repeated here.
Optionally, whether the road is clear can be judged by setting thresholds for the smooth confidence coefficient and comparing its value against them, in the same way as before. In a specific example, if σ1 ≥ 0.9, the area covered by the image is clear; if the confidence coefficient of either side satisfies 0.5 < σ1 < 0.9, 20 consecutive frames are taken from the camera for further analysis and their average value σ̄1 is computed. If σ̄1 ≥ 0.9, that side is defined as clear; if σ̄1 < 0.9, it is defined as congested. If only the left-side confidence satisfies σ̄1 < 0.9, left-side congestion is judged; if only the right-side confidence satisfies σ̄1 < 0.9, right-side congestion is judged; and if both sides satisfy σ̄1 < 0.9, bidirectional congestion is judged.
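The two-stage thresholding on the smooth confidence coefficient can be sketched as follows. The 0.9 threshold, the 0.5 lower bound of the ambiguous band, and the 20-frame window follow the example in the text; the function names are assumptions:

```python
def classify_side(first_sigma, frame_sigmas, clear_thresh=0.9):
    """Decide clear/congested for one side of the median.

    first_sigma:  smooth confidence sigma1 of the initial frame
    frame_sigmas: sigma1 values of the follow-up frames (20 in the
                  example) sampled when the first value is ambiguous
    """
    if first_sigma >= clear_thresh:
        return "clear"
    # Ambiguous band: decide on the mean over the sampled frames.
    mean = sum(frame_sigmas) / len(frame_sigmas)
    return "clear" if mean >= clear_thresh else "congested"

def congestion_direction(left, right):
    """Combine the per-side results into a direction label."""
    if left == "congested" and right == "congested":
        return "bidirectional congestion"
    if left == "congested":
        return "left congested"
    if right == "congested":
        return "right congested"
    return "clear"
```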
The congestion accident direction classification model comprises a middle line or isolation belt identification module, a vehicle head identification module and an accident direction judgment module.
Detecting which specific one of the two monitored roads is congested by combining the median line or median strip with the vehicle head direction is a novel feature of this method.
The median line or median strip identification module is specifically configured to identify and extract the median strip or median line using the Faster-RCNN algorithm, and to identify whether the left or right side of the road is congested based on the identified median.
Taking the median strip as an example, the expressway median is identified from image features: the road surface is gray-black, the median strip is green, and the central guardrail is mostly silver or green, so the two sides and the middle are separated by obvious color boundaries. The Faster-RCNN algorithm can identify the median strip from these color changes.
The vehicle head identification module is configured to recognize the vehicle head direction from images of adjacent frames acquired by the camera using the Faster-RCNN algorithm, and thereby recognize whether vehicles are approaching or departing.
Optionally, the vehicle head direction may be judged as follows: in time order, if the contour of the same vehicle grows larger across two adjacent frames, the image shows the vehicle head (the vehicle is approaching); if the contour shrinks, it shows the vehicle tail (the vehicle is departing).
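The contour-growth rule can be sketched with bounding boxes standing in for contours — a simplification, since the patent speaks of vehicle contours in general:

```python
def box_area(box):
    """Area of an [x1, y1, x2, y2] bounding box."""
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def head_or_tail(prev_box, curr_box):
    """Judge head vs tail for the same tracked vehicle across two
    adjacent frames: a growing box means the camera sees the vehicle
    head (approaching); a shrinking one, the tail (departing)."""
    a_prev, a_curr = box_area(prev_box), box_area(curr_box)
    if a_curr > a_prev:
        return "head"
    if a_curr < a_prev:
        return "tail"
    return "unknown"
```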
The accident direction judgment module is configured to determine the congestion accident direction from the current camera orientation, whether the identified lane is congested on the left or the right, and the vehicle head direction.
As shown in fig. 3, if the left side of the median is congested and the vehicles are seen head-on, road 1 is congested; if the camera rotates 180°, the same situation is recognized as right-side congestion with vehicles seen from the tail, and road 1 is again judged congested.
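The rotation-invariance argument of fig. 3 can be captured in a small lookup. The road numbering (road 1 / road 2) follows the fig. 3 description; the function itself is an assumed sketch:

```python
def congested_road(side, view):
    """Map (congested side of the median, head/tail view) to a road id.

    side: "left" or "right" in the current camera image
    view: "head" for oncoming vehicles, "tail" for departing ones
    The pairs (left, head) and (right, tail) name the same physical
    road (road 1 in fig. 3), so the answer is unchanged when the dome
    camera rotates 180 degrees; the other two pairs map to road 2.
    """
    return 1 if (side, view) in {("left", "head"), ("right", "tail")} else 2
```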
Further, the technical scheme also acquires feedback from workers on the traffic accident or congestion data output by the system and corrects deviations in accident or congestion judgment. Specifically, a deviation-correction data set is constructed and the model is continuously retrained to improve the accuracy of accident or congestion identification. In the initial stage, humans participate alongside the machine and verify the machine's scoring results; the congestion and accident probabilities are adjusted manually; and the model is revised according to the adjusted scores, improving its accident prediction.
This scheme can classify normal driving, accidents, and congestion on the road; it can identify the congestion mileage and the congestion direction, recognize finer-grained road conditions, and, by adopting a polling mechanism, achieve efficient detection and improve real-time performance.
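The congestion mileage is obtained by summing the spacings of consecutive congested cameras. A sketch under the assumption that each camera's coordinates are (latitude, longitude) pairs, using the haversine great-circle distance (the patent does not prescribe a distance formula):

```python
import math

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def congestion_mileage(cameras):
    """cameras: (lat, lon) of consecutive congested cameras, in road
    order; the mileage is the sum of pairwise camera spacings."""
    return sum(haversine_m(a, b) for a, b in zip(cameras, cameras[1:]))
```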
Example 2
Based on embodiment 1, the embodiment provides a multi-path traffic accident and congestion detection system based on deep learning, which includes:
the camera polling control module: the system is configured to access the cameras in sequence by adopting a polling mechanism and congestion pre-judgment, and acquire vehicle images acquired by the cameras;
and a congestion condition identification module: the system is configured to be used for carrying out vehicle detection and identification by adopting a trained traffic event three-classification model according to the acquired vehicle image to obtain traffic jam conditions;
a congestion accident direction identification module: configured to acquire a vehicle image of a congested road section, identify it with the congestion accident direction binary classification model to obtain the lane median line or median strip and the vehicle head direction, and determine the congestion accident direction;
and the traffic event three-classification model sequentially identifies vehicles in the image, the number of the vehicles in the image and the moving distance of the vehicles, and fuses the number of the vehicles and the moving distance of the vehicles to obtain traffic jam conditions.
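The fusion of the vehicle count with per-vehicle movement into the three traffic-event classes can be sketched as follows. The displacement threshold, proportion, and stopped-vehicle count stand in for the patent's "critical value x", "set proportion", and "set number"; the concrete values are assumptions:

```python
def classify_traffic(displacements, crit_disp=2.0, jam_ratio=0.6, stop_count=5):
    """Fuse vehicle count and per-vehicle movement into a traffic label.

    displacements: displacement of each tracked vehicle between two
                   adjacent frames (one entry per detected vehicle)
    crit_disp:     critical displacement x below which a vehicle counts
                   as barely moving
    jam_ratio:     proportion of slow vehicles that signals congestion
    stop_count:    number of fully stopped vehicles that signals accident
    Returns "normal", "congestion", or "accident".
    """
    if not displacements:
        return "normal"
    slow = sum(1 for d in displacements if d < crit_disp)
    stopped = sum(1 for d in displacements if d == 0)
    congested = slow / len(displacements) >= jam_ratio
    if congested and stopped >= stop_count:
        return "accident"
    return "congestion" if congested else "normal"
```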
The modules in this embodiment correspond one-to-one with the steps in embodiment 1, and their implementation process is the same, so it is not described again here.
Example 3
The present embodiment provides an electronic device comprising a memory, a processor, and computer instructions stored on the memory and run on the processor; when executed by the processor, the computer instructions perform the steps recited in the method of embodiment 1.
Example 4
The present embodiment provides a computer readable storage medium storing computer instructions that, when executed by a processor, perform the steps of the method of embodiment 1.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the invention, but rather, it is intended to cover all modifications or variations within the scope of the invention as defined by the claims of the present invention.

Claims (10)

1. The multi-path traffic accident and congestion detection method based on deep learning is characterized by comprising the following steps:
sequentially accessing the cameras by adopting a polling mechanism and congestion pre-judgment, and acquiring vehicle images acquired by the cameras;
according to the acquired vehicle image, a trained traffic event three-classification model is adopted to carry out vehicle detection and identification, and a traffic jam condition is obtained;
acquiring a vehicle image of a traffic jam road section, identifying by adopting a second classification model of the jam accident direction, obtaining a lane middle line or a separation zone and a vehicle head direction by identification, and determining the jam accident direction;
and the traffic event three-classification model sequentially identifies vehicles in the image, the number of the vehicles in the image and the moving distance of the vehicles, and fuses the number of the vehicles and the moving distance of the vehicles to obtain traffic jam conditions.
2. The deep learning-based multi-path traffic accident and congestion detection method as set forth in claim 1, wherein: the method for sequentially accessing the cameras by adopting a polling mechanism uses a dichotomy (a binary clear/congested decision), predicting the possibility of road congestion in each frame of image; when the road in the shooting area of the current camera is predicted to be clear, switching to the next camera for image acquisition; otherwise, when the road in the shooting area of the current camera is predicted to be congested, continuing to acquire the vehicle image data of the current camera.
3. The deep learning-based multi-path traffic accident and congestion detection method as set forth in claim 2, wherein: predicting the possibility of smoothness of a road in each frame of image, wherein the prediction method is to identify the number of vehicles in a picture, set the numerical value of a smoothness confidence coefficient according to the number of the vehicles, and judge whether the road is smooth or not according to the numerical value of the smoothness confidence coefficient;
setting a first threshold and a second threshold of a smooth confidence coefficient, wherein the first threshold is larger than the second threshold;
if the unblocked confidence coefficient is not less than the set first threshold, the road of the shooting area of the camera is unblocked, and the next camera picture is directly switched for analysis;
the unblocked self-confidence coefficient is between a first threshold value and a second threshold value, the image data of a first set frame number of the current camera is continuously obtained, the average value of the unblocked self-confidence coefficient is calculated, whether the current road is congested is determined according to the average value of the unblocked self-confidence coefficient, and the current road is switched to the next camera for image acquisition;
the unblocked self-confidence coefficient is smaller than a second threshold value, the image data of a second set frame number of the current camera is continuously obtained, the average value of the unblocked self-confidence coefficient is calculated, whether the current road is congested is determined according to the average value of the unblocked self-confidence coefficient, and the current road is switched to the next camera for image acquisition;
wherein the first set frame number is greater than the second set frame number.
4. The deep learning-based multi-path traffic accident and congestion detection method as set forth in claim 1, wherein: the traffic event three-classification model comprises a vehicle identification network, a vehicle counting module, a vehicle movement identification module and a fusion output module which are connected in sequence;
the vehicle identification network is used for identifying vehicles in the vehicle image and selecting vehicle targets with target boxes;
the vehicle counting module is used for counting the vehicle target boxes selected by the vehicle identification network;
the vehicle movement identification module is used for identifying the movement distance of the same vehicle in adjacent frame images according to the vehicle target boxes selected by the vehicle identification network;
and the fusion output module is used for judging the congestion condition according to the vehicle counting result and the vehicle movement displacement, judging whether an accident occurs or not, or/and identifying the congestion mileage.
5. The multi-path traffic accident and congestion detection method based on deep learning as claimed in claim 4, wherein the fusion output module implements the identification of congestion and accidents, comprising the steps of:
setting a critical value of the moving distance of the vehicle;
solving the displacement value between two adjacent frames of images for each vehicle, and judging congestion if the proportion of vehicles whose displacement value is smaller than the critical value x exceeds a set proportion;
extracting frame images in time order and detecting the vehicle target boxes at the vehicle positions; if a set number of vehicles are stopped and not moving, judging that an accident is possible; if the set number of vehicles are stopped and the road is identified as congested, determining that an accident has occurred.
6. The multi-path traffic accident and congestion detection method based on deep learning as claimed in claim 4, wherein the congestion mileage is identified by the distance of the congestion camera, specifically comprising the steps of:
aiming at the camera which detects congestion, acquiring coordinate information of the camera;
traversing cameras adjacent to the congestion camera to obtain all continuous congestion cameras;
and aiming at continuous congestion cameras, calculating the distance between every two congestion cameras, wherein the sum of the distance is the congestion mileage.
7. The deep learning-based multi-path traffic accident and congestion detection method as set forth in claim 1, wherein:
the congestion accident direction classification model comprises a middle line or isolation belt identification module, a locomotive identification module and an accident direction judgment module;
intermediate line or isolation belt identification module: is configured to identify and extract the intermediate isolation strip or intermediate line by using the Faster-RCNN algorithm, and to judge whether the road is left-side congested or right-side congested according to the identification result;
the vehicle head identification module: is configured to identify the vehicle head direction by using the Faster-RCNN algorithm according to images of adjacent frames acquired by the camera, so as to identify whether vehicles are approaching or departing;
in time order, if the contour of the same vehicle in two adjacent frames of images grows larger, the image shows the vehicle head, and if the contour grows smaller, it shows the vehicle tail;
the accident direction judging module is configured to recognize whether the obtained lane is left-side jammed or right-side jammed according to the direction of the current camera, and determine the jam accident direction based on the head direction.
8. Multipath section traffic accident and congestion detecting system based on deep learning, its characterized in that includes:
the camera polling control module: the system is configured to access the cameras in sequence by adopting a polling mechanism and congestion pre-judgment, and acquire vehicle images acquired by the cameras;
and a congestion condition identification module: the system is configured to be used for carrying out vehicle detection and identification by adopting a trained traffic event three-classification model according to the acquired vehicle image to obtain traffic jam conditions;
a congestion accident direction identification module: the vehicle image acquisition module is configured to acquire a vehicle image of a traffic congestion road section, identify by adopting a congestion accident direction two-classification model, obtain a lane middle line or a separation zone and a vehicle head direction by identification, and determine the congestion accident direction;
and the traffic event three-classification model sequentially identifies vehicles in the image, the number of the vehicles in the image and the moving distance of the vehicles, and fuses the number of the vehicles and the moving distance of the vehicles to obtain traffic jam conditions.
9. An electronic device comprising a memory and a processor and computer instructions stored on the memory and running on the processor, which when executed by the processor, perform the steps of the method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the method of any of claims 1-7.
CN202310429538.3A 2023-04-21 2023-04-21 Multi-path traffic accident and congestion detection method and system based on deep learning Active CN116153086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310429538.3A CN116153086B (en) 2023-04-21 2023-04-21 Multi-path traffic accident and congestion detection method and system based on deep learning


Publications (2)

Publication Number Publication Date
CN116153086A true CN116153086A (en) 2023-05-23
CN116153086B CN116153086B (en) 2023-07-18

Family

ID=86354664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310429538.3A Active CN116153086B (en) 2023-04-21 2023-04-21 Multi-path traffic accident and congestion detection method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN116153086B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117275236A (en) * 2023-10-11 2023-12-22 宁波宁工交通工程设计咨询有限公司 Traffic jam management method and system based on multi-target recognition

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7458547B1 (en) 2023-11-07 2024-03-29 株式会社インターネットイニシアティブ Information processing device, system and method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120092187A1 (en) * 2010-10-13 2012-04-19 Harman Becker Automotive Systems Gmbh Traffic event monitoring
CN106758615A (en) * 2016-11-25 2017-05-31 上海市城市建设设计研究总院 Improve the method to set up of high-density development section road network traffic efficiency
CN107742418A (en) * 2017-09-29 2018-02-27 东南大学 A kind of urban expressway traffic congestion status and stifled point position automatic identifying method
JP2018081504A (en) * 2016-11-16 2018-05-24 富士通株式会社 Traffic control device, traffic control method, and traffic control program
CN110688922A (en) * 2019-09-18 2020-01-14 苏州奥易克斯汽车电子有限公司 Deep learning-based traffic jam detection system and detection method
CN111899514A (en) * 2020-08-19 2020-11-06 陇东学院 Artificial intelligence's detection system that blocks up
CN112907981A (en) * 2021-03-25 2021-06-04 东南大学 Shunting device for shunting traffic jam vehicles at intersection and control method thereof
CN113936458A (en) * 2021-10-12 2022-01-14 中国联合网络通信集团有限公司 Method, device, equipment and medium for judging congestion of expressway



Also Published As

Publication number Publication date
CN116153086B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN116153086B (en) Multi-path traffic accident and congestion detection method and system based on deep learning
CN103049787B (en) A kind of demographic method based on head shoulder feature and system
CN105844234B (en) Method and equipment for counting people based on head and shoulder detection
CN104751634B (en) The integrated application method of freeway tunnel driving image acquisition information
CN101877058B (en) People flow rate statistical method and system
CN105844229B (en) A kind of calculation method and its system of passenger&#39;s crowding
CN103246896B (en) A kind of real-time detection and tracking method of robustness vehicle
CN113326719A (en) Method, equipment and system for target tracking
WO2011097795A1 (en) Method and system for population flow statistics
CN112766038B (en) Vehicle tracking method based on image recognition
CN116434159A (en) Traffic flow statistics method based on improved YOLO V7 and Deep-Sort
CN117437599B (en) Pedestrian abnormal event detection method and system for monitoring scene
CN110633678A (en) Rapid and efficient traffic flow calculation method based on video images
CN111476160A (en) Loss function optimization method, model training method, target detection method, and medium
CN111695545A (en) Single-lane reverse driving detection method based on multi-target tracking
CN106023650A (en) Traffic intersection video and computer parallel processing-based real-time pedestrian early-warning method
CN114973207A (en) Road sign identification method based on target detection
CN110674887A (en) End-to-end road congestion detection algorithm based on video classification
CN115565157A (en) Multi-camera multi-target vehicle tracking method and system
CN115909223A (en) Method and system for matching WIM system information with monitoring video data
CN110163125B (en) Real-time video identification method based on track prediction and size decision
CN112329671B (en) Pedestrian running behavior detection method based on deep learning and related components
CN116311166A (en) Traffic obstacle recognition method and device and electronic equipment
CN112509338A (en) Method for detecting traffic jam event through silent low-point video monitoring
CN115035543B (en) Big data-based movement track prediction system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant