CN113158835A - Traffic accident intelligent detection method based on deep learning - Google Patents
- Publication number
- CN113158835A (application number CN202110348720.7A)
- Authority
- CN
- China
- Prior art keywords
- traffic accident
- model
- data set
- video
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/54 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects, of traffic, e.g. cars on the road, trains or boats
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/23213 — Non-hierarchical clustering techniques using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
- G06F18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N3/045 — Neural network architectures; Combinations of networks
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/048 — Activation functions
- G06N3/08 — Neural network learning methods
Abstract
The invention relates to an intelligent traffic accident detection method based on deep learning, comprising the following steps: S1, collecting traffic accident monitoring videos as a video test set and a picture training set; S2, building a ResNet-50 model; S3, training the ResNet-50 model with the ResNet-50 picture data set; S4, constructing an improved YOLOv3 model; S5, training and testing the improved YOLOv3 model with the YOLOv3 picture data set; S6, performing an overall model test with the video test data set to detect traffic accidents; S7, the overall model passes the test experiments. The method is aimed mainly at traffic road video monitoring scenarios; it combines ResNet-50 and YOLOv3 and optimizes them with the spatial pyramid pooling network (SPP-net) and the K-Means clustering algorithm, thereby improving detection precision.
Description
Technical Field
The invention relates to the technical field of deep learning image and video analysis, in particular to an intelligent traffic accident detection method based on deep learning.
Background
With the continuous development of deep learning, research in the field of image recognition has become ever more intensive. Applying deep-learning-based image and video analysis to a traffic accident video monitoring system allows traffic accidents to be detected automatically, without manual effort: the relevant personnel can be informed of an accident in time, rescue can reach the victims as soon as possible, secondary accidents can be avoided, and casualties and financial losses are greatly reduced or even avoided.
Many methods are currently applied to traffic accident detection: extracting foreground and background from video footage with a Gaussian Mixture Model (GMM) to detect vehicles, tracking the detected vehicles with the mean-shift algorithm, and analysing changes in the position, acceleration, and direction parameters of the moving vehicles as the basis for an accident decision; other methods are built on support vector machines, random forests, trajectory clustering, or hardware sensors installed on the vehicles themselves. However, traffic accidents take many forms with complex characteristics, and many external factors affect their judgment, such as weather changes, road environments, camera angles, and equipment condition. Existing detection algorithms, especially those based on traditional target detection, therefore have several defects: low detection precision under changing weather and road environments; some algorithms can only detect that an accident occurred, not which accident scenario it belongs to; and long running times with large memory consumption.
Disclosure of Invention
In order to solve these technical problems in the prior art, the invention provides an intelligent traffic accident detection method based on deep learning. It detects traffic accidents with a ResNet-50 network whose strong representation capability reduces the influence of weather and road environment changes on the algorithm; it detects specific traffic accident scenarios with the fast YOLOv3 algorithm; and it optimizes the network structure to address long computation times and low detection granularity, thereby improving detection precision.
The invention is realized by adopting the following technical scheme: a traffic accident intelligent detection method based on deep learning mainly comprises the following steps:
s1, collecting traffic accident monitoring videos as a video test set and a picture training set, and processing a ResNet-50 picture data set and a YOLOv3 picture data set;
s2, building a ResNet-50 deep convolution neural network model;
s3, training a ResNet-50 deep convolution neural network model by using a ResNet-50 picture data set;
s4, constructing an improved YOLOv3 model, optimized with the spatial pyramid pooling network (SPP-net) and the K-Means clustering algorithm;
s5, training the improved YOLOv3 model with the YOLOv3 picture data set to detect 5 traffic accident scenarios: vehicle damage, vehicle rollover, non-motor-vehicle rollover, casualties, and fire;
s6, carrying out overall model test by using the video test data set, detecting a traffic accident and acquiring specific accident scene information;
s7, the overall model passes the test experiments.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. the invention detects traffic accidents with a ResNet-50 network structure whose strong representation capability reduces the influence of weather and road environment changes on the algorithm; it uses the fast YOLOv3 algorithm to detect specific traffic accident scenarios; and it optimizes the network structure with the spatial pyramid pooling network SPP-net and the K-Means clustering algorithm, solving the problems of long computation time and low detection granularity and improving detection precision.
2. The method can inform relevant personnel of the occurrence of the traffic accident in time, can provide rescue service for accident victims as soon as possible, can avoid the occurrence of secondary accidents, and greatly reduces or even avoids casualties and financial loss.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of the ResNet-50 neural network architecture of the present invention;
FIG. 3 is a diagram of the neural network architecture of a modified version of YOLOv3 of the present invention;
FIG. 4 is a simulation diagram of the ResNet-50 model experiment of the present invention;
FIG. 5 is a diagram showing the results of detecting traffic accidents by the ResNet-50 algorithm of the present invention;
FIG. 6 is a display diagram showing the result of detecting a specific traffic accident by the improved YOLOv3 algorithm of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
As shown in fig. 1, the traffic accident intelligent detection method based on deep learning of the embodiment includes the following steps:
s1, collecting traffic accident monitoring videos as a video test set and a picture training set, processing the videos into frame sequences by using an OpenCV (open computer vision library), extracting traffic accident frame pictures and non-traffic accident frame pictures, dividing the traffic accident frame pictures into accidents and no-accidents, and corresponding to traffic accidents and non-traffic accidents; labeling category labels for the accidents pictures by using LabelImg, wherein the labels are classified into 5 types: damage, CarDump, TwoWheelDump, PersonDump, Fire, corresponding to vehicle Damage, rollover, non-motor vehicle rollover, casualties, Fire 5-class traffic accident scenarios.
In this embodiment, the specific process of collecting the traffic accident monitoring video is as follows:
s111, collecting 200 monitoring videos, wherein the video length is 0-3 minutes, and each video content comprises scenes before, during and after an accident;
s112, taking 20% of the monitoring videos as the video test input, converting the remaining 80% into frame sequences, and filtering out 667 images of the accidents class and 646 images of the no-accidents class to form the ResNet-50 picture data set for training and testing the ResNet-50 model;
s113, additionally collecting pictures of the 5 specific traffic accident scenarios, 4421 pictures in total, and labeling them to form the YOLOv3 picture data set for training and testing the improved YOLOv3 model.
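The preprocessing of S1 can be sketched in Python with OpenCV; the sampling stride, file paths, and function names below are illustrative assumptions rather than values fixed by the invention:

```python
import os

def frames_to_keep(n_frames, stride):
    """Indices of the frames to export when keeping every `stride`-th frame."""
    return list(range(0, n_frames, stride))

def video_to_frames(video_path, out_dir, stride=30):
    """Save every `stride`-th frame of `video_path` as a JPEG in `out_dir`;
    returns the number of frames written."""
    import cv2  # OpenCV; imported lazily so the sampling logic stays testable without it
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    written = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            written += 1
        idx += 1
    cap.release()
    return written
```

The extracted frames would then be sorted manually into the accidents / no-accidents folders and annotated with LabelImg as described above.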
S2, as shown in FIG. 2, building a ResNet-50 deep convolution neural network model, which comprises the following specific processes:
s21, constructing a residual structure to solve the degradation problem of deep neural networks, where the residual structure generally takes one of the following two forms:
y = F(x, {W_i}) + x (1)
y = F(x, {W_i}) + W_s·x (2)
where y denotes the output feature of the residual structure, x denotes its input feature, F(x, {W_i}) denotes processing the input feature x by one or more convolution structures with weights W_i, and W_s denotes the weight matrix used to project the input feature on the shortcut; the residual structure of this embodiment includes 3 convolution structures, each composed of a convolutional layer, a batch normalization layer, and an activation function;
s22, building a ResNet-50 convolutional neural network based on the residual error structure and the convolutional structure, wherein the network structure is shown in FIG. 2;
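The residual mapping of equations (1) and (2) can be illustrated with a minimal NumPy sketch; `transform` stands in for the three-convolution structure F and `project` for the weight W_s, both hypothetical names used only for illustration:

```python
import numpy as np

def residual_block(x, transform, project=None):
    """Apply y = F(x) + x (Eq. 1), or y = F(x) + Ws·x (Eq. 2) when a
    projection `project` (the Ws matrix) is needed to match dimensions."""
    fx = transform(x)                              # F(x, {W_i})
    shortcut = x if project is None else project @ x  # identity or Ws·x
    return fx + shortcut
```

The shortcut addition is what lets gradients flow past the convolution stack, which is how ResNet-50 avoids the degradation problem mentioned above.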
s23, processing the forward propagation process and constructing the loss function, with cross entropy as the loss function; the Softmax activation function normalizes the output component corresponding to each category:
S_i = e^(y_i) / Σ_(j=1..n) e^(y_j) (3)
where S_i denotes the normalized prediction output component corresponding to category i, y_i denotes the original output component for category i, y_j denotes the original output component for category j, and there are n categories in total;
the cross entropy is then calculated as:
EP(S'_i, S_i) = -Σ_(i=1..n) S'_i · log(S_i) (4)
where EP(S'_i, S_i) denotes the cross entropy and S'_i denotes the normalized true output component corresponding to category i;
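The Softmax normalization and the cross-entropy loss above can be checked numerically with a short NumPy sketch (the function names are illustrative, not from the patent):

```python
import numpy as np

def softmax(y):
    """Normalize raw output components y_i into S_i (Eq. 3)."""
    e = np.exp(y - np.max(y))  # subtract the max for numerical stability
    return e / e.sum()

def cross_entropy(true_components, probs, eps=1e-12):
    """EP(S'_i, S_i) = -sum_i S'_i * log(S_i) (Eq. 4)."""
    return -np.sum(true_components * np.log(probs + eps))
```

For two equal logits the probabilities are 0.5 each, and the loss against a one-hot target equals log 2.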
s24, building an Adam optimizer, integrating the model, and creating the model run session.
S3, training a ResNet-50 deep convolution neural network model by using a ResNet-50 picture data set, wherein the specific process is as follows:
s31, compiling a training script based on python;
s32, configuring the training hyperparameters: learning rate lr, batch size batch_size, first-order moment decay factor beta1, and second-order moment decay factor beta2; executing the training script with the necessary parameters, the data set file path, and the number of iterations; training the ResNet-50 model, during which the weight parameters are fine-tuned automatically, finally yielding the trained ResNet-50 deep convolutional neural network model. The simulation precision curve is shown in FIG. 4; in this embodiment, the precision on both the training set and the test set reaches 98%, showing that the ResNet-50 deep convolutional neural network detects traffic accidents effectively;
s33, preparing 50 pictures each of traffic accidents and non-traffic accidents, and calculating the detection rate dr and the false detection rate fdr of the trained model; the two detection indices are:
dr = dn / atn (6)
fdr = fdn / tn (7)
where dn is the number of detected traffic accidents, atn is the total number of traffic accident pictures in the data set, fdn is the number of erroneous detection results, and tn is the total number of pictures in the data set;
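The two indices can be computed from boolean prediction/label lists; the helper below is an illustrative sketch, not code from the invention:

```python
def dr_fdr(predictions, labels):
    """Compute detection rate dr = dn/atn and false detection rate fdr = fdn/tn
    from boolean lists where True means 'traffic accident'."""
    tn = len(labels)                  # total number of pictures in the data set
    atn = sum(labels)                 # total number of accident pictures
    dn = sum(p and l for p, l in zip(predictions, labels))   # correctly detected accidents
    fdn = sum(p != l for p, l in zip(predictions, labels))   # erroneous detection results
    return dn / atn, fdn / tn
```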
The results are further compared with the paper "Computer Vision-based Accident Detection in Traffic Surveillance", which applies the Mask R-CNN algorithm to detect vehicle targets and then tracks them in the surveillance footage based on centroid, speed, and trajectory, using the behavior of vehicles after overlapping with other vehicles as the basis for estimating the probability that a traffic accident has occurred. Table 1 compares the detection rate dr and false detection rate fdr of the ResNet-50 deep convolutional neural network model of the invention with those of the comparison paper; the comparison figures are taken from that paper:
TABLE 1 dr, fdr index comparison results
From the index results in S32 and S33, it can be concluded that the ResNet-50 deep neural network model is well suited to detecting traffic accidents.
S4, constructing the improved YOLOv3 model: three spatial pyramid pooling network (SPP-net) structures are added to the original Darknet53 network structure to solve the problem of non-fixed input sizes, the YOLOv3 data set is clustered with the K-Means clustering algorithm to obtain the prior box initial values, and these values are configured in the improved YOLOv3 network structure configuration file. The specific process is as follows:
s41, calculating the prior box initial values for the improved YOLOv3 picture data set with the K-Means algorithm, first defining the iou calculation:
intse = (y_i2 - y_i1) × (x_i2 - x_i1) (8)
union = box_pre + box_rela - intse (9)
where iou = intse / union denotes the ratio of the intersection area to the union area of the predicted box and the real box; intse denotes the intersection area and union the union area of the two boxes; x_i1, y_i1 denote the maxima of the top-left corner coordinates of the predicted box and the real box of category i; x_i2, y_i2 denote the minima of their bottom-right corner coordinates; and box_pre and box_rela denote the areas of the predicted box and the real box, respectively;
the objective function f(iou) for K-Means is defined as follows:
f(iou)=1-iou (10)
the data set is then clustered accordingly, yielding 9 pairs of prior box initial values, which are sorted from small to large and written into the improved YOLOv3 network structure configuration file;
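The prior box clustering of S41, with distance f(iou) = 1 - iou over (width, height) pairs, can be sketched as follows; the corner-aligned IoU convention, fixed seed, and function names are assumptions for illustration:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, assuming all boxes share a common top-left corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None]
             + (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (w, h) boxes with the distance f(iou) = 1 - iou (Eq. 10)."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = (1.0 - iou_wh(boxes, anchors)).argmin(axis=1)  # nearest anchor per box
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else anchors[j] for j in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sort small to large
```

With k = 9 on the real label set, the sorted (w, h) pairs would be written into the network configuration file as described above.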
s42, building a YOLOv3 network model, and adding SPP-net in a YOLO prediction network, wherein the final network structure is shown in FIG. 3.
S43, processing a forward propagation process, and constructing a cross entropy loss function;
s44, building an SGD optimizer, integrating the models and creating model operation sessions.
S5, training an improved YOLOv3 model by using the prepared YOLOv3 picture data set in the step S1, wherein the specific process is as follows:
s51, splitting the YOLOv3 data set into 80% training pictures and 20% test/verification pictures, generating text files with the corresponding picture paths, and creating a class name file containing the 5 class names Damage, CarDump, TwoWheelDump, PersonDump, and Fire, corresponding to the 5 specific scenarios of vehicle damage, vehicle rollover, non-motor-vehicle rollover, casualties, and fire;
s52, establishing the data set configuration text file in the YOLOv3 model project, with configuration items comprising: the total number of detection categories, the training data set path, the verification data set path, and the class label name file;
s53, converting the YOLOv3 data set's VOC-format labels into YOLO-format labels;
s54, setting the hyperparameter initial values, configuring the improved YOLOv3 network configuration file, and configuring the prior box initial values obtained with the K-Means algorithm;
s55, pre-training the improved YOLOv3 network on the ImageNet data set to obtain a pre-training weight file;
s56, executing the training script with the necessary parameters (the data set configuration file path, the improved YOLOv3 network configuration file path, the number of iterations, and the pre-training weight file path) and starting to train the improved YOLOv3 model.
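The data set split and class-name file of S51 can be produced with a small helper; the 80/20 ratio and class names come from the text above, while the function names and the fixed seed are assumptions:

```python
import random

# The five labels from the patent's annotation step
CLASSES = ["Damage", "CarDump", "TwoWheelDump", "PersonDump", "Fire"]

def split_dataset(image_paths, train_ratio=0.8, seed=0):
    """Shuffle deterministically and split into training and test/verification lists."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

def write_names_file(path):
    """Write the class-name file, one label per line, as YOLO tooling expects."""
    with open(path, "w") as f:
        f.write("\n".join(CLASSES) + "\n")
```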
S6, carrying out overall model test by using the video test data set prepared in the step S1, detecting a traffic accident and acquiring specific accident scene information, wherein the specific process is as follows:
s61, adopting 20% of the monitoring video reserved in the step S1 as a test video;
s62, taking the test video as the input of the ResNet-50 deep convolutional neural network model and executing the test script, passing the required parameters: the input video path, the trained ResNet-50 weight file path, and the class names;
s63, obtaining the video output classified by the ResNet-50 deep convolutional neural network model, with each frame of the video marked with its category and probability; the result is shown in FIG. 5.
S64, taking the video output of step S63 as the input of the improved YOLOv3 model and executing the test script, passing the necessary parameters: the input video path, the trained improved YOLOv3 weight file path, the class names, and the improved YOLOv3 network configuration file;
s65, obtaining the video output detected by the improved YOLOv3 model, with the detected vehicle damage, vehicle rollover, non-motor-vehicle rollover, casualty, and fire scenes marked in the video with labels and bounding boxes. The detection result is shown in FIG. 6: the improved YOLOv3 model detects the 5 specific traffic accident scenarios well and offers finer granularity than algorithms that can only detect whether a traffic accident occurred.
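The two-stage test pipeline of S6 (ResNet-50 classification first, improved YOLOv3 scene detection only on frames flagged as accidents) can be sketched with hypothetical stand-in callables for the two trained models; the names and the threshold are illustrative assumptions:

```python
def cascade_detect(frames, classify_frame, detect_scenes, threshold=0.5):
    """Return (frame_index, scene_labels) pairs for frames classified as accidents.

    classify_frame: stand-in for the ResNet-50 stage, returns an accident probability.
    detect_scenes:  stand-in for the improved YOLOv3 stage, returns scene labels."""
    results = []
    for i, frame in enumerate(frames):
        if classify_frame(frame) >= threshold:          # stage 1: accident / no accident
            results.append((i, detect_scenes(frame)))   # stage 2: specific scenes
    return results
```

Running the cheaper classifier first and the detector only on flagged frames is what keeps the overall running time low while still recovering the specific accident scenario.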
S7, the overall model passes the test experiments, realizing intelligent traffic accident detection and detection of the specific traffic accident type.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.
Claims (8)
1. The traffic accident intelligent detection method based on deep learning is characterized by comprising the following steps:
S1, collecting traffic accident monitoring videos as a video test set and a picture training set, and processing a ResNet-50 picture data set and a YOLOv3 picture data set;
s2, building a ResNet-50 deep convolution neural network model;
s3, training a ResNet-50 deep convolution neural network model by using a ResNet-50 picture data set;
s4, constructing an improved YOLOv3 model, optimized with the spatial pyramid pooling network (SPP-net) and the K-Means clustering algorithm;
s5, training the improved YOLOv3 model with the YOLOv3 picture data set to detect 5 traffic accident scenarios: vehicle damage, vehicle rollover, non-motor-vehicle rollover, casualties, and fire;
s6, carrying out overall model test by using the video test data set, detecting a traffic accident and acquiring specific accident scene information;
s7, the overall model passes the test experiments.
2. The intelligent deep-learning traffic accident detection method according to claim 1, wherein the processing procedure of step S1 is as follows:
s101, processing the videos into frame sequences with OpenCV (open source computer vision library), extracting traffic accident frames and non-traffic accident frames, and dividing them into the classes accidents and no-accidents, corresponding to the traffic accident class and the non-traffic accident class;
s102, labeling the accidents pictures with LabelImg; the labels are divided into 5 categories, namely Damage, CarDump, TwoWheelDump, PersonDump, and Fire, corresponding to the 5 traffic accident scenarios of vehicle damage, vehicle rollover, non-motor-vehicle rollover, casualties, and fire.
3. The intelligent deep-learning traffic accident detection method according to claim 1, wherein the collecting of the traffic accident monitoring videos in step S1 comprises the following steps:
s111, collecting a plurality of monitoring videos, wherein the video length is 0-3 minutes, and each video content comprises scenes before, during and after an accident;
s112, taking a number of the videos as the video test input, converting the remaining videos into frame sequences, and filtering out a number of accidents pictures and no-accidents pictures to form the ResNet-50 picture data set for training and testing the ResNet-50 model;
s113, adding specific traffic accident scene picture collection, and performing label processing to form a YOLOv3 picture data set for training and testing an improved YOLOv3 model.
4. The intelligent deep-learning traffic accident detection method according to claim 1, wherein the construction of the ResNet-50 deep convolutional neural network model in step S2 is as follows:
s21, constructing a residual structure, where the residual structure takes one of the following two forms:
y = F(x, {W_i}) + x (1)
y = F(x, {W_i}) + W_s·x (2)
where y denotes the output feature of the residual structure, x denotes its input feature, F(x, {W_i}) denotes processing the input feature x by one or more convolution structures with weights W_i, and W_s denotes the weight matrix used to project the input feature on the shortcut;
s22, building a ResNet-50 convolution neural network based on the residual error structure and the convolution structure;
s23, processing the forward propagation process and constructing the loss function, with cross entropy as the loss function; the Softmax activation function normalizes the output component corresponding to each category:
S_i = e^(y_i) / Σ_(j=1..n) e^(y_j) (3)
where S_i denotes the normalized prediction output component corresponding to category i, y_i denotes the original output component for category i, y_j denotes the original output component for category j, and there are n categories in total;
the cross entropy is then calculated as:
EP(S'_i, S_i) = -Σ_(i=1..n) S'_i · log(S_i) (4)
where EP(S'_i, S_i) denotes the cross entropy and S'_i denotes the normalized true output component corresponding to category i;
s24, building an Adam optimizer, integrating the model, and creating the model run session.
5. The intelligent deep-learning traffic accident detection method according to claim 1, wherein the specific process of step S3 is as follows:
s31, compiling a training script based on python;
s32, configuring the training hyperparameters: learning rate lr, batch size batch_size, first-order moment decay factor beta1, and second-order moment decay factor beta2; executing the training script with the necessary parameters, the data set file path, and the number of iterations; training the ResNet-50 model with automatic fine-tuning of the weight parameters to obtain the trained ResNet-50 deep convolutional neural network model;
s33, preparing a number of traffic accident and non-traffic accident pictures, and calculating the detection rate dr and the false detection rate fdr of the trained model; the two detection indices are:
dr = dn / atn (6)
fdr = fdn / tn (7)
where dn is the number of detected traffic accidents, atn is the total number of traffic accident pictures in the data set, fdn is the number of erroneous detection results, and tn is the total number of pictures in the data set.
6. The intelligent deep-learning traffic accident detection method according to claim 1, wherein the specific process of step S4 is as follows:
s41, calculating the prior box initial values for the improved YOLOv3 picture data set with the K-Means algorithm, defining the iou calculation:
intse = (y_i2 - y_i1) × (x_i2 - x_i1) (8)
union = box_pre + box_rela - intse (9)
where iou = intse / union denotes the ratio of the intersection area to the union area of the predicted box and the real box; intse denotes the intersection area and union the union area of the two boxes; x_i1, y_i1 denote the maxima of the top-left corner coordinates of the predicted box and the real box of category i; x_i2, y_i2 denote the minima of their bottom-right corner coordinates; and box_pre and box_rela denote the areas of the predicted box and the real box, respectively;
the objective function f(iou) of K-Means is defined as follows:
f(iou) = 1 - iou (10)
the data set is then clustered accordingly, yielding several pairs of prior box initial values, which are sorted from small to large and written into the improved YOLOv3 network structure configuration file;
s42, building a YOLOv3 network model, and adding SPP-net in a YOLO prediction network;
s43, processing a forward propagation process, and constructing a cross entropy loss function;
s44, building an SGD optimizer, integrating the models and creating model operation sessions.
7. The intelligent deep-learning traffic accident detection method according to claim 1, wherein the specific process of step S5 is as follows:
s51, splitting the YOLOv3 data set into 80% training pictures and 20% test/verification pictures, generating text files with the corresponding picture paths, and creating a class name file containing the 5 class names Damage, CarDump, TwoWheelDump, PersonDump, and Fire, corresponding to the 5 specific scenarios of vehicle damage, vehicle rollover, non-motor-vehicle rollover, casualties, and fire;
s52, establishing the data set configuration text file in the YOLOv3 model project, with configuration items comprising the total number of detection categories, the training data set path, the verification data set path, and the class label name file;
s53, converting the YOLOv3 data set's VOC-format labels into YOLO-format labels;
s54, setting the hyperparameter initial values, configuring the improved YOLOv3 network configuration file, and configuring the prior box initial values obtained with the K-Means algorithm;
s55, pre-training the improved YOLOv3 network on the ImageNet data set to obtain a pre-training weight file;
s56, executing the training script with the necessary parameters (the data set configuration file path, the improved YOLOv3 network configuration file path, the number of iterations, and the pre-training weight file path) and starting to train the improved YOLOv3 model.
8. The deep-learning-based intelligent traffic accident detection method according to claim 1, wherein the specific process of step S6 is as follows:
S61, using the surveillance video reserved in step S1 as the test video;
S62, taking the test video as the input of the ResNet-50 deep convolutional neural network model and executing the test script, passing the required parameters: the input video path, the trained ResNet-50 weight file path and the class names;
S63, obtaining the video output classified by the ResNet-50 deep convolutional neural network model, with each frame of the video marked with its corresponding class and probability;
S64, taking the video output of step S63 as the input of the improved YOLOv3 model and executing the test script, passing the necessary parameters: the input video path, the trained improved YOLOv3 weight file path, the class names and the improved YOLOv3 network configuration file;
S65, obtaining the video output detected by the improved YOLOv3 model, in which the detected instances of the 5 traffic accident scenes, namely vehicle damage, vehicle rollover, non-motor-vehicle rollover, casualties and fire, are marked with labels and bounding boxes.
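The cascade in S61–S65 amounts to: classify every frame with the ResNet-50 stage, then hand accident frames to the improved YOLOv3 detector for scene-level boxes. A structural sketch with stub model functions; a real run would load the trained weight and configuration files named in the claims, and nothing below is the patent's actual code:

```python
SCENE_NAMES = ["Damage", "CarDump", "TwoWheelDump", "PersonDump", "Fire"]

def classify_frame(frame):
    """Stand-in for the ResNet-50 stage: returns (class name, probability)."""
    return ("Accident", 0.97) if frame.get("crash") else ("Normal", 0.99)

def detect_scenes(frame):
    """Stand-in for the improved YOLOv3 stage: (scene, confidence, box)."""
    return [("CarDump", 0.88, (120, 40, 310, 220))] if frame.get("crash") else []

def annotate_video(frames):
    out = []
    for frame in frames:
        label, prob = classify_frame(frame)                           # S62-S63
        boxes = detect_scenes(frame) if label == "Accident" else []   # S64-S65
        out.append({"label": label, "prob": prob, "boxes": boxes})
    return out

# two dummy frames: one normal, one containing a crash
result = annotate_video([{"crash": False}, {"crash": True}])
```

Running the detector only on frames the classifier flags keeps the per-frame cost low on normal traffic, which is the practical point of the two-stage design.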
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110348720.7A CN113158835A (en) | 2021-03-31 | 2021-03-31 | Traffic accident intelligent detection method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113158835A true CN113158835A (en) | 2021-07-23 |
Family
ID=76885772
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110348720.7A Pending CN113158835A (en) | 2021-03-31 | 2021-03-31 | Traffic accident intelligent detection method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113158835A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114202733A (en) * | 2022-02-18 | 2022-03-18 | 青岛海信网络科技股份有限公司 | Video-based traffic fault detection method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108345894A (en) * | 2017-01-22 | 2018-07-31 | 北京同方软件股份有限公司 | Traffic incident detection method based on deep learning and an entropy model |
CN108629963A (en) * | 2017-03-24 | 2018-10-09 | 纵目科技(上海)股份有限公司 | Traffic accident reporting method and system based on a convolutional neural network, and vehicle-mounted terminal |
CN109298993A (en) * | 2017-07-21 | 2019-02-01 | 深圳市中兴微电子技术有限公司 | Fault detection method, apparatus and computer-readable storage medium |
CN110033011A (en) * | 2018-12-14 | 2019-07-19 | 阿里巴巴集团控股有限公司 | Traffic accident handling method and device, and electronic equipment |
CN110175988A (en) * | 2019-04-25 | 2019-08-27 | 南京邮电大学 | Fabric defect detection method based on deep learning |
CN112084928A (en) * | 2020-09-04 | 2020-12-15 | 东南大学 | Road traffic accident detection method based on visual attention mechanism and ConvLSTM network |
WO2021005590A1 (en) * | 2019-07-05 | 2021-01-14 | Valerann Ltd. | Traffic event and road condition identification and classification |
CN112509315A (en) * | 2020-11-04 | 2021-03-16 | 杭州远眺科技有限公司 | Traffic accident detection method based on video analysis |
2021-03-31: Application CN202110348720.7A filed, published as CN113158835A; status: Pending
Non-Patent Citations (3)
Title |
---|
CORE: "Application of the K-Means clustering algorithm in YOLOv3", HTTPS://WWW.CNBLOGS.COM/SDU20112013/P/10937717.HTML *
PENGYI ZHANG et al.: "SlimYOLOv3: Narrower, Faster and Better for Real-Time UAV Applications", 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOP *
HU HAIGEN et al.: "Melanoma classification method based on an ensemble of deep convolutional residual networks", COMPUTER SCIENCE *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111353413B (en) | Low-missing-report-rate defect identification method for power transmission equipment | |
CN110188331A (en) | Model training method, conversational system evaluation method, device, equipment and storage medium | |
CN103488993B (en) | Crowd abnormal behavior recognition method based on FAST | |
CN113486726A (en) | Rail transit obstacle detection method based on improved convolutional neural network | |
CN114802296A (en) | Vehicle track prediction method based on dynamic interaction graph convolution | |
CN111428558A (en) | Vehicle detection method based on an improved YOLOv3 method | |
CN114970321A (en) | Scene flow digital twinning method and system based on dynamic trajectory flow | |
CN114550223B (en) | Person interaction detection method and device and electronic equipment | |
CN113037783B (en) | Abnormal behavior detection method and system | |
Khosravi et al. | Crowd emotion prediction for human-vehicle interaction through modified transfer learning and fuzzy logic ranking | |
CN113902007A (en) | Model training method and device, image recognition method and device, equipment and medium | |
CN114202803A (en) | Multi-stage human body abnormal action detection method based on residual error network | |
CN117217368A (en) | Training method, device, equipment, medium and program product of prediction model | |
CN116824335A (en) | YOLOv5 improved algorithm-based fire disaster early warning method and system | |
CN113158835A (en) | Traffic accident intelligent detection method based on deep learning | |
Gorodnichev et al. | Research and Development of a System for Determining Abnormal Human Behavior by Video Image Based on Deepstream Technology | |
CN114529552A (en) | Remote sensing image building segmentation method based on geometric contour vertex prediction | |
CN117152815A (en) | Student activity accompanying data analysis method, device and equipment | |
CN117116048A (en) | Knowledge-driven traffic prediction method based on knowledge representation model and graph neural network | |
CN114494893B (en) | Remote sensing image feature extraction method based on semantic reuse context feature pyramid | |
CN116959099A (en) | Abnormal behavior identification method based on space-time diagram convolutional neural network | |
CN116665390A (en) | Fire detection system based on edge calculation and optimized YOLOv5 | |
CN110163081A (en) | Regional invasion real-time detection method, system and storage medium based on SSD | |
CN115527270A (en) | Method for identifying specific behaviors in intensive crowd environment | |
CN115019039A (en) | Example segmentation method and system combining self-supervision and global information enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210723 |