CN111462488A - Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model - Google Patents

Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model

Info

Publication number
CN111462488A
CN111462488A (application CN202010248715.4A)
Authority
CN
China
Prior art keywords
track
intersection
vehicle
data
risk assessment
Prior art date
Legal status
Granted
Application number
CN202010248715.4A
Other languages
Chinese (zh)
Other versions
CN111462488B (en)
Inventor
Chen Yangzhou
Lu Jiacheng
Xu Tian
Yin Zhuo
Wang Zuo
Deng Hanyue
Current Assignee
Beijing University of Technology
CCCC First Highway Consultants Co Ltd
Original Assignee
Beijing University of Technology
CCCC First Highway Consultants Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology and CCCC First Highway Consultants Co Ltd
Priority to CN202010248715.4A
Publication of CN111462488A
Application granted
Publication of CN111462488B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Chemical & Material Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intersection safety risk assessment method based on a deep convolutional neural network and an intersection behavior feature model, and belongs to the field of intelligent traffic. In the method, a high-point (overhead) camera acquires image information, a deep convolutional neural network identifies vehicles, the vehicle trajectories are analyzed using the behavior characteristics of vehicles at the intersection, the collision probability between vehicles is calculated, and finally the safety risk of the intersection is evaluated. The method is suitable for online vehicle safety early warning in an intersection environment as well as for real-time detection and early warning of potential traffic safety hazards; it can improve the safety level of intersection operation and the accuracy of traffic accident prediction.

Description

Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model
Technical Field
The invention relates to the field of intelligent traffic, in particular to an intersection safety risk assessment method.
Background
With China's rapid economic development and rising living standards, private car ownership has grown quickly. Increased car ownership not only worsens traffic congestion but also increases the number of traffic accidents, so the safety risk at intersections keeps rising.
An intersection safety risk assessment method can improve road management efficiency to a certain extent, allowing managers and drivers to obtain the traffic flow, operating state and risk level of vehicles in time and thereby improving vehicle operating safety. Existing intersection safety risk assessment methods mainly include the following:
The road intersection safety state assessment method of patent 201810996078.1 is mainly based on static parameters of the intersection and cannot dynamically evaluate the current risk state of the intersection.
In patent 201810063488.0, trajectory extraction is mainly based on manual calibration; a large amount of manpower is required to obtain the trajectories, and the risk level of the intersection cannot be reflected in real time.
Patent 201810472867.5 discloses a method and system for predicting vehicle collision risk at an intersection based on vehicle-road coordination. It requires on-board devices to be installed and predicts vehicle trajectories through these devices; without the on-board devices the risk cannot be predicted, and existing intersections cannot be evaluated in real time.
The existing main detection means are often difficult to implement and cannot automatically acquire the current risk state of the intersection, so managers and drivers cannot obtain risk information of the intersection in time, such as whether vehicles are driving dangerously, whether an accident has occurred, and the traffic flow.
Disclosure of Invention
The invention mainly solves the technical problem of providing an automatic intersection safety risk assessment method, addressing the difficulty of automatically and dynamically obtaining the safety and efficiency level of an intersection.
The method consists of three parts: an image detection and trajectory tracking part, a trajectory data preprocessing part, and an intersection safety and efficiency analysis part.
The image detection and trajectory tracking part is divided into two sub-parts. The image detection sub-part is based on the YOLOv3 algorithm and is trained on a combination of manually calibrated data and the Pascal VOC data set; the trained model is applied to extract vehicle positions in each frame.
The trajectory tracking sub-part tracks the detected vehicle positions with a Kalman-filter-based multi-target tracking algorithm and assigns each vehicle an integer number determined by the order in which the vehicle appears in the video.
The trajectory data preprocessing part has two sub-parts. The first sub-part cleans the data, including noise filtering, semantic segmentation and region-of-interest discovery; the second sub-part clusters the trajectories, including trajectory clustering, cluster merging and broken-trajectory processing.
The intersection safety and efficiency analysis part mainly extracts data related to intersection efficiency and safety: counting intersection conflict points evaluates safety, and counting stagnation-point time evaluates intersection passing efficiency.
The intersection safety risk assessment method provided by the invention is realized on the basis of the above parts and comprises the following steps:
The image detection part adopts the YOLOv3 deep convolutional neural network algorithm, which belongs to the known art, to extract the position of the target vehicle in an image. Vehicle detection has high requirements on accuracy and real-time performance. Most existing two-stage algorithms such as Faster R-CNN require region proposals on the image; although accuracy can be guaranteed, the detection speed is often only about 3 fps and cannot meet the requirement of video detection. Earlier single-stage algorithms such as YOLOv2 and SSD are fast but insufficiently accurate and poor at detecting small targets, and thus cannot meet the requirement of high-viewpoint vehicle trajectory tracking. The YOLOv3 algorithm effectively balances speed and accuracy for deep convolutional neural networks in the field of object detection.
The YOLOv3 algorithm adopts a 106-layer network built on Darknet-53, which is composed of stacked residual units; the network structure is shown in FIG. 1. Prediction is performed at three scales (predictions one, two and three in FIG. 1), with local feature interaction within each scale realized by convolution kernels. The basic idea is that the feature extraction network extracts features from the input image to obtain a feature map of a certain size, for example 13×13; the input image is correspondingly divided into 13×13 grid cells. If the center coordinate of an object in the annotation data (ground truth) falls into a certain grid cell, that grid cell predicts the object. Each grid cell predicts a fixed number of bounding boxes, and only the bounding box with the largest intersection over union (IoU) with the ground truth is used to predict the object.
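As a minimal illustration of the grid-cell assignment just described (not part of the patent; the 416×416 input size and the anchor values are the standard YOLOv3 defaults assumed here, and all function names are illustrative), the following Python sketch maps a ground-truth box to its 13×13 grid cell and selects the anchor with the largest IoU:
```python
import numpy as np

def iou_wh(wh_a, wh_b):
    """IoU of two boxes that share the same center, given as (w, h)."""
    inter = min(wh_a[0], wh_b[0]) * min(wh_a[1], wh_b[1])
    union = wh_a[0] * wh_a[1] + wh_b[0] * wh_b[1] - inter
    return inter / union

def assign_ground_truth(box, anchors, grid=13, img_size=416):
    """box = (cx, cy, w, h) in pixels; returns (cell_x, cell_y, best_anchor)."""
    cx, cy, w, h = box
    cell = img_size / grid                           # side length of one grid cell
    cell_x, cell_y = int(cx // cell), int(cy // cell)
    ious = [iou_wh((w, h), a) for a in anchors]      # the anchor with the largest IoU predicts the object
    return cell_x, cell_y, int(np.argmax(ious))

# Example: a 100x80 px vehicle centered at (200, 150) in a 416x416 input
anchors = [(116, 90), (156, 198), (373, 326)]        # YOLOv3 anchors for the 13x13 scale
print(assign_ground_truth((200, 150, 100, 80), anchors))   # -> (6, 4, 0)
```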
Step 1: acquire training data and train the vehicle identification model. Two types of samples are used for combined training: one is a data set captured from intersection high-point video and manually calibrated, and the other is the public Pascal VOC 2007+2012 data set. In the experiment, 170 frames are extracted from intersection high-point video and manually calibrated in Pascal VOC format using the labelImg software (FIG. 2) to establish an object detection data set. To improve the generality of detection, this set is combined with the Pascal VOC data set, from which the samples carrying bus and car labels are screened. The calibrated set and the screened Pascal VOC set are mixed and split into training and test sets at a ratio of 8:2, converted into the format accepted by the YOLOv3 network, and fed into the network for training to obtain the network weights. The video is then fed into the YOLOv3 network together with the obtained weights to obtain the position of each vehicle in the video image; the position is represented by the vehicle's bounding box, given by the coordinates of its upper-left corner together with its width and height.
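The patent does not give the conversion code; the following sketch shows one plausible way, under assumed file locations and the car/bus label screening described above, to convert Pascal VOC annotations to YOLO-style labels and split the mixed data 8:2:
```python
import glob, random
import xml.etree.ElementTree as ET

CLASSES = ["car", "bus"]  # only the vehicle labels kept from the Pascal VOC data

def voc_to_yolo(xml_path):
    """Convert one Pascal VOC annotation file to YOLO lines: cls cx cy w h (normalised)."""
    root = ET.parse(xml_path).getroot()
    W = float(root.find("size/width").text)
    H = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASSES:
            continue                                  # screen out non-vehicle labels
        b = obj.find("bndbox")
        x1, y1 = float(b.find("xmin").text), float(b.find("ymin").text)
        x2, y2 = float(b.find("xmax").text), float(b.find("ymax").text)
        cx, cy = (x1 + x2) / 2 / W, (y1 + y2) / 2 / H
        w, h = (x2 - x1) / W, (y2 - y1) / H
        lines.append(f"{CLASSES.index(name)} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    return lines

# Mix the calibrated intersection frames with the screened VOC samples and split 8:2
samples = glob.glob("annotations/*.xml")              # hypothetical annotation folder
random.shuffle(samples)
split = int(0.8 * len(samples))
train, test = samples[:split], samples[split:]
```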
Step 2: vehicle detection: input the video image to be analyzed into the trained vehicle identification model to obtain all vehicle positions in each frame of the image;
Step 3: vehicle target tracking: number the vehicle positions obtained in step 2 in order of appearance using Kalman-filter-based multi-target tracking and the Hungarian algorithm; the positions of each vehicle in all frames and the corresponding numbers form the trajectory data of each vehicle. The algorithm is described as follows:
First, a matching cost matrix is calculated from the intersection over union between the detected bounding boxes and the bounding boxes predicted by Kalman filtering; the Kalman-filter multi-target tracking algorithm belongs to the known art. When the matching cost between a predicted bounding box and some detected bounding box in the frame reaches a certain threshold (for example, an intersection over union greater than 0.5), the predicted bounding box can be considered tracked, and the Hungarian algorithm, which also belongs to the known art, matches each predicted bounding box with the detected bounding box of minimum matching cost. When a predicted bounding box and a detected bounding box are matched, the detected bounding box updates the state (is given the number); if a predicted bounding box is not matched with any detected bounding box, the parameters of the predicted target bounding box continue to be predicted with a linear velocity model; if no detected bounding box is matched with a predicted bounding box within a certain time (set to 25 frames for this traffic detection scene), the predicted bounding box is deleted. Finally, the vehicle bounding box information with vehicle numbers in all frames is output.
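A minimal sketch of the IoU-cost matching step, using the 0.5 IoU threshold mentioned above (function names are illustrative, not from the patent); a tracker left unmatched for more than 25 consecutive frames would be deleted:
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match(predicted, detected, iou_threshold=0.5):
    """Match Kalman-predicted boxes to detected boxes; return accepted pairs and unmatched predictions."""
    cost = np.array([[1.0 - iou(p, d) for d in detected] for p in predicted])
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm on the cost matrix
    matches, unmatched_pred = [], set(range(len(predicted)))
    for r, c in zip(rows, cols):
        if 1.0 - cost[r, c] >= iou_threshold:         # accept only sufficiently overlapping pairs
            matches.append((r, c))
            unmatched_pred.discard(r)
    return matches, sorted(unmatched_pred)

pred = [(100, 100, 150, 180)]                          # one predicted box
det = [(102, 98, 151, 182), (300, 300, 340, 360)]      # two detections in the current frame
print(match(pred, det))                                # -> ([(0, 0)], [])
```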
Step 4: trajectory data preprocessing: clean the trajectory data of all vehicles obtained in step 3 and cluster the cleaned data to obtain path clusters and vehicle stagnation-point marks, where a path is a trajectory from a certain trajectory start point to a certain trajectory end point. After the path clusters are generated, a sliding-window method is used to average all points of all trajectories in each path cluster in sequence, thereby generating several path-cluster representatives for the intersection to be analyzed. The similarity between each vehicle trajectory and all path-cluster representatives is calculated; the path-cluster representative with the highest similarity gives the label of that vehicle trajectory.
the input to step 4 is the trajectory acquired by the target tracking algorithm of step 3.
First, data cleaning is performed on the trajectory data of all vehicles obtained in step 3, which includes: extracting trajectory-point direction features to eliminate noise in the trajectory data; delimiting the range of the intersection crosswalk lines by manual calibration and eliminating trajectories that do not cross the crosswalk lines; and filtering abnormal trajectories whose start or end points are not in the corresponding regions of interest.
Noise filtering by extracting trajectory-point direction features means counting two kinds of statistics. First, obviously abnormal trajectories are found by counting the number of points contained in each trajectory and the sum, mean and variance of the position differences between points; for example, trajectories lasting less than 1 second (tracking errors) or trajectories whose start and end points are at the same position (stopped vehicles). Second, the direction from each trajectory point to the next is counted, and abnormal trajectories are found from points whose direction changes greatly.
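One possible reading of this noise filter, with an assumed frame rate and heading-jump threshold (neither is specified in the patent), is sketched below:
```python
import numpy as np

FPS = 25  # assumed video frame rate

def is_noise(track, min_seconds=1.0, max_heading_jump=np.pi / 2):
    """track: sequence of (x, y) per frame. Flags tracking errors, stopped vehicles and direction jumps."""
    pts = np.asarray(track, dtype=float)
    if len(pts) < min_seconds * FPS:
        return True                                   # shorter than 1 s: likely a tracking error
    if np.linalg.norm(pts[-1] - pts[0]) < 1e-6:
        return True                                   # start and end coincide: a stopped vehicle
    steps = np.diff(pts, axis=0)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    jumps = np.abs(np.diff(np.unwrap(headings)))
    return bool(np.any(jumps > max_heading_jump))     # implausibly sharp direction change

print(is_noise([(0, 0)] * 10))                        # too short -> True
```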
The regions of interest include trajectory start regions (the first point of a trajectory), trajectory end regions (the last point of a trajectory) and vehicle stagnation-point regions; they are obtained by clustering the vehicle trajectories processed in the previous two data-cleaning steps with DBSCAN. The regions of interest are shown in FIG. 5: the A/B/C/D entrance regions are the clustering result of trajectory start points, the E/F/G/H/I exit regions are the clustering result of trajectory end points, and the J/K/L/M stagnation regions are the clustering result of vehicle stagnation points.
The DBSCAN clustering algorithm is applied to the cleaned trajectory data to obtain path clusters and vehicle stagnation-point marks. Path clusters are obtained according to the start and end points of the trajectories; a path is a trajectory from a certain start point to a certain end point, and each path is simply numbered by the entrance and exit regions it connects (for example, a path entering from entrance region A and leaving from exit region E is path 1). A path cluster is the set of trajectories following that path behavior pattern. A vehicle stagnation point is obtained by clustering a single trajectory with DBSCAN; the stagnation-point information is not used in the remainder of step 4 and is reserved for the online mode of step 5.
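A minimal sketch of obtaining entrance/exit regions and per-trajectory path labels with DBSCAN, under assumed eps and min_samples values (the patent does not state its clustering parameters):
```python
import numpy as np
from sklearn.cluster import DBSCAN

def label_paths(tracks, eps=30.0, min_samples=5):
    """tracks: list of (N_i, 2) point arrays. Returns (entry cluster, exit cluster) per track."""
    starts = np.array([t[0] for t in tracks])
    ends = np.array([t[-1] for t in tracks])
    entry = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(starts)
    exit_ = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(ends)
    # A path is "entrance region -> exit region"; label -1 marks points outside any region of interest.
    return list(zip(entry, exit_))
```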
After the path clusters are generated, a path-cluster representative is extracted for each path cluster. The extraction first compresses the trajectories in the path cluster, then averages each point of all compressed trajectories in sequence with a sliding-window method to generate a new trajectory, which is considered to represent all trajectories in the path cluster. Trajectory compression simplifies the calculation: a small number of points are extracted from the trajectory and expressed on a grid, i.e., the trajectory points are assigned to the squares of a grid laid over the image. The sliding-window method is well known in the art.
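A sketch of the path-cluster representative extraction, assuming trajectories are first resampled to a fixed number of points (a simplification of the grid compression described above; window size is an assumption):
```python
import numpy as np

def cluster_representative(tracks, n_points=50, window=5):
    """Resample every track of one path cluster to n_points, average point-wise,
    then smooth the averaged track with a sliding window."""
    resampled = []
    for t in tracks:
        t = np.asarray(t, dtype=float)
        idx = np.linspace(0, len(t) - 1, n_points)
        resampled.append(np.stack(
            [np.interp(idx, np.arange(len(t)), t[:, k]) for k in range(2)], axis=1))
    mean_track = np.mean(resampled, axis=0)            # point-wise average over the cluster
    kernel = np.ones(window) / window
    return np.stack(
        [np.convolve(mean_track[:, k], kernel, mode="same") for k in range(2)], axis=1)
```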
Finally, trajectory similarity is calculated, i.e., the similarity between every trajectory and all path-cluster representatives, and the corresponding path number label is added to the trajectory data of each vehicle. The similarity is the LCSS distance between trajectories: when the LCSS distance between a trajectory and a certain path-cluster representative is smaller than its distance to the other representatives, the trajectory is considered to belong to that path cluster.
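A minimal LCSS sketch, using an assumed matching radius eps in pixels (the patent does not state its matching threshold); the LCSS distance is one minus this similarity, and a trajectory is assigned to the representative with the smallest distance:
```python
import numpy as np

def lcss_similarity(a, b, eps=20.0):
    """Longest-common-subsequence similarity of two (x, y) tracks:
    points match when closer than eps pixels. Returns a value in [0, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if np.linalg.norm(a[i - 1] - b[j - 1]) < eps:
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    return dp[n, m] / min(n, m)
```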
Step 5: extract data related to intersection efficiency and safety. Intersection efficiency includes the intersection traffic flow, the intersection passing time and the total intersection delay. The intersection traffic flow is obtained by counting the number of trajectories in each path cluster, which belongs to the known art. The intersection passing time is obtained by computing, for the trajectories in each path cluster, the difference between the time of entering and the time of leaving the intersection, which belongs to the known art. The total intersection delay is obtained by counting stagnation time, which in turn is obtained from the vehicle stagnation-point marks produced by the clustering in step 4. The specific parameter extraction results are shown in FIGS. 7(a)-(d).
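A sketch of these efficiency statistics, assuming per-track path labels and stagnation-point marks from step 4 and an assumed frame rate (all names are illustrative):
```python
import numpy as np
from collections import Counter

def efficiency_stats(tracks, path_labels, stop_flags, fps=25):
    """tracks: list of per-frame (x, y) arrays; path_labels: path id per track;
    stop_flags: per-track boolean array marking frames spent at a stagnation point."""
    flow = Counter(path_labels)                        # number of tracks per path = traffic flow
    passing_time = {p: [] for p in flow}
    delay = {p: 0.0 for p in flow}
    for t, p, s in zip(tracks, path_labels, stop_flags):
        passing_time[p].append(len(t) / fps)           # leave time minus enter time
        delay[p] += float(np.sum(s)) / fps             # frames spent at stagnation points -> delay
    mean_passing = {p: float(np.mean(v)) for p, v in passing_time.items()}
    return flow, mean_passing, delay
```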
A conflict event is defined as follows: when the trajectories of two vehicles do not belong to the same path cluster and their distance in the image is smaller than a certain threshold, the post-encroachment time is predicted; when the post-encroachment time between the two vehicles is smaller than a threshold A, with 0 s ≤ A ≤ 2 s, the situation is counted as one conflict event.
Specifically, extracting vehicle conflict events means searching the newly acquired trajectories over time. If the trajectory labels of two vehicles differ, the two vehicles are considered not to belong to the same path cluster; if in addition their distance in the image is smaller than a certain threshold (set manually according to the actual situation), the post-encroachment time (PET) is predicted, and if the PET between the two vehicles falls within (0, 2) seconds, the situation is counted as one conflict event. The trajectory numbers, times and positions of the two vehicles involved and the PET value are recorded in a database; finally the time, position and frequency of the conflict events are counted, and the event frequency per unit time is presented as a heat map, with the result shown in FIG. 8.
The post-encroachment time (PET) is defined as the time between the moment the leading vehicle leaves the conflict zone and the moment the following vehicle reaches it. The smaller the PET, the closer the conflicting vehicles, the more dangerous the conflict event and the higher the collision probability; the PET concept and its calculation belong to the known art.
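A minimal sketch of the PET calculation on the trajectory grid, with hypothetical occupancy data for illustration:
```python
def post_encroachment_time(times_first, times_second, conflict_cell):
    """PET = time the second vehicle reaches the conflict cell
           - time the first vehicle leaves it.
    times_*: dict mapping grid cell -> (enter_time, leave_time) in seconds."""
    leave_first = times_first[conflict_cell][1]
    arrive_second = times_second[conflict_cell][0]
    return arrive_second - leave_first

# Example: vehicle A occupies cell (7, 4) from t=12.0 s to 12.6 s, vehicle B arrives at 13.4 s
a = {(7, 4): (12.0, 12.6)}
b = {(7, 4): (13.4, 13.9)}
print(post_encroachment_time(a, b, (7, 4)))   # ~0.8 s -> counted as a conflict (0 s <= PET <= 2 s)
```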
Predicting the post-encroachment time means compressing the newly acquired trajectory onto the grid using the trajectory compression of step 4, counting the time required for the following vehicle to reach the grid cell of the leading vehicle, and predicting the required time with an XGBoost model, as shown in FIG. 9.
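A sketch of the arrival-time regression with XGBoost; the feature set, hyperparameters and the synthetic data below are assumptions for illustration only, not the patent's trained model:
```python
import numpy as np
import xgboost as xgb

# Hypothetical features per sample: current cell index, speed, heading, distance to the
# leading vehicle's cell; target: time (s) needed to reach that cell.
X = np.random.rand(200, 4)
y = 0.5 + 2.0 * X[:, 3] + 0.1 * np.random.randn(200)   # synthetic data for illustration only

model = xgb.XGBRegressor(n_estimators=100, max_depth=4, learning_rate=0.1)
model.fit(X, y)
predicted_arrival = model.predict(X[:1])               # predicted arrival time for a new track state
```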
Calculating the collision probability means mapping the post-encroachment time obtained from the trajectory conflict prediction onto the (0, 1) interval by a nonlinear transformation and using it as the collision probability; the collision probability of each path per unit time is then counted.
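The patent does not specify the nonlinear transformation; one assumed example is an exponential decay of PET, sketched below:
```python
import numpy as np

def collision_probability(pet_seconds, scale=1.0):
    """Map a post-encroachment time to (0, 1): small PET -> probability close to 1.
    The exponential form is an assumed example; the patent only requires a
    monotone nonlinear mapping onto the (0, 1) interval."""
    pet = max(float(pet_seconds), 0.0)
    return float(np.exp(-pet / scale))

print(collision_probability(0.8))   # PET = 0.8 s -> about 0.45
```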
Advantageous effects
The deep convolutional neural network image detection method and the target tracking method have high real-time performance: the image detection speed reaches 23 fps and the target tracking speed reaches 110 fps, meeting the real-time requirement of intersection safety risk assessment, so the method can be effectively applied in practice.
Compared with existing manual statistics, the extraction of intersection efficiency and safety data greatly saves manpower and material resources and can quickly evaluate intersection efficiency and output the corresponding evaluation indices.
The high-point monitoring video adopted by the invention is widely applicable: only one camera is needed for a single intersection, and for the traffic authority the intersection safety risk assessment method only needs access to the data of existing cameras, without adding new monitoring equipment.
Drawings
FIG. 1 is a schematic structural diagram of the YOLOv3 deep convolutional neural network;
FIG. 2 is a schematic illustration of manual calibration of vehicle data using labelImg software;
FIG. 3 is a schematic diagram of the detection of images using the YOLOv3 algorithm;
FIG. 4 is a schematic diagram of a vehicle tracking using a Kalman filtering algorithm;
FIG. 5 is a graph showing the results of clustering access points;
FIG. 6 is a diagram illustrating the results of path clustering;
FIGS. 7(a) - (d) are schematic diagrams of final output intersection safety efficiency analysis;
FIGS. 8(a1) - (c3) are examples of the spatial distribution of the last output collision events;
FIG. 9 is a schematic diagram of an example PET temporal prediction;
Detailed Description
The following describes preferred embodiments of the present invention in detail with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the scope of protection of the invention is defined more clearly.
In the embodiment, a desktop computer with an Intel Core i7-3720QM CPU, 16 GB of memory and a GTX 1080 Ti graphics card is used as the training and testing machine. The input of the embodiment is high-point monitoring video of an intersection in Xi'an. The specific steps are as follows:
step 1: as shown in fig. 1, the labelImg software is used to manually calibrate the vehicles at the intersection, so as to obtain the manually calibrated vehicle target frame shown in fig. 2.
The data set from the manual calibration is merged with the vehicle portion of the past VOC data set.
And sending the combined data set into a YO L Ov3 network for training to obtain network parameters.
Step 2: and detecting the video by using the obtained network parameters as the network parameters during detection to obtain a boundary frame of the position of the vehicle in the picture, wherein the square frame around the vehicle in the figure 3 is the boundary frame, and the information of the time, the position, the width and the like of the boundary frame is stored in the data.txt in a corresponding format.
And step 3: and inputting the data.txt into sort.py, tracking the boundary frame of the vehicle to obtain the boundary frame of the vehicle with the vehicle number, classifying the information in different frames according to the vehicle number by using pandasdata.py, and storing the classified information in the trajall.pkl file, wherein the numbers on the boundary frame around the vehicle in the figure 4 are the vehicle numbers obtained after tracking.
And 4, performing feature extraction in the step 4 data cleaning on the track by using a track preprocessing. py file, removing an abnormal track, performing semantic segmentation on the track in the step 4 data cleaning by using a track noise filtering. py, and finding an in-out point interest region by using the track roi. py, wherein an A/B/C/D region is a track starting point clustering result, an E/F/G/H/I region is a track ending point clustering result, and a J/K/L/M region is a vehicle stagnation point clustering result.
And (4) clustering the track after the abnormal track is eliminated by using a track clustering part according to the step 4, wherein 22 paths of the intersection are clustered to represent in the graph 6, all the complete tracks are classified into the 22 paths, and the abnormal track is independently proposed and eliminated.
And 5: conflict points were extracted and statistically plotted as shown in fig. 7 using trajectoryconflictpoints. py, and the distribution of stagnation points within 5 minutes after 10:00 days at the intersection is shown in fig. 7(a) - (d). The dead time on each path is obtained simultaneously with obtaining the dead point information. And (5) calculating the conflict points according to the step 5 to obtain the statistics of the number of conflicts in the intersection in fig. 8, wherein the statistics are represented by thermodynamic diagrams, fig. 8(a1) - (a3) are thermodynamic diagrams of the number of conflicts at the intersection, and fig. 8(b1) - (b3) and fig. 8 (c1) - (c3) are thermodynamic diagrams of the number of conflicts at other intersections generated in the same way.
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent structural or process modifications made using the contents of the specification and drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present invention.

Claims (9)

1. An intersection safety risk assessment method based on a deep convolutional neural network and intersection behavior characteristic modeling is characterized by comprising the following steps:
step 1: acquiring training data and training a vehicle identification model;
step 2: vehicle detection value extraction: inputting a video image to be analyzed into a trained vehicle identification model to obtain all vehicle positions in each frame of image;
step 3: vehicle target tracking: numbering the vehicle positions obtained in step 2 according to the time order of the frames, wherein the positions of each vehicle in all frames and the corresponding numbers form the trajectory data of each vehicle;
step 4: trajectory data preprocessing: performing data cleaning on the trajectory data of all vehicles obtained in step 3 and clustering the cleaned data to obtain path clusters and vehicle stagnation-point marks, wherein a path is a trajectory from a certain trajectory start point to a certain trajectory end point; after the path clusters are generated, averaging all points of all trajectories in each path cluster in sequence with a sliding-window method, thereby generating several path-cluster representatives of the intersection to be analyzed; and calculating the similarity between each vehicle trajectory and all the path-cluster representatives, wherein the path-cluster representative with the highest similarity gives the label of that vehicle trajectory;
step 5: extracting intersection efficiency and safety data: the efficiency data comprise the intersection traffic flow, the passing time and the total delay time, wherein the total delay time refers to the sum of the stagnation time at all stagnation points on one path; the safety data refer to the collision probability.
2. The safety risk assessment method of claim 1, wherein the vehicle identification model is a YOLOv3 deep convolutional network model.
3. The safety risk assessment method of claim 1, wherein: in step 3, the vehicle target is tracked using Kalman filtering and the Hungarian algorithm.
4. The safety risk assessment method of claim 1, wherein: the data cleaning comprises: extracting trajectory-point direction features to eliminate noise in the trajectory data; delimiting the range of the intersection crosswalk lines by manual calibration and eliminating trajectory data that do not cross the crosswalk lines; and filtering abnormal trajectories whose start or end points are not in the corresponding regions of interest.
5. The safety risk assessment method of claim 4, wherein: the region-of-interest extraction refers to clustering the trajectory start points, trajectory end points and stagnation points with DBSCAN to respectively obtain trajectory start regions, trajectory end regions and trajectory stagnation regions.
6. The safety risk assessment method according to claim 1, wherein the similarity in step 4 is the LCSS distance and the clustering method is the DBSCAN clustering algorithm.
7. The safety risk assessment method of claim 1, wherein: the total delay calculation in step 5 is realized by counting the number of stagnation points, the stagnation points being extracted by clustering; the calculation of conflicts inside the intersection is realized by predicting the post-encroachment time and is presented as a heat map.
8. The safety risk assessment method of claim 1, wherein: the method for calculating the collision probability specifically comprises: counting the number of conflict events for each path-cluster representative, thereby obtaining the collision probability of each path cluster per unit time; a conflict event means that when the trajectories of two vehicles do not belong to the same path cluster and their distance in the image is smaller than a certain threshold, the post-encroachment time is predicted, and when the post-encroachment time between the two vehicles is smaller than a threshold A, with 0 s ≤ A ≤ 2 s, the event is counted as one conflict event.
9. The safety risk assessment method of claim 8, wherein: the post-encroachment time is predicted using an XGBoost model.
CN202010248715.4A 2020-04-01 2020-04-01 Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model Active CN111462488B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010248715.4A CN111462488B (en) 2020-04-01 2020-04-01 Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010248715.4A CN111462488B (en) 2020-04-01 2020-04-01 Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model

Publications (2)

Publication Number Publication Date
CN111462488A true CN111462488A (en) 2020-07-28
CN111462488B CN111462488B (en) 2021-09-10

Family

ID=71681156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010248715.4A Active CN111462488B (en) 2020-04-01 2020-04-01 Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model

Country Status (1)

Country Link
CN (1) CN111462488B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489456A (en) * 2020-12-01 2021-03-12 山东交通学院 Signal lamp regulation and control method and system based on urban trunk line vehicle queuing length
CN112836586A (en) * 2021-01-06 2021-05-25 北京嘀嘀无限科技发展有限公司 Intersection information determination method, system and device
CN113033443A (en) * 2021-03-31 2021-06-25 同济大学 Unmanned aerial vehicle-based automatic pedestrian crossing facility whole road network checking method
CN113033893A (en) * 2021-03-23 2021-06-25 同济大学 Method for predicting running time of automatic guided vehicle of automatic container terminal
CN113283653A (en) * 2021-05-27 2021-08-20 大连海事大学 Ship track prediction method based on machine learning and AIS data
CN113313957A (en) * 2021-05-30 2021-08-27 南京林业大学 Signal lamp-free intersection vehicle scheduling method based on enhanced Dijkstra algorithm
CN113375685A (en) * 2021-03-31 2021-09-10 福建工程学院 Urban intersection center identification and intersection turning rule extraction method based on sub-track intersection
CN114299456A (en) * 2021-12-24 2022-04-08 北京航空航天大学 Intersection pedestrian crossing risk assessment method based on real-time track detection
CN114781791A (en) * 2022-03-11 2022-07-22 山东高速建设管理集团有限公司 High-speed service area risk identification method based on holographic sensing data
CN114822044A (en) * 2022-06-29 2022-07-29 山东金宇信息科技集团有限公司 Driving safety early warning method and device based on tunnel
CN114926984A (en) * 2022-05-17 2022-08-19 华南理工大学 Real-time traffic conflict collection and road safety evaluation method
CN116153078A (en) * 2023-04-14 2023-05-23 健鼎(无锡)电子有限公司 Road safety assessment method and device based on millimeter wave radar and storage medium
CN117012055A (en) * 2023-08-14 2023-11-07 河南新电信息科技有限公司 Intelligent early warning system and method for right dead zone of dangerous goods transport vehicle
CN117708260A (en) * 2024-02-02 2024-03-15 中宬建设管理有限公司 Smart city data linkage updating method and system
CN118015844A (en) * 2024-04-10 2024-05-10 成都航空职业技术学院 Traffic dynamic control method and system based on deep learning network
CN118171781A (en) * 2024-05-13 2024-06-11 东南大学 Expressway motor vehicle accident intelligent detection method and system based on real-time track prediction

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292911A (en) * 2017-05-23 2017-10-24 南京邮电大学 A kind of multi-object tracking method merged based on multi-model with data correlation
CN108710879A (en) * 2018-04-20 2018-10-26 江苏大学 A kind of pedestrian candidate region generation method based on Grid Clustering Algorithm
CN108734103A (en) * 2018-04-20 2018-11-02 复旦大学 The detection of moving target and tracking in satellite video
CN109102678A (en) * 2018-08-30 2018-12-28 青岛联合创智科技有限公司 A kind of drowned behavioral value method of fusion UWB indoor positioning and video object detection and tracking technique
US10300851B1 (en) * 2018-10-04 2019-05-28 StradVision, Inc. Method for warning vehicle of risk of lane change and alarm device using the same
KR101972055B1 (en) * 2018-11-30 2019-04-24 한국가스안전공사 CNN based Workers and Risky Facilities Detection System on Infrared Thermal Image
CN110634291A (en) * 2019-09-17 2019-12-31 武汉中海庭数据技术有限公司 High-precision map topology automatic construction method and system based on crowdsourcing data
CN110660220A (en) * 2019-10-08 2020-01-07 五邑大学 Urban rail train priority distribution method and system
CN110852243A (en) * 2019-11-06 2020-02-28 中国人民解放军战略支援部队信息工程大学 Improved YOLOv 3-based road intersection detection method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yin Zhuo: "Vehicle Trajectory Learning Technology for Intersection Conflict Analysis", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489456A (en) * 2020-12-01 2021-03-12 山东交通学院 Signal lamp regulation and control method and system based on urban trunk line vehicle queuing length
CN112836586A (en) * 2021-01-06 2021-05-25 北京嘀嘀无限科技发展有限公司 Intersection information determination method, system and device
CN113033893A (en) * 2021-03-23 2021-06-25 同济大学 Method for predicting running time of automatic guided vehicle of automatic container terminal
CN113033443B (en) * 2021-03-31 2022-10-14 同济大学 Unmanned aerial vehicle-based automatic pedestrian crossing facility whole road network checking method
CN113375685A (en) * 2021-03-31 2021-09-10 福建工程学院 Urban intersection center identification and intersection turning rule extraction method based on sub-track intersection
CN113033443A (en) * 2021-03-31 2021-06-25 同济大学 Unmanned aerial vehicle-based automatic pedestrian crossing facility whole road network checking method
CN113283653B (en) * 2021-05-27 2024-03-26 大连海事大学 Ship track prediction method based on machine learning and AIS data
CN113283653A (en) * 2021-05-27 2021-08-20 大连海事大学 Ship track prediction method based on machine learning and AIS data
CN113313957A (en) * 2021-05-30 2021-08-27 南京林业大学 Signal lamp-free intersection vehicle scheduling method based on enhanced Dijkstra algorithm
CN114299456A (en) * 2021-12-24 2022-04-08 北京航空航天大学 Intersection pedestrian crossing risk assessment method based on real-time track detection
CN114299456B (en) * 2021-12-24 2024-05-31 北京航空航天大学 Intersection pedestrian crossing risk assessment method based on real-time track detection
CN114781791A (en) * 2022-03-11 2022-07-22 山东高速建设管理集团有限公司 High-speed service area risk identification method based on holographic sensing data
CN114781791B (en) * 2022-03-11 2023-09-29 山东高速建设管理集团有限公司 High-speed service area risk identification method based on holographic perception data
CN114926984A (en) * 2022-05-17 2022-08-19 华南理工大学 Real-time traffic conflict collection and road safety evaluation method
CN114926984B (en) * 2022-05-17 2024-06-25 华南理工大学 Real-time traffic conflict collection and road safety evaluation method
CN114822044A (en) * 2022-06-29 2022-07-29 山东金宇信息科技集团有限公司 Driving safety early warning method and device based on tunnel
CN114822044B (en) * 2022-06-29 2022-09-09 山东金宇信息科技集团有限公司 Driving safety early warning method and device based on tunnel
CN116153078A (en) * 2023-04-14 2023-05-23 健鼎(无锡)电子有限公司 Road safety assessment method and device based on millimeter wave radar and storage medium
CN117012055A (en) * 2023-08-14 2023-11-07 河南新电信息科技有限公司 Intelligent early warning system and method for right dead zone of dangerous goods transport vehicle
CN117708260A (en) * 2024-02-02 2024-03-15 中宬建设管理有限公司 Smart city data linkage updating method and system
CN117708260B (en) * 2024-02-02 2024-04-26 中宬建设管理有限公司 Smart city data linkage updating method and system
CN118015844A (en) * 2024-04-10 2024-05-10 成都航空职业技术学院 Traffic dynamic control method and system based on deep learning network
CN118015844B (en) * 2024-04-10 2024-06-11 成都航空职业技术学院 Traffic dynamic control method and system based on deep learning network
CN118171781A (en) * 2024-05-13 2024-06-11 东南大学 Expressway motor vehicle accident intelligent detection method and system based on real-time track prediction

Also Published As

Publication number Publication date
CN111462488B (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN111462488B (en) Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model
CN112508392B (en) Dynamic evaluation method for traffic conflict risk of hidden danger road section of mountain area double-lane highway
CN101729872B (en) Video monitoring image based method for automatically distinguishing traffic states of roads
CN114170580B (en) Expressway-oriented abnormal event detection method
CN110738857B (en) Vehicle violation evidence obtaining method, device and equipment
US20120093398A1 (en) System and method for multi-agent event detection and recognition
CN113111838B (en) Behavior recognition method and device, equipment and storage medium
CN113553916B (en) Orbit dangerous area obstacle detection method based on convolutional neural network
CN112434566B (en) Passenger flow statistics method and device, electronic equipment and storage medium
CN113155173A (en) Perception performance evaluation method and device, electronic device and storage medium
CN114612860A (en) Computer vision-based passenger flow identification and prediction method in rail transit station
CN113450573A (en) Traffic monitoring method and traffic monitoring system based on unmanned aerial vehicle image recognition
CN112070051A (en) Pruning compression-based fatigue driving rapid detection method
CN103679214A (en) Vehicle detection method based on online area estimation and multi-feature decision fusion
CN117456482B (en) Abnormal event identification method and system for traffic monitoring scene
Gupta et al. Computer vision based animal collision avoidance framework for autonomous vehicles
CN111695545A (en) Single-lane reverse driving detection method based on multi-target tracking
CN109389177B (en) Tunnel vehicle re-identification method based on cooperative cascade forest
CN107562900A (en) Method and system for analyzing airfield runway foreign matter based on big data mode
Zheng et al. Toward real-time congestion measurement of passenger flow on platform screen doors based on surveillance videos analysis
CN116229396B (en) High-speed pavement disease identification and warning method
CN117116046A (en) Traffic common event detection method based on single-stage target detection
CN112562315A (en) Method, terminal and storage medium for acquiring traffic flow information
CN115072510A (en) Elevator car passenger intelligent identification and analysis method and system based on door switch
Liu et al. Metro passenger flow statistics based on YOLOv3

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Xu Tian

Inventor after: Deng Hanyue

Inventor after: Wang Zuo

Inventor after: Chen Yangzhou

Inventor after: Lu Jiacheng

Inventor after: Yin Zhuo

Inventor before: Chen Yangzhou

Inventor before: Lu Jiacheng

Inventor before: Xu Tian

Inventor before: Yin Zhuo

Inventor before: Wang Zuo

Inventor before: Deng Hanyue

CB03 Change of inventor or designer information
TR01 Transfer of patent right

Effective date of registration: 20220523

Address after: 710075 No. two, No. 63, hi tech Zone, Shaanxi, Xi'an

Patentee after: CCCC FIRST HIGHWAY CONSULTANTS Co.,Ltd.

Patentee after: Beijing University of Technology

Address before: 100124 No. 100 Chaoyang District Ping Tian Park, Beijing

Patentee before: Beijing University of Technology

Patentee before: China First Highway Survey and Design Institute Co., Ltd.

TR01 Transfer of patent right