CN113762473B - Complex scene driving risk prediction method based on multi-time space diagram - Google Patents

Complex scene driving risk prediction method based on multi-time space diagram Download PDF

Info

Publication number
CN113762473B
CN113762473B CN202110979340.3A CN202110979340A CN113762473B CN 113762473 B CN113762473 B CN 113762473B CN 202110979340 A CN202110979340 A CN 202110979340A CN 113762473 B CN113762473 B CN 113762473B
Authority
CN
China
Prior art keywords
vehicle
time
space
collision
risk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110979340.3A
Other languages
Chinese (zh)
Other versions
CN113762473A (en
Inventor
熊晓夏
蔡英凤
高翔
王海
刘擎超
沈钰杰
陈龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202110979340.3A priority Critical patent/CN113762473B/en
Publication of CN113762473A publication Critical patent/CN113762473A/en
Application granted granted Critical
Publication of CN113762473B publication Critical patent/CN113762473B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a complex scene driving risk prediction method based on a multi-space-time diagram, which is used for constructing a multi-space-time diagram describing different space-time relations among vehicles in a peripheral multi-vehicle complex scene, inputting the fused multi-space-time diagram into a diagram convolutional neural network and extracting multi-space-time diagram feature vectors of the scene; taking the multi-time space diagram feature vector sequences extracted at each moment in the observation period as multi-step input features of the long-period memory neural network, and training multi-time space sequence samples in different risk states to obtain a driving risk prediction model; and acquiring motion information of the self-vehicle and the surrounding multi-vehicle in real time, extracting a multi-space-time diagram feature vector sequence among all vehicles in an observation period in real time, inputting the driving risk prediction model, and finally obtaining a multi-workshop collision risk state prediction result at a future moment. The method is suitable for the problem of multi-workshop collision risk prediction in the peripheral multi-vehicle complex scene, and improves the accuracy and practicability of the prediction model.

Description

Complex scene driving risk prediction method based on multi-time space diagram
Technical Field
The invention relates to the technical field of traffic safety evaluation and intelligent traffic system active safety, in particular to a complex scene driving risk prediction method based on a multi-time space diagram.
Background
The collision risk estimation algorithm is one of the cores of the intelligent automobile active safety technology, and the performance of the risk estimation algorithm directly determines the timeliness and reliability of system early warning or active intervention, so that the collision risk estimation algorithm is also one of the important contents of the current researches of automobile manufacturers and scientific researchers. The traditional collision risk estimation algorithm quantifies the risk levels of different driving conditions mainly through indexes representing initial collision states, namely, calculates risk estimation indexes based on two workshop motion parameters at the initial time of a scene, and divides the risk degree of the scene by comparing the calculated values of the risk estimation indexes with preset values representing different risk levels. The indexes are mainly classified into three categories, namely a distance-based parameter, a time-based parameter and a deceleration-based parameter, such as critical early warning distance, critical braking distance, collision time and reciprocal thereof, workshop time, post-intrusion time, deceleration required by collision avoidance and the like. However, these indexes are generally only suitable for predicting collision risk of two workshops in a specific longitudinal or transverse scene, and generally do not consider the problem of multi-workshop collision risk in a peripheral multi-vehicle complex scene which needs to be faced in the actual driving process, so that the method is not beneficial to application of the method in the actual complex driving scene. Meanwhile, since the collision accident is a small probability event, the multi-workshop collision risk sample is difficult to acquire and divide the state, so that the multi-workshop collision risk prediction model is more difficult to construct. Therefore, it is necessary to study a driving risk prediction method capable of fully considering the characteristics of complex surrounding multi-vehicle interaction scenes.
Disclosure of Invention
In view of the above, the invention provides a complex scene driving risk prediction method based on a multi-space-time diagram.
The present invention achieves the above technical object by the following means.
A complex scene driving risk prediction method based on a multi-space-time diagram comprises the following steps:
s1, taking a self vehicle and a peripheral multi-vehicle at a certain moment as nodes in a graph, taking vehicle positions, speeds and accelerations as node characteristics, constructing a node adjacent matrix reflecting different time-space relations among vehicles, obtaining a multi-time-space graph describing a complex scene of the peripheral multi-vehicle by utilizing the nodes, the node characteristics and the node adjacent matrix in the graph, inputting the fused multi-time-space graph into a graph convolutional neural network, and extracting a multi-time-space graph characteristic vector of the scene; taking the multi-time space diagram feature vector sequences extracted at each moment in the observation period as multi-step input features of the long-period memory neural network, and training multi-time space sequence samples in different risk states to obtain a driving risk prediction model;
s2, acquiring motion information of the self-vehicle and the surrounding multi-vehicle in real time, extracting a multi-space-time diagram feature vector sequence among all vehicles in an observation period in real time, inputting the driving risk prediction model, and finally obtaining a multi-workshop collision risk state prediction result at a future moment.
In the above technical solution, the node adjacency matrix includes a time relation matrix based on collision time TTC and a space relation matrix based on parking sight distance SSD.
In the above technical solution, the time relation matrix based on the collision time TTC specifically includes:
1) Computing multiple cars C 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Time to collision TTC between i-j
Wherein: TTC (TTC) xi-j TTC for longitudinal time to collision yi-j For transverse collision time d xi-j Vehicle C with longitudinal axis x i Centroid relative to vehicle C j Relative longitudinal distance of centroid, d yi-j Vehicle C with transverse axis y direction i Centroid relative to vehicle C j Relative lateral distance of centroid, V xi And V yi For vehicle C i Absolute longitudinal speed, absolute transverse speed, L i And L j Respectively C i 、C j Vehicle length W of (2) i And W is j Respectively C i 、C j Epsilon of the vehicle width of (2) x And epsilon y Is a random bias term;
2) When TTC is i-j When the collision time is less than 0, the corresponding collision time index TTC 'is set' i-j Taking infinity:
3) Based on TTC' i-j Constructing multiple vehicles C 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Time relation index TD between i-j
Wherein TD xi-j As a longitudinal time relation index, TD yi-j As the transverse time relation index, sigma x Sum sigma y Is a normalized constant for the longitudinal and transverse directions;
4) By time relation index TD xi-j 、TD yi-j Respectively constructing a longitudinal time relation adjacency matrix A for weight Tx And a transverse time relation adjacency matrix A Ty
Wherein said A Tx 、A Ty Are symmetrical matrices, and the main diagonal element is 0, and if the vehicle C i And C j The positions of the adjacent matrixes are not adjacent in the multi-scene, and the elements in the corresponding adjacent matrixes are 0;
5) Obtaining a multi-vehicle scene { O: C 0 ,C 1 ,...,C 6 Time relationship undirected graph G Tx And G Ty
G Tx =(V Tx ,E Tx )G Ty =(V Ty ,E Ty )
Wherein the weights of the two time relation undirected graphs are respectively A Tx And A Ty Node V Tx =V Ty ={C 0 ,C 1 ,...,C 6 Edge E Tx =E Ty ={C 0 C 1 ,C 0 C 2 ,C 0 C 3 ,C 0 C 4 ,C 0 C 5 ,C 0 C 6 ,C 1 C 3 ,C 1 C 5 ,C 2 C 4 ,C 2 C 6 ,C 3 C 4 ,C 5 C 6 }。
In the above technical solution, the spatial relationship matrix based on the parking sight distance SSD specifically includes:
1) Computing multiple cars C 0 ,C 1 ,...,C 6 Each vehicle C i Is (are) parking sight distance SSD i
Wherein SSD xi SSD for longitudinal parking line of sight yi For transverse parking sight distance, V xi And V yi Respectively vehicle C i Absolute longitudinal speed, absolute transverse speed, f x And f y The longitudinal friction coefficient and the transverse friction coefficient are respectively, g is the gravity acceleration, t r Reaction time for the driver;
2) According to multiple vehicles C 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Relative position relation between two vehicles is calculated to calculate collision distance SDI of two vehicles i-j
Wherein SDI xi-j SDI for longitudinal impact distance yi-j Is the transverse collision distance;
3) Based on the collision distance SDI i-j Constructing multiple vehicles C 0 ,C 1 ,…,C 6 Medium adjacent vehicle C i And C j Index SD of spatial relationship between i-j
Wherein SD is xi-j SD as a longitudinal spatial relationship index yi-j Is a transverse spatial relationship index;
4) By spatial relationship index SD xi-j 、SD yi-j For the weight, respectively constructing a longitudinal space relation adjacency matrix A Sx And a lateral spatial relationship adjacency matrix A Sy
Wherein said A Sx And A Sy Are symmetrical matrices, and the main diagonal element is 0, and if the vehicle C i And C j The positions of the adjacent matrixes are not adjacent in the multi-scene, and the elements in the corresponding adjacent matrixes are 0;
5) Pair A Sx And A Sy Performing standardization treatment
A′ Sx =D(A Sx ) -1 A Sx
A′ Sy =D(A Sy ) -1 A Sy
Wherein D (·) is a normalization coefficient function, and:
wherein A is i,j The elements corresponding to the ith row and the jth column in the matrix A;
6) Obtaining a multi-vehicle scene { O: C 0 ,C 1 ,...,C 6 Spatial relationship undirected graph G Sx And G Sy
G Sx =(V Sx ,E Sx )
G Sy =(V Sy ,E Sy )
Wherein the weights of the two space relation undirected graphs are respectively A' Sx And A' Sy Node V Sx =V Sy ={C 0 ,C 1 ,...,C 6 Edge E Sx =E Sy ={C 0 C 1 ,C 0 C 2 ,C 0 C 3 ,C 0 C 4 ,C 0 C 5 ,C 0 C 6 ,C 1 C 3 ,C 1 C 5 ,C 2 C 4 ,C 2 C 6 ,C 3 C 4 ,C 5 C 6 }。
In the above technical scheme, the process of fusing the multi-time space diagram is as follows: will G Tx 、G Ty 、G Sx And G Sy According to the weight vector (W Tx ,W Ty ,W Sx ,W Sy ) And (5) carrying out weighted fusion:
A f =W Tx A Tx +W Ty A Ty +W Sx A′ Sx +W Sy A′ Sy
wherein W is Tx ,W Ty ,W Sx ,W Sy E (0, 1) is a self-learning space-time diagram weight coefficient and satisfies the relation W Tx +W Ty +W Sx +W Sy =1; finally obtaining the multi-vehicle scene { O: C ] 0 ,C 1 ,...,C 6 Mixed spatiotemporal relationship graph G f
G f =(V f ,E f )
Wherein: the weight of the mixed space-time relation diagram is A f Node V f ={C 0 ,C 1 ,...,C 6 The edge ef= { C0C1, C0C2, C0C3, C0C4, C0C5, C0C6, C1C3, C1C5, C2C4, C2C6, C3C4, C5C6}, the node characteristic vector is F i =(d xi-0 ,d yi-0 ,V xi ,V yi ,a xi ,a yi ) I=0, 1, 6; combining the node features to obtain a mixed space-time relation graph G f Is a feature matrix of the initial graph of (a):
in the above technical solution, the feature propagation rule of each layer of network graph of the graph convolution neural network is:
H l+1 =σ[D(A f ) -1 A f H l W l +H l B l ]
wherein: sigma is a Sigmoid activation function; h l The layer 0 graph features are initial graph features; w (W) l A first layer convolution weight matrix which can be self-learned; b (B) l Is a self-learning weight matrix;
the k layers of features and the initial graph features are connected in series to obtain a multi-vehicle scene { O: C } 0 ,C 1 ,...,C 6 Multi-space diagram joint feature vector: h= (H0, H 1 ,...,H k )。
In the above technical solution, the acquiring process of the multi-vehicle space sequence sample input in different risk states includes:
(1) Based on historical driving track data of the vehicle, acquiring transverse relative distance, longitudinal relative distance, transverse speed, longitudinal speed, transverse acceleration and longitudinal acceleration of each vehicle at each sampling point in a time window of T multiplied by 0.1s until an observation time point T; t represents the number of sampling points obtained by sampling the historical driving track data of the vehicle in a time window of T multiplied by 0.1s until an observation time point T;
(2) Calculating multi-space-time diagram combined feature vectors of multi-vehicle scenes at all sampling points, and recording H t For the multi-space-time diagram joint feature vector of a multi-vehicle scene at a sampling point t, a time sequence X is obtained t ={H t-T+1 ,H t-T+2 ,...,H t The input is as a multi-car space sequence sample.
In the above technical solution, the method for determining the output state of the multi-vehicle space sequence sample in different risk states includes:
(1) Acquiring a peripheral vehicle C within a time window of T' ×0.1s in the future from an observation time point T based on vehicle history travel locus data i With the bicycle C 0 Longitudinal collision distance index set { SDI } xi-0 (t+1),SDI xi-0 (t+2),...,,SDI xi-0 (t+t') }, i=1, 2, …,6; t 'represents the number of sampling points obtained by sampling the historical driving track data of the vehicle in a time window of T' x 0.1s in the future from the observation time point T;
(2) Computing vehicle C 0 With vehicle C i Collision probability index in future T'. Times.0.1 s time window
Wherein R is 0 (i) Is SDI in time window xi-0 The number of observation points is less than or equal to 0;
(3) Computing vehicle C 0 With vehicle C i Collision severity index over future T'. Times.0.1 s time window
Wherein SDI is max (i) Is SDI in time window xi-0 Observed value with maximum absolute value under condition of less than or equal to 0, SDI cri Is SDI xi-0 Maximum possible value of absolute value under the condition of less than or equal to 0;
(4) By the own vehicle and the surrounding vehicles C i Is the basic event of collision probability and severity of collision of the own vehicle with the surrounding individual vehicles C i The collision risk between the two events is an intermediate event, the total collision risk of the own vehicle and the surrounding multiple scenes is taken as a top event, and a system risk index of the total collision risk top event in a time window of T' x 0.1s in the future is calculated:
wherein eta i =P c (i)·S c (i) For future T' ×0.1s time window, self-vehicle and surrounding vehicle C i The product of the collision probability and the collision severity of (2), a constant k i =0 or 1, taking different values according to the lane keeping or changing behavior of the own vehicle in the time window;
(5) Comparing the system risk index value with a risk threshold value of a corresponding type to determine the output state of a multi-workshop collision risk sample; the magnitude of the risk state threshold can be determined and adjusted according to the sorting percentage of the system risk index values.
In the above technical solution, the driving risk prediction model is a driving risk GCN-LSTM prediction model based on a multi-time space diagram, specifically, the number of layers of the graph rolling neural network, the number of hidden layers of the LSTM, the number of hidden layer nodes, the random inactivation Dropout rate, the L2 regularization coefficient, and the learning rate attenuation coefficient are taken as super parameters of the prediction model, the convolution weight matrix in the graph rolling neural network, the input weight matrix and the bias term in the LSTM memory unit are taken as training parameters of the prediction model, and the driving risk GCN-LSTM prediction model based on the multi-time space diagram is finally trained based on multi-time space sequence samples in different risk states.
The beneficial effects of the invention are as follows:
(1) The method simulates the dynamic relationship between vehicles by using the connecting edges between adjacent nodes in the graph structure, expresses the dynamic relationship by using the weighted adjacent matrix reflecting different time-space relationships between the vehicles, is suitable for the problem of multi-workshop collision risk prediction in a peripheral multi-vehicle complex scene, and improves the accuracy and the practicability of the prediction model.
(2) According to the invention, the index-based multi-vehicle time and space relation index is respectively constructed through the collision time TTC and the parking sight distance SSD, so that the collision risk can be continuously expressed in a multi-vehicle interaction complex scene, and the method for measuring the real-time risk of driving is enriched.
(3) According to the method, the collision probability and the collision severity are comprehensively considered, a multi-workshop collision risk sample state division method based on historical vehicle driving track samples is constructed, and the problem that a multi-workshop collision risk prediction model sample is difficult to construct is solved.
Drawings
FIG. 1 is a flow chart of prediction of driving risk in a complex scene based on a multi-space diagram according to the invention;
FIG. 2 is a conceptual diagram of a multi-vehicle scene and diagram configuration according to the present invention;
fig. 3 is a schematic diagram of a risk fault tree of the multi-scene system according to the present invention.
Detailed Description
The invention will be further described with reference to the drawings and the specific embodiments, but the scope of the invention is not limited thereto.
As shown in fig. 1, the method for predicting the running risk of the complex scene based on the multi-space diagram specifically comprises the following steps:
step one, training an offline risk prediction model
Taking a self vehicle and a peripheral multi-vehicle at a certain moment as nodes in a graph, taking the vehicle position, the speed and the acceleration as node characteristics, constructing a node adjacent matrix reflecting different time-space relations among vehicles, obtaining a multi-time space graph describing a complex scene of the peripheral multi-vehicle by utilizing the nodes, the node characteristics and the node adjacent matrix in the graph, inputting the fused multi-time space graph into a graph convolutional neural network, and extracting a multi-time space graph characteristic vector of the scene; and taking the multi-time space diagram feature vector sequences extracted at each moment in the observation period as multi-step input features of the long-period memory neural network, and training multi-time space sequence samples in different risk states to obtain a driving risk prediction model. Specifically:
and (1) constructing a node adjacency matrix reflecting different time-space relations among vehicles by taking the own vehicles and the peripheral multi-vehicles at a certain moment as nodes in the graph and taking the vehicle positions, the speeds and the accelerations as node characteristics, so as to obtain a multi-time-space graph describing the complex scene of the peripheral multi-vehicles. The step (1) is realized by the following substeps:
step 1-1): research vehicle (bicycle C) 0 ) Vehicle C driving on middle lane of three-lane expressway 0 Front vehicle C adjacent to lane where own vehicle is located 1 Adjacent rear vehicle C in lane of own vehicle 2 Phase(s)Nearest preceding vehicle C on adjacent left lane 3 Nearest rear vehicle C on adjacent left lane 4 Nearest preceding vehicle C on adjacent right lane 5 Nearest rear vehicle C on adjacent right lane 6 The common composition is a multi-car scene { O: C 0 ,C 1 ,...,C 6 See fig. 2. In order to ensure the standardization of the subsequent calculation, if C is not detected in the detection range of the radar sensor of the vehicle 1 ,...,C 6 One of the vehicles C i Setting a vehicle C at a corresponding position 0 Virtual vehicle C having the same motion state (speed and acceleration) i '. For example, if the vehicle is located in the lane from the vehicle tail d s (furthest distance detectable by the vehicle radar sensor) no adjacent rear vehicle C is detected 2 The vehicle is separated from the vehicle tail d in the lane where the vehicle is located s Department sets up and own C 0 Virtual vehicle C with identical motion states 2 ' so that the total number of vehicles researching the multi-vehicle scene is kept consistent.
Step 1-2): in the multi-vehicle scenario of step 1-1), each vehicle in motion is in an interactive state, which can be described by a graphic structure: each vehicle C in the multi-vehicle scene in the step 1-1) 0 ,C 1 ,...,C 6 Is a node in the graph, and uses the longitudinal and transverse positions, the longitudinal and transverse speeds and the longitudinal and transverse accelerations F of the vehicle i ={d xi-0 ,d yi-0 ,V xi ,V yi ,a xi ,a yi Is node C i Features { d }, where xi-0 ,d yi-0 Is vehicle C i Centroid relative bicycle C 0 Relative longitudinal and lateral distances of centroid (for vehicle C 0 D is then x0-0 =0,d y0-0 =0),{V xi ,V yi ,a xi ,a yi Is vehicle C i Absolute longitudinal speed, absolute lateral speed, absolute longitudinal acceleration, absolute lateral acceleration relative to the ground, wherein the longitudinal axis x positive direction is along the vehicle running direction, and the transverse axis y positive direction is 90 degrees anticlockwise along the longitudinal axis positive direction. Simulating dynamic relationships between vehicles using edges between adjacent nodes, which are difficult to do by a spatial or temporal distance indexLine complete expression, building weighted adjacency matrix reflecting different time-space relationships between vehicles by the following sub-steps:
step 1-2-1): a time relation matrix based on time to collision TTC (in seconds) is constructed. Said step 1-2-1) is realized in particular by the following sub-steps:
step 1-2-1-1): computing multiple cars C 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Time to collision TTC between i-j (including time to longitudinal collision TTC) xi-j And transverse time to collision TTC yi-j ):
Wherein d is xi-j And d yi-j Vehicle C with vertical axis x and horizontal axis y i Centroid relative to vehicle C j Relative longitudinal and transverse distances of centroid, V xi And V yi For vehicle C i Absolute longitudinal and absolute transverse speeds, L i And L j Respectively C i And C j Vehicle length W of (2) i And W is j Respectively C i And C j Epsilon of the vehicle width of (2) x And epsilon y For the random bias term, a small positive value (e.g., 1 e-6) can be taken, ensuring that the denominator is not 0 when the two vehicle speeds are equal.
Step 1-2-1-2): TTC according to the meaning of collision time i-j And (5) performing correction. When TTC is i-j When the collision time index is less than 0, the two vehicles have no collision risk in the time dimension, and the corresponding collision time index TTC 'can be realized' i-j Taking infinity:
step 1-2-1-3): based on corrected TTC' i-j Constructing a plurality of vehicles C in an exponential function form 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Time relation index TD between i-j (including the longitudinal time relation index TD xi-j And transverse time relation index TD yi-j ) The value is set to be (0, 1)]Between, and TTC' i-j Smaller represents smaller time distance between two vehicles and higher time relation index (higher degree of mutual influence of two vehicles in time dimension):
wherein sigma x Sum sigma y For the longitudinal and lateral normalization constants, the sum of the average reaction time of the driver and the average effective time of the vehicle brake can be taken.
Step 1-2-1-4): by time relation index TD xi-j Building a longitudinal time relation adjacency matrix A for weight Tx
Due to TD xi-j =TD xj-i ,A Tx Is a symmetrical matrix; a is that Tx The main diagonal element is 0 (i=j), and the 0 element of the remaining positions represents the vehicle C i And C j The positions of the two vehicles in the multi-vehicle scene in the step 1-1) are not adjacent, and the two vehicles have no direct influence relationship; and due to TD xi-j ∈(0,1],A Tx All element values E [0,1 ]]. And similarly obtaining a transverse time relation adjacent matrix A Ty
And A is a Tx Similarly, A Ty Is also a symmetric matrix and has all element values E [0,1 ]]。
Step 1-2-1-5): obtaining a multi-vehicle scene { O: C 0 ,C 1 ,...,C 6 Time relationship undirected graph G Tx And G Ty
G Tx =(V Tx ,E Tx )
G Ty =(V Ty ,E Ty )
Wherein the weights of the two pictures are A respectively Tx And A Ty Node V Tx =V Ty ={C 0 ,C 1 ,...,C 6 Edge E Tx =E Ty ={C 0 C 1 ,C 0 C 2 ,C 0 C 3 ,C 0 C 4 ,C 0 C 5 ,C 0 C 6 ,C 1 C 3 ,C 1 C 5 ,C 2 C 4 ,C 2 C 6 ,C 3 C 4 ,C 5 C 6 } (see fig. 2).
Step 1-2-2): a spatial relationship matrix based on a parking sight distance SSD (in meters) is constructed. Said step 1-2-2) is realized in particular by the following sub-steps:
step 1-2-2-1): computing multiple cars C 0 ,C 1 ,...,C 6 Each vehicle C i Is (are) parking sight distance SSD i (including longitudinal parking stadia SSD xi Lateral parking stadia SSD yi ):
Wherein V is xi And V yi For step 1-2) the vehicle C i Absolute longitudinal speed and absolute transverse speed (in km/h); f (f) x And f y The longitudinal friction coefficient and the transverse friction coefficient are determined according to the vehicle speed and the road surface condition; g is gravity acceleration (9.8 m/s) 2 );t r For driver reaction time, 2.5s is generally desirable (including judgment time 1.5 seconds and running time 1.0 seconds).
Step 1-2-2-2): according to multiple vehicles C 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Relative position relation between two vehicles is calculated to calculate collision distance SDI of two vehicles i-j (including the longitudinal collision distance SDI) xi-j And a lateral collision distance SDI yi-j ):
Step 1-2-2-3): based on the collision distance SDI i-j Constructing a plurality of vehicles C by adopting exponential function and reciprocal form 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Index SD of spatial relationship between i-j (including longitudinal spatial relationship index SD) xi-j And a transverse spatial relationship index SD yi-j ) Make its value greater than zero and SDI i-j Smaller represents smaller spatial distance between two vehicles and higher spatial relationship index of two vehicles (higher degree of mutual influence of two vehicles in spatial dimension):
step 1-2-2-4): by spatial relationship index SD xi-j For weight, building longitudinal space relation adjacency matrix A Sx
Due to SD xi-j =SD xj-i ,A Sx Is a symmetrical matrix; a is that Sx The main diagonal element is 0 (i=j), and the 0 element of the remaining positions represents the vehicle C i And C j In the step 1-1), the positions of the multiple scenes are not adjacent, and the two vehicles have no direct influence relationship. And the same is done to obtain a lateral spatial relationship adjacency matrix A Sy
And A is a Sx Similarly, A Sy Also a symmetric matrix.
Step 1-2-2-5): to ensure that each space-time relation adjacency matrix is not influenced by dimension, A is as follows Sx And A Sy Normalized to have all element values at 0,1]The time relation adjacency matrix A described in the step 1-2-1) Tx And A Ty Keep consistent):
A′ Sx =D(A Sx ) -1 A Sx
A′ Sy =D(A Sy ) -1 A Sy
wherein D (·) is a normalization coefficient function:
wherein A is i,j Is the element corresponding to the ith row and the jth column in the matrix A.
Step 1-2-2-6): obtaining a multi-vehicle scene { O: C 0 ,C 1 ,...,C 6 Spatial relationship undirected graph G Sx And G Sy
G Sx =(V Sx ,E Sx )
G Sy =(V Sy ,E Sy )
Wherein the weights of the two pictures are A 'respectively' Sx And A' Sy Node V Sx =V Sy ={C 0 ,C 1 ,...,C 6 Edge E Sx =E Sy ={C 0 C 1 ,C 0 C 2 ,C 0 C 3 ,C 0 C 4 ,C 0 C 5 ,C 0 C 6 ,C 1 C 3 ,C 1 C 5 ,C 2 C 4 ,C 2 C 6 ,C 3 C 4 ,C 5 C 6 }。
And (2) carrying out weighted fusion on the multi-space diagrams of the peripheral multi-vehicle complex scene, inputting the multi-space diagrams into a diagram convolutional neural network GCN, and extracting multi-space diagram feature vectors of the multi-vehicle scene. The step (2) is realized by the following substeps:
step 2-1): a multi-space-time diagram G describing the peripheral multi-vehicle complex scene in the step (1) Tx 、G Ty 、G Sx And G Sy According to the weight vector (W Tx ,W Ty ,W Sx ,W Sy ) And (5) carrying out weighted fusion:
A f =W Tx A Tx +W Ty A Ty +W Sx A′ Sx +W Sy A′ Sy
wherein W is Tx ,W Ty ,W Sx ,W Sy E (0, 1) is a self-learning space-time diagram weight coefficient and satisfies the relation W Tx +W Ty +W Sx +W Sy =1. Finally obtaining the multi-vehicle scene { O: C ] 0 ,C 1 ,...,C 6 Mixed spatiotemporal relationship graph G f
G f =(V f ,E f )
Wherein the weight of the graph is A f Node V f ={C 0 ,C 1 ,...,C 6 Edge E f ={C 0 C 1 ,C 0 C 2 ,C 0 C 3 ,C 0 C 4 ,C 0 C 5 ,C 0 C 6 ,C 1 C 3 ,C 1 C 5 ,C 2 C 4 ,C 2 C 6 ,C 3 C 4 ,C 5 C 6 The node characteristic vector is F) in the step 1-2) i =(d xi-0 ,d yi-0 ,V xi ,V yi ,a xi ,a yi ) I=0, 1,..6. Combining the node features to obtain a graph G f Is a feature matrix of the initial graph of (a):
step 2-2): based on the mixed space-time relation graph G of the step 2-1) f Constructing a graph convolutional neural network GCN, wherein the network commonly uses k layers of graph convolutional layers to extract multi-space graph features, and the feature propagation rule of each layer of network graph is as follows:
H l+1 =σ[D(A f ) -1 A f H l W l +H l B l ]
wherein sigma is a Sigmoid activation function; d (·) is a normalized coefficient function; h l As a first layer diagram feature, a 0 layer diagram feature is an initial diagram feature: h 0 =F;W l Is a self-learning layer-1 convolution weight matrix. In order to distinguish the difference in importance of the own vehicle node feature and the surrounding multi-vehicle node feature in future risk prediction, a central node (i.e., own vehicle C 0 ) Setting a self-learning weight matrix B l . The finally obtained k layers of features and the initial graph features are connected in series to obtain a multi-vehicle scene { O: C } 0 ,C 1 ,...,C 6 Multi-space-time diagram joint feature vector h= (H) 0 ,H 1 ,...,H k )
And (3) taking the multi-time space diagram feature vector sequences extracted at all times in the observation period as multi-step input features of the long-period memory neural network, and obtaining a driving risk prediction model by training multi-time space sequence samples with different risk levels. The step (3) is realized by the following substeps:
step 3-1): based on vehicle history driving track sample data acquired by unmanned aerial vehicle recognition, constructing a multi-vehicle scene taking a vehicle as a center according to the steps 1-1), sampling the multi-vehicle scene with the frequency of 10Hz (0.1 s) to acquire a multi-vehicle scene { O: C } 0 ,C 1 ,...,C 6 The transverse relative distance, longitudinal relative distance, transverse speed, longitudinal speed, transverse acceleration and longitudinal acceleration of each vehicle at each sampling point in the multi-vehicle scene are extracted according to the methods of the steps (1) and (2) to obtain a multi-space-time diagram joint feature vector H of the multi-vehicle scene at the sampling point t t
Step 3-2): the T multiplied by 0.1s is taken as the time window duration, and the multi-space-time diagram joint feature vector contained in the time window of the observation time point T is aggregated into a time sequence X t And input it as a sample of the long-short term memory neural network LSTM at time point t:
X t ={H t-T+1 ,H t-T+2 ,...,H t }
wherein T represents that the historical driving track sample data of the vehicle is sampled at the frequency of 10Hz in the past T multiplied by 0.1s time window until the observation time point T, and a total can be obtainedAnd sampling points.
And outputting a multi-workshop collision risk state (namely, outputting a sample of the LSTM at the time point T) within the time of T' x 0.1s from the time point T by adopting a Dense full connection form after the LSTM hidden layer, wherein the state comprises four types of high risk, medium and high risk, medium risk and low risk. In order to prevent the model from being over fitted, a random inactivated Dropout is added between the LSTM hidden layer and the Dense full-connection output layer, and the LSTM is matched with an input connection weight L2 regularization and learning rate attenuation technology to improve the over-fitting condition. The multi-workshop collision risk sample state dividing method is specifically realized through the following substeps:
step 3-2-1): based on the historical driving track data of the vehicle, the method is as described in the step 1)Longitudinal collision distance SDI x Calculation method acquiring surrounding vehicle C within time window of T' x 0.1s in future from observation time point T i With the bicycle C 0 Longitudinal collision distance index set { SDI } xi-0 (t+1),SDI xi-0 (t+2),...,,SDI xi-0 (t+t'), i=1, 2,..6. Wherein T 'represents sampling of the vehicle history running track sample data at a frequency of 10Hz within a time window of T' x 0.1s in the future from the observation time point T, and a total is obtainedAnd sampling points.
Step 3-2-2): computing vehicle C 0 With vehicle C i Collision probability index over a future T' ×0.1s time window:
wherein R is 0 (i) Is SDI in time window xi-0 The number of observation points is less than or equal to 0.
Step 3-2-3): computing vehicle C 0 With vehicle C i Impact severity index over a future T' ×0.1s time window:
wherein SDI is max (i) Is SDI in time window xi-0 Observed value with maximum absolute value under condition of less than or equal to 0, SDI cri Is SDI xi-0 The maximum possible value of the absolute value is less than or equal to 0. Can make self-vehicle C 0 And adjacent vehicle C i The speed of the vehicle is respectively the lowest value and the highest value of the speed limit of the expressway, such as when the vehicle C 0 Take V at the front x0 =60km/h,V xi =120 km/h, and then SDI is calculated as described in step 1-2-2) cri
Step 3-2-4): referring to FIG. 3, based on the fault tree principle, the own vehicle and surrounding vehicles C i Collision probability and (i=1,.,. 6)The severity of the collision is the basic event, so that the own vehicle and all surrounding vehicles C i (i=1,..6) as an intermediate event, calculating a system risk index for an event on top of the total collision risk (i.e., system failure) within a future T' ×0.1s time window, with the total collision risk of the own vehicle and surrounding multiple scenes as a top event:
wherein eta i =P c (i)·S c (i) For future T' ×0.1s time window, self-vehicle and surrounding vehicle C i Is the product of the collision probability and the collision severity; constant k i =0 or 1, taking different values depending on the lane keeping or changing behavior of the vehicle within the time window: if the own vehicle takes the action of keeping the lane, k 1 =k 2 =1,k 3 =k 4 =...=k 6 =0; if the own vehicle adopts left lane changing behavior, k is 1 =k 2 =k 3 =k 4 =1,k 5 =k 6 =0; if the own vehicle adopts the right lane changing behavior, k is 1 =k 2 =k 5 =k 6 =1,k 3 =k 4 =0. The judging standard of the lane keeping or changing behavior of the vehicle is as follows: the transverse distance between the center of mass of the self-vehicle and the central line of the road is set as x at the starting moment of a time window 0 At the end of the time window x 1 The lane width is w, if |x 1 -x 0 The level is less than or equal to w/3, and the own vehicle keeps the lane; if x 1 -x 0 The vehicle is turned left by the way of < -w/3; if x 1 -x 0 And if the ratio is more than w/3, the lane is changed from the right to the left.
Step 3-2-5): the calculated system risk index value is compared with a corresponding type risk threshold value (the lane keeping type threshold value is Th respectively k1 、Th k2 、Th k3 The method comprises the steps of carrying out a first treatment on the surface of the Lane change type thresholds are Th c1 、Th c2 、Th c3 ) A comparison is made to determine the sample output state. If the future T' ×0.1s time window is from the vehicle to keep the lane: when (when)The multi-workshop collision risk sample state is high risk; when->The multi-workshop collision risk sample state is medium and high risk; when->The multi-workshop collision risk sample state is a risk in a stroke; when->The multi-shop collision risk sample state is low risk. Lane change from vehicle if future T' ×0.1s time window: when->The multi-workshop collision risk sample state is high risk; when->The multi-workshop collision risk sample state is medium and high risk; when->The multi-workshop collision risk sample state is a risk in a stroke; when->The multi-shop collision risk sample state is low risk.
The magnitude of the risk status threshold may be determined and adjusted based on the percent of the sample system risk index value ordering. Dividing vehicle historical driving track data obtained by unmanned aerial vehicle aerial photo recognition according to the methods in steps 3-2-1) to 3-2-4), and calculating to obtain system risk index values of N samplesAccording to two types of lane keeping and lane changing of the own vehicleRespectively sorting, wherein the system risk index values corresponding to 5%, 15% and 25% before sorting of the two types are respectively selected as the high risk, medium and high risk and medium risk threshold { Th } k1 ,Th k2 ,Th k3 } (keep lane type) and { Th c1 ,Th c2 ,Th c3 -lane change type).
Step 3-3): dividing vehicle historical driving track data obtained by unmanned aerial vehicle aerial recognition according to the method in the step 3-2) by taking 0.2s as a time window moving step length, and finally obtaining risk prediction input and output samples of N' multiple vehicle scenes.
Step 3-4): taking the number of layers of the graph roll-up neural network GCN in the step (2) and the number of hidden layer nodes, the random inactivation Dropout rate, the L2 regularization coefficient and the learning rate attenuation coefficient of the LSTM in the step (3-2) as model super-parameters, taking a convolution weight matrix in the GCN and an input weight matrix and an offset term in an LSTM memory unit as model training parameters, and finally training to obtain a running risk GCN-LSTM prediction model based on a multi-space-time graph based on N' input and output samples obtained in the step (3-3).
Step two, real-time prediction of online risk model
And under the environment of the Internet of vehicles, the motion information of the self-vehicle and the surrounding multi-vehicle is obtained in real time through interaction of the sensors and the V2X, the multi-time space diagram feature vector sequence among all vehicles in the observation period is extracted in real time through the multi-time space diagram feature vector extraction method in the step one, the driving risk prediction model in the step one is input, and finally the multi-workshop collision risk state prediction result at the future moment is obtained. Specifically, the second step comprises the following steps:
step (1), facing the step one of the multi-vehicle scenes { O: C ] 0 ,C 1 ,...,C 6 The motion information of the self-vehicle and the surrounding multi-vehicle is obtained in real time through interaction of the sensors and the V2X under the environment of the Internet of vehicles, and comprises longitudinal and transverse positions, longitudinal and transverse speeds and longitudinal and transverse accelerations F of each vehicle as shown in the step one i ={d xi-0 ,d yi-0 ,V xi ,V yi ,a xi ,a yi -and vehicle length L i And width W i ,i=0,1,...,6。
And (2) extracting multi-space-time diagram feature vectors H corresponding to T (sampling frequency is 0.1 s) in T sampling time points T-T+1, T-T+2 in a time window of a predicted time point T by the multi-space-time diagram feature vector extraction method in the step (2) t-T+1 ,H t-T+2 ,...,H t Inputting the multi-workshop collision risk GCN-LSTM prediction model obtained by training in the step one, and outputting multi-workshop collision risk states (four states of high risk, medium and high risk, medium risk and low risk) within a future T' x 0.1s prediction duration in real time.
The examples are preferred embodiments of the present invention, but the present invention is not limited to the above-described embodiments, and any obvious modifications, substitutions or variations that can be made by one skilled in the art without departing from the spirit of the present invention are within the scope of the present invention.

Claims (7)

1. A complex scene driving risk prediction method based on a multi-space-time diagram is characterized by comprising the following steps:
s1, taking a self vehicle and a peripheral multi-vehicle at a certain moment as nodes in a graph, taking vehicle positions, speeds and accelerations as node characteristics, constructing a node adjacent matrix reflecting different time-space relations among vehicles, obtaining a multi-time-space graph describing a complex scene of the peripheral multi-vehicle by utilizing the nodes, the node characteristics and the node adjacent matrix in the graph, inputting the fused multi-time-space graph into a graph convolutional neural network, and extracting a multi-time-space graph characteristic vector of the scene; taking the multi-time space diagram feature vector sequences extracted at each moment in the observation period as multi-step input features of the long-period memory neural network, and training multi-time space sequence samples in different risk states to obtain a driving risk prediction model;
s2, acquiring motion information of the self-vehicle and the surrounding multi-vehicle in real time, extracting a multi-space-time diagram feature vector sequence among all vehicles in an observation period in real time, inputting the driving risk prediction model, and finally obtaining a multi-workshop collision risk state prediction result at a future moment;
the node adjacency matrix comprises a time relation matrix based on collision time TTC and a space relation matrix based on parking sight distance SSD;
the time relation matrix based on the collision time TTC specifically comprises the following steps:
1) Computing multiple cars C 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Time to collision TTC between i-j
Wherein: TTC (TTC) xi-j TTC for longitudinal time to collision yi-j For transverse collision time d xi-j Vehicle C with longitudinal axis x i Centroid relative to vehicle C j Relative longitudinal distance of centroid, d yi-j Vehicle C with transverse axis y direction i Centroid relative to vehicle C j Relative lateral distance of centroid, V xi And V yi For vehicle C i Absolute longitudinal speed, absolute transverse speed, L i And L j Respectively C i 、C j Vehicle length W of (2) i And W is j Respectively C i 、C j Epsilon of the vehicle width of (2) x And epsilon y Is a random bias term;
2) When TTC is i-j When the collision time is less than 0, the corresponding collision time index TTC 'is set' i-j Taking infinity:
3) Based on TTC' i-j Constructing multiple vehicles C 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Time relation index TD between i-j
Wherein TD xi-j As a longitudinal time relation index, TD yi-j As the transverse time relation index, sigma x Sum sigma y Is a normalized constant for the longitudinal and transverse directions;
4) By time relation index TD xi-j 、TD yi-j Respectively constructing a longitudinal time relation adjacency matrix A for weight Tx And a transverse time relation adjacency matrix A Ty
Wherein said A Tx 、A Ty Are symmetrical matrices, and the main diagonal element is 0, and if the vehicle C i And C j The positions of the adjacent matrixes are not adjacent in the multi-scene, and the elements in the corresponding adjacent matrixes are 0;
5) Obtaining a multi-vehicle scene { O: C 0 ,C 1 ,...,C 6 Time relationship undirected graph G Tx And G Ty
G Tx =(V Tx ,E Tx )
G Ty =(V Ty ,E Ty )
Wherein the weights of the two time relation undirected graphs are respectively A Tx And A Ty Node V Tx =V Ty ={C 0 ,C 1 ,...,C 6 Edge E Tx =E Ty ={C 0 C 1 ,C 0 C 2 ,C 0 C 3 ,C 0 C 4 ,C 0 C 5 ,C 0 C 6 ,C 1 C 3 ,C 1 C 5 ,C 2 C 4 ,C 2 C 6 ,C 3 C 4 ,C 5 C 6 }。
2. The complex scene driving risk prediction method based on the multi-space-time diagram according to claim 1, wherein the spatial relation matrix based on the parking sight distance SSD specifically comprises:
1) Computing multiple cars C 0 ,C 1 ,...,C 6 Each vehicle C i Is (are) parking sight distance SSD i
Wherein SSD xi SSD for longitudinal parking line of sight yi For transverse parking sight distance, V xi And V yi Respectively vehicle C i Absolute longitudinal speed, absolute transverse speed, f x And f y The longitudinal friction coefficient and the transverse friction coefficient are respectively, g is the gravity acceleration, t r Reaction time for the driver;
2) According to multiple vehicles C 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Relative position relation between two vehicles is calculated to calculate collision distance SDI of two vehicles i-j
Wherein SDI xi-j SDI for longitudinal impact distance yi-j For transverse collision distanceSeparating;
3) Based on the collision distance SDI i-j Constructing multiple vehicles C 0 ,C 1 ,...,C 6 Medium adjacent vehicle C i And C j Index SD of spatial relationship between i-j
Wherein SD is xi-j SD as a longitudinal spatial relationship index yi-j Is a transverse spatial relationship index;
4) By spatial relationship index SD xi-j 、SD yi-j For the weight, respectively constructing a longitudinal space relation adjacency matrix A Sx And a lateral spatial relationship adjacency matrix A Sy
Wherein said A Sx And A Sy Are symmetrical matrices, and the main diagonal element is 0, and if the vehicle C i And C j The positions of the adjacent matrixes are not adjacent in the multi-scene, and the elements in the corresponding adjacent matrixes are 0;
5) Pair A Sx And A Sy Performing standardization treatment
A′ Sx =D(A Sx ) -1 A Sx
A′ Sy =D(A Sy ) -1 A Sy
Wherein D (·) is a normalization coefficient function, and:
wherein A is i,j The elements corresponding to the ith row and the jth column in the matrix A;
6) Obtaining a multi-vehicle scene { O: C 0 ,C 1 ,...,C 6 Spatial relationship undirected graph G Sx And G Sy
G Sx =(V Sx ,E Sx )
G Sy =(V Sy ,E Sy )
Wherein the weights of the two space relation undirected graphs are respectively A' Sx And A' Sy Node V Sx =V Sy ={C 0 ,C 1 ,...,C 6 Edge E Sx =E Sy ={C 0 C 1 ,C 0 C 2 ,C 0 C 3 ,C 0 C 4 ,C 0 C 5 ,C 0 C 6 ,C 1 C 3 ,C 1 C 5 ,C 2 C 4 ,C 2 C 6 ,C 3 C 4 ,C 5 C 6 }。
3. The complex scene driving risk prediction method based on the multi-space-time diagram according to claim 2, wherein the process of fusing the multi-space-time diagram is as follows: will G T x、G T y、G S X and G S y weight vector (W) Tx ,W Ty ,W Sx ,W Sy ) And (5) carrying out weighted fusion:
A f =W Tx A Tx +W Ty A Ty +W Sx A′ Sx +W Sy A′ Sy
wherein W is Tx ,W Ty ,W Sx ,W Sy E (0, 1) is a self-learning space-time diagram weight coefficient and satisfies the relation W Tx +W Ty +W Sx +W Sy =1; finally obtaining the multi-vehicle scene { O: C ] 0 ,C 1 ,...,C 6 Mixed spatiotemporal relationship graph G f
G f =(V f ,E f )
Wherein: the weight of the mixed space-time relation diagram is A f Node V f ={C 0 ,C 1 ,...,C 6 Edge E f ={C 0 C 1 ,C 0 C 2 ,C 0 C 3 ,C 0 C 4 ,C 0 C 5 ,C 0 C 6 ,C 1 C 3 ,C 1 C 5 ,C 2 C 4 ,C 2 C 6 ,C 3 C 4 ,C 5 C 6 Node feature vector of F i =(d xi-0 ,d yi-0 ,V xi ,V yi ,a xi ,a yi ) I=0, 1, 6; combining the node features to obtain a graph G f Is a feature matrix of the initial graph of (a):
4. the complex scene driving risk prediction method based on multiple space-time diagrams as claimed in claim 3, wherein the characteristic propagation rule of each layer of network diagram of the graph convolution neural network is as follows:
H l+1 =σ[D(A f ) -1 A f H l W l +H l B l ]
wherein: sigma is a Sigmoid activation function; h l The layer 0 graph features are initial graph features; w (W) l A first layer convolution weight matrix which can be self-learned; b (B) l Is a self-learning weight matrix;
the k layers of features and the initial graph features are connected in series to obtain a multi-vehicle scene { O: C } 0 ,C 1 ,...,C 6 Multi-space diagram joint feature vector: h= (H) 0 ,H 1 ,...,H k )。
5. The complex scene driving risk prediction method based on multi-space diagrams as claimed in claim 3, wherein the acquiring process of the multi-space sequence sample input of different risk states is as follows:
(1) Based on historical driving track data of the vehicle, acquiring transverse relative distance, longitudinal relative distance, transverse speed, longitudinal speed, transverse acceleration and longitudinal acceleration of each vehicle at each sampling point in a time window of T multiplied by 0.1s until an observation time point T; t represents the number of sampling points obtained by sampling the historical driving track data of the vehicle in a time window of T multiplied by 0.1s until an observation time point T;
(2) Calculating multi-space-time diagram combined feature vectors of multi-vehicle scenes at all sampling points, and recording H t For the multi-space-time diagram joint feature vector of a multi-vehicle scene at a sampling point t, a time sequence X is obtained t ={H t-T+1 ,H t-T+2 ,...,H t The input is as a multi-car space sequence sample.
6. The complex scene driving risk prediction method based on the multi-space diagram according to claim 1, wherein the determining method of the multi-space sequence sample output states of the different risk states is characterized in that:
(1) Acquiring a peripheral vehicle C within a time window of T' ×0.1s in the future from an observation time point T based on vehicle history travel locus data i With the bicycle C 0 Longitudinal collision distance index set { SDI } xi-0 (t+1),SDI xi-0 (t+2),...,SDI xi-0 (t+t') }, i=1, 2, …,6; t 'represents the number of sampling points obtained by sampling the historical driving track data of the vehicle in a time window of T' x 0.1s in the future from the observation time point T;
(2) Computing vehicle C 0 With vehicle C i Collision probability index in future T'. Times.0.1 s time window
Wherein R is 0 (i) Is SDI in time window xi-0 The number of observation points is less than or equal to 0;
(3) Computing vehicle C 0 With vehicle C i Collision severity index over future T'. Times.0.1 s time window
Wherein SDI is max (i) Is SDI in time window xi-0 Observed value with maximum absolute value under condition of less than or equal to 0, SDI cri Is SDI xi-0 Maximum possible value of absolute value under the condition of less than or equal to 0;
(4) By the own vehicle and the surrounding vehicles C i Is the basic event of collision probability and severity of collision of the own vehicle with the surrounding individual vehicles C i The collision risk between the two events is an intermediate event, the total collision risk of the own vehicle and the surrounding multiple scenes is taken as a top event, and a system risk index of the total collision risk top event in a time window of T' x 0.1s in the future is calculated:
wherein eta i =P c (i)·S c (i) For future T' ×0.1s time window, self-vehicle and surrounding vehicle C i The product of the collision probability and the collision severity of (2), a constant k i =0 or 1, taking different values according to the lane keeping or changing behavior of the own vehicle in the time window;
(5) Comparing the system risk index value with a risk threshold value of a corresponding type to determine the output state of a multi-workshop collision risk sample; the magnitude of the risk state threshold can be determined and adjusted according to the sorting percentage of the system risk index values.
7. The complex scene driving risk prediction method based on the multi-time space diagram according to claim 1, wherein the driving risk prediction model is a driving risk GCN-LSTM prediction model based on the multi-time space diagram, specifically, the number of layers of a graph convolution neural network and the number of hidden layers of LSTM, the number of hidden layer nodes, a random inactivation Dropout rate, an L2 regularization coefficient and a learning rate attenuation coefficient are taken as super parameters of the prediction model, a convolution weight matrix in the graph convolution neural network and an input weight matrix and bias items in an LSTM memory unit are taken as training parameters of the prediction model, and the driving risk GCN-LSTM prediction model based on the multi-time space diagram is finally trained based on multi-time space sequence samples in different risk states.
CN202110979340.3A 2021-08-25 2021-08-25 Complex scene driving risk prediction method based on multi-time space diagram Active CN113762473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110979340.3A CN113762473B (en) 2021-08-25 2021-08-25 Complex scene driving risk prediction method based on multi-time space diagram

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110979340.3A CN113762473B (en) 2021-08-25 2021-08-25 Complex scene driving risk prediction method based on multi-time space diagram

Publications (2)

Publication Number Publication Date
CN113762473A CN113762473A (en) 2021-12-07
CN113762473B true CN113762473B (en) 2024-04-12

Family

ID=78791124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110979340.3A Active CN113762473B (en) 2021-08-25 2021-08-25 Complex scene driving risk prediction method based on multi-time space diagram

Country Status (1)

Country Link
CN (1) CN113762473B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333416A (en) * 2021-12-24 2022-04-12 阿波罗智能技术(北京)有限公司 Vehicle risk early warning method and device based on neural network and automatic driving vehicle
CN114613127B (en) * 2022-02-10 2023-04-07 江苏大学 Driving risk prediction method based on multi-layer multi-dimensional index system
CN114707856B (en) * 2022-04-02 2023-06-30 河南鑫安利安全科技股份有限公司 Risk identification analysis and early warning system based on computer vision
CN116246492B (en) * 2023-03-16 2024-01-16 东南大学 Vehicle lane change collision risk prediction method based on space-time attention LSTM and super-threshold model
CN116978236B (en) * 2023-09-25 2023-12-15 南京隼眼电子科技有限公司 Traffic accident early warning method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110352153A (en) * 2018-02-02 2019-10-18 辉达公司 It is analyzed in autonomous vehicle for the security procedure of Obstacle avoidance
CN112686281A (en) * 2020-12-08 2021-04-20 深圳先进技术研究院 Vehicle track prediction method based on space-time attention and multi-stage LSTM information expression
CN113642522A (en) * 2021-09-01 2021-11-12 中国科学院自动化研究所 Audio and video based fatigue state detection method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110352153A (en) * 2018-02-02 2019-10-18 辉达公司 It is analyzed in autonomous vehicle for the security procedure of Obstacle avoidance
CN112686281A (en) * 2020-12-08 2021-04-20 深圳先进技术研究院 Vehicle track prediction method based on space-time attention and multi-stage LSTM information expression
CN113642522A (en) * 2021-09-01 2021-11-12 中国科学院自动化研究所 Audio and video based fatigue state detection method and device

Also Published As

Publication number Publication date
CN113762473A (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN113762473B (en) Complex scene driving risk prediction method based on multi-time space diagram
Altché et al. An LSTM network for highway trajectory prediction
Lee et al. Convolution neural network-based lane change intention prediction of surrounding vehicles for ACC
Xing et al. Energy oriented driving behavior analysis and personalized prediction of vehicle states with joint time series modeling
DE102020133744A1 (en) FOREGROUND EXTRACTION USING AREA ADJUSTMENT
CN112389440B (en) Vehicle driving risk prediction method in off-road environment based on vehicle-road action mechanism
CN105006147B (en) A kind of Link Travel Time estimating method based on road spatial and temporal association
CN115038628A (en) Object speed and/or yaw rate detection and tracking
GB2600196A (en) Vehicle operation using a dynamic occupancy grid
Wirthmüller et al. Predicting the time until a vehicle changes the lane using LSTM-based recurrent neural networks
CN114312830A (en) Intelligent vehicle coupling decision model and method considering dangerous driving conditions
US20230005173A1 (en) Cross-modality active learning for object detection
CN113552883B (en) Ground unmanned vehicle autonomous driving method and system based on deep reinforcement learning
CN110182217A (en) A kind of traveling task complexity quantitative estimation method towards complicated scene of overtaking other vehicles
CN115009304A (en) End-to-end-based automatic driving vehicle implementation method
CN110097571B (en) Quick high-precision vehicle collision prediction method
CN115143950A (en) Intelligent automobile local semantic grid map generation method
CN114901534A (en) Object detection and tracking
Gao et al. Discretionary cut-in driving behavior risk assessment based on naturalistic driving data
Zhao et al. Improving autonomous vehicle visual perception by fusing human gaze and machine vision
Dubey et al. Autonomous braking and throttle system: A deep reinforcement learning approach for naturalistic driving
McNew Predicting cruising speed through data-driven driver modeling
Lv et al. Traffic safety detection system by digital twins and virtual reality technology
CN113276860B (en) Vehicle control method, device, electronic device, and storage medium
CN110610611B (en) Driving safety evaluation method for intelligent network-connected vehicle in mixed-driving traffic flow

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant