CN113762473A - Complex scene driving risk prediction method based on multi-space-time diagram - Google Patents

Complex scene driving risk prediction method based on multi-space-time diagram Download PDF

Info

Publication number
CN113762473A
CN113762473A CN202110979340.3A CN202110979340A CN113762473A CN 113762473 A CN113762473 A CN 113762473A CN 202110979340 A CN202110979340 A CN 202110979340A CN 113762473 A CN113762473 A CN 113762473A
Authority
CN
China
Prior art keywords
vehicle
time
space
collision
graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110979340.3A
Other languages
Chinese (zh)
Other versions
CN113762473B (en
Inventor
熊晓夏
蔡英凤
高翔
王海
刘擎超
沈钰杰
陈龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202110979340.3A priority Critical patent/CN113762473B/en
Publication of CN113762473A publication Critical patent/CN113762473A/en
Application granted granted Critical
Publication of CN113762473B publication Critical patent/CN113762473B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a multi-spatiotemporal graph-based complex scene driving risk prediction method, which comprises the steps of constructing a multi-spatiotemporal graph for describing different spatiotemporal relations between vehicles in a peripheral multi-vehicle complex scene, inputting the fused multi-spatiotemporal graph into a graph convolution neural network, and extracting multi-spatiotemporal graph feature vectors of the scene; taking a multi-spatio-temporal map feature vector sequence extracted at each moment in an observation period as a multi-step input feature of the long-short term memory neural network, and training multi-vehicle spatio-temporal sequence samples in different risk states to obtain a driving risk prediction model; and obtaining motion information of the self vehicle and the surrounding vehicles in real time, extracting a multi-spatiotemporal map feature vector sequence among all vehicles in an observation period in real time, inputting the multi-spatiotemporal map feature vector sequence into the driving risk prediction model, and finally obtaining a multi-workshop collision risk state prediction result at a future moment. The method is suitable for predicting the multi-workshop collision risk under the peripheral multi-vehicle complex scene, and improves the accuracy and the practicability of the prediction model.

Description

Complex scene driving risk prediction method based on multi-space-time diagram
Technical Field
The invention relates to the technical field of traffic safety evaluation and active safety of intelligent traffic systems, in particular to a complex scene driving risk prediction method based on a multi-space-time diagram.
Background
The collision risk estimation algorithm is one of the cores of the intelligent automobile active safety technology, and the performance of the risk estimation algorithm directly determines the timeliness and reliability of system early warning or active intervention, so that the collision risk estimation algorithm is also one of the research key contents of current automobile manufacturers and scientific researchers. The traditional collision risk estimation algorithm mainly quantifies the risk levels of different driving conditions through indexes representing initial collision states, namely, risk estimation indexes are calculated based on two workshop motion parameters at the scene initial moment, and the risk degree of the scene is divided by comparing the calculated value of the risk estimation indexes with preset values representing different risk levels. The indexes are mainly classified into three categories, namely distance-based parameters, time-based parameters and deceleration-based parameters, which are commonly used, such as critical early warning distance and critical braking distance, collision time and reciprocal thereof, workshop time, post-intrusion time, deceleration required by collision avoidance and the like. However, these indexes are usually only suitable for predicting collision risk between two vehicles in a specific longitudinal or transverse scene, and generally do not consider the problem of collision risk between multiple vehicles in a complex surrounding scene, which needs to be faced during actual driving, and thus are not beneficial to the application of this kind of method in a complex real driving scene. Meanwhile, as the occurrence of the collision accident is a small-probability event, the multi-workshop collision risk samples are often difficult to obtain and divide states, so that the construction of a multi-workshop collision risk prediction model is more difficult. Therefore, it is necessary to research a driving risk prediction method that can fully consider peripheral multi-vehicle interaction complex scene characteristics.
Disclosure of Invention
In view of the above, the invention provides a complex scene driving risk prediction method based on a multi-space-time diagram.
The present invention achieves the above-described object by the following technical means.
A complex scene driving risk prediction method based on a multi-space-time diagram comprises the following steps:
s1, taking a self vehicle and peripheral multiple vehicles at a certain moment as nodes in the graph, taking the position, speed and acceleration of the vehicle as node characteristics, constructing a node adjacency matrix reflecting different space-time relations among the vehicles, obtaining a multi-space-time graph describing a peripheral multiple vehicle complex scene by using the nodes, the node characteristics and the node adjacency matrix in the graph, inputting the fused multi-space-time graph into a graph convolution neural network, and extracting a multi-space-time graph feature vector of the scene; taking a multi-spatio-temporal map feature vector sequence extracted at each moment in an observation period as a multi-step input feature of the long-short term memory neural network, and training multi-vehicle spatio-temporal sequence samples in different risk states to obtain a driving risk prediction model;
and S2, acquiring motion information of the vehicle and the surrounding vehicles in real time, extracting a multi-spatiotemporal map feature vector sequence among all vehicles in an observation period in real time, inputting the multi-spatiotemporal map feature vector sequence into the driving risk prediction model, and finally obtaining a multi-workshop collision risk state prediction result at a future moment.
In the above technical solution, the node adjacency matrix includes a time relation matrix based on the time to collision TTC and a space relation matrix based on the parking visibility SSD.
In the above technical solution, the time relationship matrix based on the time to collision TTC specifically includes:
1) calculate multiple cars C0,C1,...,C6Middle adjacent vehicle CiAnd CjTime to collision TTC betweeni-j
Figure BDA0003228521200000021
Figure BDA0003228521200000022
Wherein: TTCxi-jFor time to longitudinal collision, TTCyi-jAs transverse collision time, dxi-jIs a vehicle C with the longitudinal axis x downwardiRelative center of massVehicle CjRelative longitudinal distance of center of mass, dyi-jVehicle C in the y direction of the horizontal axisiCenter of mass relative to vehicle CjRelative transverse distance of center of mass, VxiAnd VyiFor vehicle CiAbsolute longitudinal velocity, absolute lateral velocity, LiAnd LjAre respectively Ci、CjLength of vehicle, WiAnd WjAre respectively Ci、CjWidth of vehicle of epsilonxAnd εyIs a random deviation term;
2) when TTCi-jIf < 0, let the corresponding time to collision index TTC'i-jGetting infinity:
Figure BDA0003228521200000023
Figure BDA0003228521200000024
3) based on TTC'i-jConstruction of multiple cars C0,C1,...,C6Middle adjacent vehicle CiAnd CjTime relation index TD betweeni-j
Figure BDA0003228521200000025
Figure BDA0003228521200000026
Wherein, TDxi-jAs an indication of longitudinal time relationship, TDyi-jAs an indicator of the transverse time relationship, σxAnd σyNormalized constants for the machine direction and the transverse direction;
4) by time-related index TDxi-j、TDyi-jFor weighting, respectively constructing vertical time relation adjacency matrixes ATxAnd a transverse time relationship adjacency matrix ATy
Wherein saidATx、ATyAre all symmetric matrices with a main diagonal element of 0, and if vehicle CiAnd CjIf the positions in the multi-vehicle scene are not adjacent, the element in the corresponding adjacent matrix is 0;
5) obtain a multiple-vehicle scene { O: C0,C1,...,C6Time relation undirected graph G ofTxAnd GTy
GTx=(VTx,ETx)GTy=(VTy,ETy)
Wherein the weights of the two time-relation undirected graphs are respectively ATxAnd ATyNode VTx=VTy={C0,C1,...,C6}, connecting edge ETx=ETy={C0C1,C0C2,C0C3,C0C4,C0C5,C0C6,C1C3,C1C5,C2C4,C2C6,C3C4,C5C6}。
In the above technical solution, the spatial relationship matrix based on the parking visibility SSD specifically includes:
1) calculate multiple cars C0,C1,...,C6Each vehicle CiParking visibility SSDi
Figure BDA0003228521200000031
Figure BDA0003228521200000032
Wherein the SSDxiFor longitudinal parking line of sight, SSDyiFor transverse parking vision distance, VxiAnd VyiAre respectively a vehicle CiAbsolute longitudinal velocity, absolute lateral velocity, fxAnd fyLongitudinal friction coefficient and transverse friction coefficient respectively, g is gravity accelerationDegree, trReaction time for the driver;
2) according to multiple vehicles C0,C1,...,C6Middle adjacent vehicle CiAnd CjRelative position relation between the two vehicles, and calculate the collision distance SDI between the two vehiclesi-j
Figure BDA0003228521200000033
Figure BDA0003228521200000034
Wherein, SDIxi-jFor longitudinal collision distance, SDIyi-jIs the lateral collision distance;
3) based on collision distance SDIi-jConstruction of multiple cars C0,C1,…,C6Middle adjacent vehicle CiAnd CjSpatial relationship index SD betweeni-j
Figure BDA0003228521200000035
Figure BDA0003228521200000041
Wherein, SDxi-jAs an index of longitudinal spatial relationship, SDyi-jIs a transverse spatial relationship index;
4) by a spatial relationship index SDxi-j、SDyi-jFor weighting, respectively constructing a vertical spatial relationship adjacency matrix ASxAnd a transverse spatial relationship adjacency matrix ASy
Wherein A isSxAnd ASyAre all symmetric matrices with a main diagonal element of 0, and if vehicle CiAnd CjIf the positions in the multi-vehicle scene are not adjacent, the element in the corresponding adjacent matrix is 0;
5) to ASxAnd ASyGo on to standardizationTheory of things
A′Sx=D(ASx)-1ASx
A′Sy=D(ASy)-1ASy
Wherein D (-) is a normalization coefficient function, and:
Figure BDA0003228521200000042
wherein A isi,jElements corresponding to the ith row and the jth column in the matrix A are shown;
6) obtain a multiple-vehicle scene { O: C0,C1,...,C6} spatial relationship undirected graph GSxAnd GSy
GSx=(VSx,ESx)
GSy=(VSy,ESy)
Wherein the weights of the two spatial relationship undirected graphs are respectively A'SxAnd A'SyNode VSx=VSy={C0,C1,...,C6}, connecting edge ESx=ESy={C0C1,C0C2,C0C3,C0C4,C0C5,C0C6,C1C3,C1C5,C2C4,C2C6,C3C4,C5C6}。
In the above technical solution, the process of fusing multiple space-time diagrams is as follows: g is to beTx、GTy、GSxAnd GSyAccording to the weight vector (W)Tx,WTy,WSx,WSy) And (3) performing weighted fusion:
Af=WTxATx+WTyATy+WSxA′Sx+WSyA′Sy
wherein, WTx,WTy,WSx,WSyEpsilon (0,1) is self-learningThe weight coefficient of the space-time diagram satisfies the relation WTx+WTy+WSx+WSy1 is ═ 1; finally, a multi-vehicle scene { O: C:0,C1,...,C6mixed space-time relationship diagram G off
Gf=(Vf,Ef)
Wherein: the weight of the mixed space-time relationship graph is AfNode Vf={C0,C1,...,C6And connecting edges Ef ═ C0C1, C0C2, C0C3, C0C4, C0C5, C0C6, C1C3, C1C5, C2C4, C2C6, C3C4, and C5C6}, wherein a node feature vector is Fi=(dxi-0,dyi-0,Vxi,Vyi,axi,ayi) I is 0,1,. 6; all the node characteristics are combined to obtain a mixed space-time relationship graph GfInitial graph feature matrix of (1):
Figure BDA0003228521200000051
in the above technical solution, the feature propagation rule of each layer network graph of the graph convolution neural network is as follows:
Hl+1=σ[D(Af)-1AfHlWl+HlBl]
wherein: sigma is a Sigmoid activation function; hlThe first layer diagram characteristic and the 0 th layer diagram characteristic are initial diagram characteristics; wlIs a self-learning first layer convolution weight matrix; b islIs a self-learnable weight matrix;
connecting the k-layer characteristics and the initial graph characteristics in series to obtain a multi-vehicle scene { O: C0,C1,...,C6The joint feature vector of the multi-space-time diagram: h ═ H (H0, H)1,...,Hk)。
In the above technical solution, the process of obtaining the multi-vehicle time-space sequence sample inputs in different risk states includes:
(1) acquiring the transverse relative distance, the longitudinal relative distance, the transverse speed, the longitudinal speed, the transverse acceleration and the longitudinal acceleration of each vehicle at each sampling point in the past T multiplied by 0.1s time window from the observation time point T based on the historical driving track data of the vehicles; t represents the number of sampling points obtained by sampling the historical driving track data of the vehicle in the past T multiplied by 0.1s time window from the observation time point T;
(2) calculating the multi-space-time map combined characteristic vector of the multi-vehicle scene at each sampling point, and recording HtCombining the time sequence X for the multi-space-time diagram of the multi-vehicle scene at the sampling point t with the characteristic vectort={Ht-T+1,Ht-T+2,...,HtAnd inputting as multi-vehicle space-time sequence samples.
In the above technical solution, the method for determining the output states of the multi-vehicle time-space sequence samples in different risk states comprises:
(1) based on the historical driving track data of the vehicle, the peripheral vehicle C in the time window of T' multiplied by 0.1s from the beginning of the observation time point T is obtainediAnd bicycle C0Set of longitudinal collision distance indices { SDIxi-0(t+1),SDIxi-0(t+2),...,,SDIxi-0(T + T') }, i ═ 1,2, …, 6; t 'represents the number of sampling points obtained by sampling the historical driving track data of the vehicle in a time window of T' × 0.1s in the future from the observation time point T;
(2) calculating bicycle C0And vehicle CiCollision probability indicator in future T' x 0.1s time window
Figure BDA0003228521200000061
Wherein R is0(i) For SDI within a time windowxi-0The number of observation points is less than or equal to 0;
(3) calculating bicycle C0And vehicle CiCrash severity indicator in future T' × 0.1s time window
Figure BDA0003228521200000062
Wherein SDImax(i) For SDI within a time windowxi-0The observed value with the maximum absolute value under the condition of not more than 0,SDIcriIs SDIxi-0The maximum possible value of the absolute value under the condition of less than or equal to 0;
(4) with the own vehicle and each vehicle C aroundiThe collision probability and the collision severity of the vehicle are basic events for the own vehicle and the surrounding vehicles CiTaking the total collision risk of the own vehicle and the surrounding multi-vehicle scene as an overhead event, and calculating the system risk index of the total collision risk overhead event in a future T' multiplied by 0.1s time window:
Figure BDA0003228521200000063
wherein etai=Pc(i)·Sc(i) For the vehicles C in the future T' x 0.1s time windowiThe product of the collision probability and the collision severity, constant kiTaking different values according to the lane keeping or changing behavior of the vehicle in the time window;
(5) comparing the system risk index value with the risk threshold value of the corresponding type, and determining the output state of the multi-workshop collision risk sample; wherein the size of the risk state threshold may be determined and adjusted according to the system risk indicator value ranking percentage.
In the technical scheme, the driving risk prediction model is a driving risk GCN-LSTM prediction model based on a multi-space-time diagram, specifically, the number of layers of a convolutional neural network, the number of hidden layers of the LSTM, the number of nodes of the hidden layers, a random inactivation Dropout rate, an L2 regularization coefficient and a learning rate attenuation coefficient are used as prediction model hyper-parameters, a convolution weight matrix in the convolutional neural network and an input weight matrix and an offset term in an LSTM memory unit are used as prediction model training parameters, and a multi-vehicle space-time sequence sample based on different risk states is finally trained to obtain the driving risk GCN-LSTM prediction model based on the multi-space-time diagram.
The invention has the beneficial effects that:
(1) the invention simulates the dynamic relation between vehicles by the connecting edges between adjacent nodes in the graph structure, and expresses the dynamic relation by the weighted adjacent matrix reflecting different time-space relations between the vehicles, thereby being suitable for the multi-workshop collision risk prediction problem under the peripheral multi-vehicle complex scene and improving the accuracy and the practicability of the prediction model.
(2) According to the invention, the multi-vehicle time and space relation indexes based on the exponential form are respectively constructed through the TTC and the SSD, so that the collision risk can be continuously expressed in a multi-vehicle interactive complex scene, and a method for measuring the real-time risk of driving is enriched.
(3) According to the method, the collision probability and the collision severity are comprehensively considered, the multi-compartment collision risk sample state division method based on the historical vehicle running track sample is constructed, and the problem that the multi-compartment collision risk prediction model sample is difficult to construct is solved.
Drawings
FIG. 1 is a flow chart of the complex scene driving risk prediction based on a multi-space-time diagram according to the present invention;
FIG. 2 is a conceptual illustration of a multi-vehicle scenario and diagram configuration according to the present invention;
FIG. 3 is a schematic diagram of a risk fault tree of the multi-vehicle scenario system according to the present invention.
Detailed Description
The invention will be further described with reference to the following figures and specific examples, but the scope of the invention is not limited thereto.
As shown in fig. 1, a complex scene driving risk prediction method based on a multi-space-time diagram specifically includes the following steps:
step one, training an offline risk prediction model
Taking a self vehicle and peripheral multiple vehicles at a certain moment as nodes in a graph, taking the position, speed and acceleration of the vehicles as node characteristics, constructing a node adjacency matrix reflecting different space-time relations among the vehicles, obtaining a multi-space-time graph describing a peripheral multiple vehicle complex scene by using the nodes, the node characteristics and the node adjacency matrix in the graph, inputting the fused multi-space-time graph into a graph convolution neural network, and extracting a multi-space-time graph characteristic vector of the scene; and (3) taking a multi-spatio-temporal map feature vector sequence extracted at each moment in an observation period as a multi-step input feature of the long-term and short-term memory neural network, and training multi-vehicle spatio-temporal sequence samples in different risk states to obtain a driving risk prediction model. Specifically, the method comprises the following steps:
and (1) taking the own vehicle and the peripheral multiple vehicles at a certain moment as nodes in the graph, and taking the position, the speed and the acceleration of the vehicles as node characteristics to construct a node adjacency matrix reflecting different space-time relations among the vehicles so as to obtain the multi-space-time graph describing the peripheral multiple-vehicle complex scene. The step (1) is specifically realized by the following substeps:
step 1-1): research vehicle (bicycle C)0) Running on the middle lane of a three-lane highway, self-vehicle C0Front vehicle C adjacent to lane where self vehicle is located1And a rear vehicle C adjacent to the lane where the self vehicle is positioned2The nearest front vehicle C on the adjacent left lane3The nearest rear vehicle C on the adjacent left lane4The nearest front vehicle C on the adjacent right lane5The nearest rear vehicle C on the adjacent right lane6Jointly constitute a multi-vehicle scene (O: C)0,C1,...,C6See fig. 2. If C is not detected in the detection range of the radar sensor of the self vehicle in order to ensure the subsequent calculation standardization1,...,C6One of the vehicles CiThen, set up and the bicycle C at the corresponding position0Virtual vehicles C with the same motion state (speed and acceleration)i'. For example, if the vehicle is located within the lane and spaced from the vehicle rear ds(farthest distance detectable by the radar sensor of the host vehicle) no adjacent rear vehicle C is detected2Then, the distance d from the tail of the bicycle in the lane where the bicycle is locatedsPlace and place with the bicycle C0Virtual vehicle C with identical motion state2', so that the total number of vehicles studying the multi-vehicle scenario remains consistent.
Step 1-2): in the multi-vehicle scenario of step 1-1), the vehicles in motion are in an interacting state, which can be described by a graph structure: step 1-1) of each vehicle C in the multi-vehicle scene0,C1,...,C6The nodes in the graph are represented by the longitudinal and transverse positions, the longitudinal and transverse speeds and the longitudinal and transverse accelerations F of the vehiclei={dxi-0,dyi-0,Vxi,Vyi,axi,ayiIs node CiFeature wherein { dxi-0,dyi-0Is a vehicle CiCenter of mass relative bicycle C0Relative longitudinal and transverse distances of center of mass (for bicycle C)0Then d isx0-0=0,dy0-0=0),{Vxi,Vyi,axi,ayiIs a vehicle CiAbsolute longitudinal speed, absolute transverse speed, absolute longitudinal acceleration and absolute transverse acceleration relative to the ground, wherein the positive direction of a longitudinal axis x is taken along the driving direction of the vehicle, and the positive direction of a transverse axis y is taken anticlockwise by 90 degrees along the positive direction of the longitudinal axis. Simulating a dynamic relationship between vehicles by using connecting edges between adjacent nodes, wherein the relationship is difficult to completely express through a space or time distance index, and constructing a weighted adjacency matrix reflecting different space-time relationships between vehicles through the following sub-steps:
step 1-2-1): a time relation matrix based on the time to collision TTC (in seconds) is constructed. The step 1-2-1) is specifically realized by the following substeps:
step 1-2-1-1): calculate multiple cars C0,C1,...,C6Middle adjacent vehicle CiAnd CjTime to collision TTC betweeni-j(including time to longitudinal Collision TTCxi-jAnd time to transverse collision TTCyi-j):
Figure BDA0003228521200000081
Figure BDA0003228521200000082
Wherein d isxi-jAnd dyi-jThe vehicle C is arranged in the directions of the longitudinal axis x and the transverse axis yiCenter of mass relative to vehicle CjRelative longitudinal and transverse distances of centers of mass, VxiAnd VyiFor vehicle CiAbsolute longitudinal and transverse velocities of (1), LiAnd LjAre respectively CiAnd CjLength of vehicle, WiAnd WjAre respectively CiAnd CjWidth of vehicle of epsilonxAnd εyFor the random bias term, a small positive value (e.g., 1e-6) may be taken to ensure that the denominator is not 0 when the split speeds are equal.
Step 1-2-1-2): TTC according to the meaning of time of collisioni-jAnd (6) correcting. When TTCi-jWhen the time is less than 0, the two vehicles do not have collision risks in the time dimension, and the corresponding collision time index TTC 'can be ensured'i-jGetting infinity:
Figure BDA0003228521200000083
Figure BDA0003228521200000084
step 1-2-1-3): based on corrected TTC'i-jConstructing multi-vehicle C in the form of exponential function0,C1,...,C6Middle adjacent vehicle CiAnd CjTime relation index TD betweeni-j(including the longitudinal time relationship index TDxi-jAnd transverse time relation index TDyi-j) So that its value is at (0,1)]And TTC'i-jSmaller means smaller time distance between two vehicles, higher time relation index between two vehicles (higher mutual influence degree of two vehicles in time dimension):
Figure BDA0003228521200000091
Figure BDA0003228521200000092
wherein sigmaxAnd σyThe sum of the driver average reaction time and the vehicle brake average effective time can be taken as the longitudinal and lateral normalization constants.
Step 1-2-1-4): by time-related index TDxi-jFor weighting, constructing longitudinal timeRelational adjacency matrix ATx
Figure BDA0003228521200000093
Due to TDxi-j=TDxj-i,ATxIs a symmetric matrix; a. theTxThe main diagonal element is 0(i ═ j), and the 0 elements in the remaining positions represent the vehicle CiAnd CjPositions in the multi-vehicle scene in the step 1-1) are not adjacent, and the two vehicles do not have a direct influence relation; and due to TDxi-j∈(0,1],ATxAll element values in (E) 0,1]. Obtaining the adjacent matrix A of the transverse time relation in the same wayTy
Figure BDA0003228521200000094
And ATxSimilarly, ATyIs also a symmetric matrix and all element values are E [0,1 ]]。
Step 1-2-1-5): obtain a multiple-vehicle scene { O: C0,C1,...,C6Time relation undirected graph G ofTxAnd GTy
GTx=(VTx,ETx)
GTy=(VTy,ETy)
Wherein the weights of the two graphs are respectively ATxAnd ATyNode VTx=VTy={C0,C1,...,C6}, connecting edge ETx=ETy={C0C1,C0C2,C0C3,C0C4,C0C5,C0C6,C1C3,C1C5,C2C4,C2C6,C3C4,C5C6(see fig. 2).
Step 1-2-2): a spatial relationship matrix based on the parking apparent distance SSD (in meters) is constructed. The step 1-2-2) is specifically realized by the following sub-steps:
step 1-2-2-1): calculate multiple cars C0,C1,...,C6Each vehicle CiParking visibility SSDi(including longitudinal parking line of sight SSD)xiAnd transverse parking line of sight SSDyi):
Figure BDA0003228521200000101
Figure BDA0003228521200000102
Wherein VxiAnd VyiFor step 1-2) said vehicle CiAbsolute longitudinal velocity and absolute lateral velocity (in km/h); f. ofxAnd fyThe longitudinal friction coefficient and the transverse friction coefficient are determined according to the vehicle speed and the road surface condition; g is the acceleration of gravity (9.8 m/s)2);trFor the driver reaction time, 2.5s (including 1.5 s judgment time and 1.0 s running time) can be taken.
Step 1-2-2-2): according to multiple vehicles C0,C1,...,C6Middle adjacent vehicle CiAnd CjRelative position relation between the two vehicles, and calculate the collision distance SDI between the two vehiclesi-j(including longitudinal crash distance SDIxi-jAnd lateral collision distance SDIyi-j):
Figure BDA0003228521200000103
Figure BDA0003228521200000104
Step 1-2-2-3): based on collision distance SDIi-jConstructing multi-vehicle C by using exponential function and reciprocal form0,C1,...,C6Middle adjacent vehicle CiAnd CjSpatial relationship index SD betweeni-j(including the longitudinal spatial relationship index SD)xi-jAnd the transverse spatial relationship index SDyi-j) To a value greater than zero, and SDIi-jSmaller means smaller spatial distance between two vehicles, higher spatial relationship index of two vehicles (higher mutual influence degree of two vehicles in spatial dimension):
Figure BDA0003228521200000111
Figure BDA0003228521200000112
step 1-2-2-4): by a spatial relationship index SDxi-jFor weighting, a vertical spatial relationship adjacency matrix A is constructedSx
Figure BDA0003228521200000113
Due to SDxi-j=SDxj-i,ASxIs a symmetric matrix; a. theSxThe main diagonal element is 0(i ═ j), and the 0 elements in the remaining positions represent the vehicle CiAnd CjPositions in the multi-vehicle scene in the step 1-1) are not adjacent, and the two vehicles do not have a direct influence relation. Obtaining the adjacent matrix A of the transverse space relation by the same methodSy
Figure BDA0003228521200000114
And ASxSimilarly, ASyAlso a symmetric matrix.
Step 1-2-2-5): in order to ensure that each space-time relation adjacency matrix is not influenced by dimension, ASxAnd ASyNormalization was performed to obtain values of all elements at [0,1 ]]With (and the temporal relation adjacency matrix A in step 1-2-1)TxAnd ATyRemain consistent):
A′Sx=D(ASx)-1ASx
A′Sy=D(ASy)-1ASy
where D (-) is a normalization coefficient function:
Figure BDA0003228521200000121
wherein A isi,jThe matrix a is the element corresponding to the ith row and the jth column.
Step 1-2-2-6): obtain a multiple-vehicle scene { O: C0,C1,...,C6} spatial relationship undirected graph GSxAnd GSy
GSx=(VSx,ESx)
GSy=(VSy,ESy)
Wherein the weights of the two figures are respectively A'SxAnd A'SyNode VSx=VSy={C0,C1,...,C6}, connecting edge ESx=ESy={C0C1,C0C2,C0C3,C0C4,C0C5,C0C6,C1C3,C1C5,C2C4,C2C6,C3C4,C5C6}。
And (2) performing weighted fusion on the multi-space-time maps of the peripheral multi-vehicle complex scene, inputting the multi-space-time maps into a graph convolution neural network (GCN), and extracting the multi-space-time map feature vectors of the multi-vehicle scene. The step (2) is specifically realized by the following sub-steps:
step 2-1): describing a multi-space-time map G of the surrounding multi-vehicle complex scene in the step (1)Tx、GTy、GSxAnd GSyAccording to the weight vector (W)Tx,WTy,WSx,WSy) And (3) performing weighted fusion:
Af=WTxATx+WTyATy+WSxA′Sx+WSyA′Sy
wherein WTx,WTy,WSx,WSyBelongs to (0,1) as a self-learning space-time diagram weight coefficient and satisfies a relation WTx+WTy+WSx+WSy1. Finally, a multi-vehicle scene { O: C:0,C1,...,C6mixed space-time relationship diagram G off
Gf=(Vf,Ef)
Wherein the weight of the graph is AfNode Vf={C0,C1,...,C6}, connecting edge Ef={C0C1,C0C2,C0C3,C0C4,C0C5,C0C6,C1C3,C1C5,C2C4,C2C6,C3C4,C5C6F, node feature vector is the F in step 1-2)i=(dxi-0,dyi-0,Vxi,Vyi,axi,ayi) I is 0, 1. Merging the characteristics of all nodes to obtain a graph GfInitial graph feature matrix of (1):
Figure BDA0003228521200000122
step 2-2): based on the mixed space-time relationship graph G of the step 2-1)fConstructing a graph convolution neural network GCN, wherein the network extracts multi-spatiotemporal graph features by using k-layer graph convolution layers in common, and the feature propagation rule of each layer of the network graph is as follows:
Hl+1=σ[D(Af)-1AfHlWl+HlBl]
wherein σ is a Sigmoid activation function; d (-) is a normalization coefficient function; hlThe first-level graph characteristic is the first-level graph characteristic, and the 0-level graph characteristic is the initial graph characteristic: h0=F;WlIs a self-learning l-th layer convolution weight matrix. In order to distinguish the difference of importance of the own vehicle node characteristics and the peripheral multi-vehicle node characteristics in the future risk prediction, a central node is specially used in each layer (namely the own vehicle C)0) Setting a self-learning weight matrix Bl. Connecting the finally obtained k-layer characteristics and the initial graph characteristics in series to obtain a multi-vehicle scene { O: C0,C1,...,C6The joint feature vector H of the multi-space-time diagram is (H)0,H1,...,Hk)
And (3) taking a multi-spatio-temporal map feature vector sequence extracted at each moment in an observation period as a multi-step input feature of the long-term and short-term memory neural network, and training multi-vehicle spatio-temporal sequence samples with different risk levels to obtain a driving risk prediction model. The step (3) is specifically realized by the following sub-steps:
step 3-1): based on vehicle historical driving track sample data acquired by unmanned aerial vehicle aerial photography identification, constructing a multi-vehicle scene with a self vehicle as a center according to the step 1-1), sampling the multi-vehicle scene with 10Hz (0.1s) as a frequency, and acquiring the multi-vehicle scene { O: C0,C1,...,C6Extracting the multi-space-time map combined characteristic vector H of the multi-vehicle scene at the sampling point t according to the method in the steps (1) and (2) and according to the transverse relative distance, the longitudinal relative distance, the transverse speed, the longitudinal speed, the transverse acceleration and the longitudinal acceleration of each vehicle at each sampling pointt
Step 3-2): aggregating the multi-space-time map combined feature vectors contained in the time window of the observation time point T into a time sequence X by taking T multiplied by 0.1s as the time window durationtAnd inputting the sample as a sample of the long-short term memory neural network LSTM at a time point t:
Xt={Ht-T+1,Ht-T+2,...,Ht}
wherein T represents the past T multiplied by 0.1s time window from the observation time point T, and the sample data of the vehicle historical driving track is sampled at the frequency of 10Hz, so that a total
Figure BDA0003228521200000131
Sampling ofAnd (4) point.
And outputting the multiple-vehicle collision risk state (namely the sample output of the LSTM at the time point T) in the future T' multiplied by 0.1s from the time point T in a sense full-connection mode after the LSTM hidden layer, wherein the state comprises four types of high risk, medium risk and low risk. To prevent model overfitting, a random deactivation Dropout was added between the LSTM hidden layer and the density fully connected output layer, and LSTM improved the overfitting case in conjunction with regularization using input connection weights L2 and a learning rate decay technique. The multi-workshop collision risk sample state dividing method is specifically realized by the following substeps:
step 3-2-1): based on the historical driving track data of the vehicle, the longitudinal collision distance SDI according to the step 1)xThe calculation method obtains the perimeter vehicle C in the time window of T' × 0.1s from the beginning of the observation time point TiAnd bicycle C0Set of longitudinal collision distance indices { SDIxi-0(t+1),SDIxi-0(t+2),...,,SDIxi-0(T + T') }, i 1, 2. Wherein T 'represents that sampling data of the historical driving track of the vehicle is carried out by taking 10Hz as the frequency in a time window of T' multiplied by 0.1s in the future from the observation time point T, and a total value can be obtained
Figure BDA0003228521200000141
And (4) sampling points.
Step 3-2-2): calculating bicycle C0And vehicle CiCollision probability index in future T' × 0.1s time window:
Figure BDA0003228521200000142
wherein R is0(i) For SDI within a time windowxi-0The number of observation points is less than or equal to 0.
Step 3-2-3): calculating bicycle C0And vehicle CiCrash severity indicator in the future T' × 0.1s time window:
Figure BDA0003228521200000143
wherein SDImax(i) For SDI within a time windowxi-0The observed value with the maximum absolute value under the condition of less than or equal to 0, SDIcriIs SDIxi-0The maximum possible value of the absolute value under the condition of less than or equal to 0. Bicycle C with handlebar0And adjacent vehicle CiThe speed of the vehicle is respectively the lowest value and the highest value of the speed limit of the highway, such as when the vehicle C0Take V at the frontx0=60km/h,Vxi120km/h, and then calculating the SDI according to the method in the step 1-2-2)cri
Step 3-2-4): referring to FIG. 3, based on the fault tree principle, the self vehicle and the surrounding vehicles CiThe collision probability and the collision severity of (i ═ 1.., 6) are basic events for the host vehicle and the surrounding individual vehicles Ci(i ═ 1...., 6) the collision risk between the two vehicles is an intermediate event, the total collision risk between the own vehicle and the surrounding multi-vehicle scene is an overhead event, and the system risk index of the total collision risk overhead event (i.e. system failure) in the future T' × 0.1s time window is calculated:
Figure BDA0003228521200000144
wherein etai=Pc(i)·Sc(i) For the vehicles C in the future T' x 0.1s time windowiThe product of the collision probability and the collision severity of (c); constant kiAnd (3) taking different values according to the lane keeping or changing behavior of the vehicle in the time window, wherein the values are 0 or 1: if the vehicle takes the action of keeping the lane, k1=k2=1,k3=k4=...=k60; if the self-vehicle takes the left lane-changing action, k1=k2=k3=k4=1,k5=k60; if the vehicle takes the right lane-changing action, k1=k2=k5=k6=1,k3=k40. The judgment standard for the behavior of keeping or changing lanes adopted by the self vehicle is as follows: setting the transverse distance between the center of mass of the bicycle and the center line of the road as x at the starting moment of the time window0At the end of the time window, x1The lane width is w, if | x1-x0If the | is less than or equal to w/3, keeping the lane by the self vehicle; if x1-x0If the ratio is less than-w/3, changing lanes from the left side of the vehicle; if x1-x0If the speed is more than w/3, the lane is changed from the right side of the vehicle.
Step 3-2-5): the calculated system risk index value and a risk threshold value of a corresponding type (keeping lane type threshold values are Th respectively)k1、Thk2、Thk3(ii) a The threshold values of lane change types are Thc1、Thc2、Thc3) A comparison is made to determine the sample output state. If the future T' × 0.1s time window keeps the lane from the vehicle: when in use
Figure BDA0003228521200000151
The multi-workshop collision risk sample state is high risk; when in use
Figure BDA0003228521200000152
The multi-workshop collision risk sample state is medium and high risk; when in use
Figure BDA0003228521200000153
The multi-workshop collision risk sample state is medium risk; when in use
Figure BDA0003228521200000154
The multi-car collision risk sample status is low risk. If the future T' × 0.1s time window changes lanes from vehicle: when in use
Figure BDA0003228521200000155
The multi-workshop collision risk sample state is high risk; when in use
Figure BDA0003228521200000156
The multi-workshop collision risk sample state is medium and high risk; when in use
Figure BDA0003228521200000157
The multi-workshop collision risk sample state is medium risk; when in use
Figure BDA0003228521200000158
The multi-car collision risk sample status is low risk.
The magnitude of the risk state threshold may be determined and adjusted according to the sample system risk indicator value ranking percentages. Dividing the vehicle historical driving track data acquired by the unmanned aerial vehicle aerial photographing identification according to the method in the steps 3-2-1) to 3-2-4), and calculating to obtain system risk index values of N samples
Figure BDA0003228521200000159
Two types of pairs for keeping lane and changing lane according to own vehicle
Figure BDA00032285212000001510
Respectively sorting, and respectively taking the system risk index values corresponding to the first 5%, 15% and 25% of the two types of sorting as high risk, medium risk and medium risk threshold { Thk1,Thk2,Thk3} (keep lane type) and { Thc1,Thc2,Thc3-lane change type.
Step 3-3): and dividing the historical vehicle running track data acquired by the aerial photographing identification of the unmanned aerial vehicle by using 0.2s as a time window moving step length according to the method in the step 3-2), and finally obtaining N' risk prediction input and output samples of the multi-vehicle scene.
Step 3-4): and (3) training to obtain a multi-space-time diagram-based driving risk GCN-LSTM prediction model by taking the number of layers of the GCN in the step (2) and the LSTM hidden layer number, the hidden layer node number, the random inactivation Dropout rate, the L2 regularization coefficient and the learning rate attenuation coefficient in the step (3-2) as model hyper-parameters, taking a convolution weight matrix in the GCN and an input weight matrix and an offset term in an LSTM memory unit as model training parameters and based on N' input and output samples obtained in the step (3-3).
Step two, predicting the online risk model in real time
The method comprises the steps of obtaining motion information of a vehicle and a plurality of surrounding vehicles in real time through interaction of a sensor and V2X in an internet of vehicles environment, extracting a multi-spatiotemporal map feature vector sequence among all vehicles in an observation period in real time through the multi-spatiotemporal map feature vector extraction method in the step I, inputting the driving risk prediction model in the step I, and finally obtaining a multi-workshop collision risk state prediction result at a future moment. Specifically, the second step comprises the following steps:
step (1), orienting to the multi-vehicle scene { O: C in step one0,C1,...,C6Acquiring motion information of vehicles and a plurality of surrounding vehicles in real time through interaction of a sensor and V2X in an internet of vehicles environment, wherein the motion information comprises longitudinal and transverse positions, longitudinal and transverse speeds and longitudinal and transverse accelerations F of each vehicle in the step onei={dxi-0,dyi-0,Vxi,Vyi,axi,ayiAnd length of vehicle LiAnd width Wi,i=0,1,...,6。
Step (2), extracting a multi-spatiotemporal image joint feature vector H corresponding to T sampling time points T-T +1, T-T +2, T (the sampling frequency is 0.1s) in a time window of the predicted time point T by the multi-spatiotemporal image feature vector extraction method of the step onet-T+1,Ht-T+2,...,HtInputting the multi-vehicle collision risk GCN-LSTM prediction model obtained by training in the step one, and outputting multi-vehicle collision risk states (four states including high risk, medium risk and low risk) within the future T' x 0.1s prediction duration in real time.
The present invention is not limited to the above-described embodiments, and any obvious improvements, substitutions or modifications can be made by those skilled in the art without departing from the spirit of the present invention.

Claims (9)

1. A complex scene driving risk prediction method based on a multi-space-time diagram is characterized by comprising the following steps:
s1, taking a self vehicle and peripheral multiple vehicles at a certain moment as nodes in the graph, taking the position, speed and acceleration of the vehicle as node characteristics, constructing a node adjacency matrix reflecting different space-time relations among the vehicles, obtaining a multi-space-time graph describing a peripheral multiple vehicle complex scene by using the nodes, the node characteristics and the node adjacency matrix in the graph, inputting the fused multi-space-time graph into a graph convolution neural network, and extracting a multi-space-time graph feature vector of the scene; taking a multi-spatio-temporal map feature vector sequence extracted at each moment in an observation period as a multi-step input feature of the long-short term memory neural network, and training multi-vehicle spatio-temporal sequence samples in different risk states to obtain a driving risk prediction model;
and S2, acquiring motion information of the vehicle and the surrounding vehicles in real time, extracting a multi-spatiotemporal map feature vector sequence among all vehicles in an observation period in real time, inputting the multi-spatiotemporal map feature vector sequence into the driving risk prediction model, and finally obtaining a multi-workshop collision risk state prediction result at a future moment.
2. The multi-spatio-temporal graph-based complex scene driving risk prediction method according to claim 1, wherein the node adjacency matrix comprises a time relation matrix based on Time To Collision (TTC) and a space relation matrix based on parking visibility range (SSD).
3. The complex scene driving risk prediction method based on the multi-space-time diagram according to claim 2, wherein the time relation matrix based on the Time To Collision (TTC) is specifically:
1) calculate multiple cars C0,C1,...,C6Middle adjacent vehicle CiAnd CjTime to collision TTC betweeni-j
Figure FDA0003228521190000011
Figure FDA0003228521190000012
Wherein: TTCxi-jFor time to longitudinal collision, TTCyi-jAs transverse collision time, dxi-jIs a vehicle C with the longitudinal axis x downwardiCenter of mass relative to vehicle CjRelative longitudinal distance of center of mass, dyi-jVehicle C in the y direction of the horizontal axisiCenter of mass relative to vehicle CjRelative transverse distance of center of mass, VxiAnd VyiFor vehicle CiAbsolute longitudinal velocity, absolute lateral velocity, LiAnd LjAre respectively Ci、CjLength of vehicle, WiAnd WjAre respectively Ci、CjWidth of vehicle of epsilonxAnd εyIs a random deviation term;
2) when TTCi-jIf < 0, let the corresponding time to collision index TTC'i-jGetting infinity:
Figure FDA0003228521190000013
Figure FDA0003228521190000014
3) based on TTC'i-jConstruction of multiple cars C0,C1,...,C6Middle adjacent vehicle CiAnd CjTime relation index TD betweeni-j
Figure FDA0003228521190000021
Figure FDA0003228521190000022
Wherein, TDxi-jAs an indication of longitudinal time relationship, TDyi-jAs an indicator of the transverse time relationship, σxAnd σyNormalized constants for the machine direction and the transverse direction;
4) by time-related index TDxi-j、TDyi-jFor weighting, respectively constructing vertical time relation adjacency matrixes ATxAnd a transverse time relationship adjacency matrix ATy
Wherein A isTx、ATyAre all symmetric matrices with a main diagonal element of 0, and if vehicle CiAnd CjIf the positions in the multi-vehicle scene are not adjacent, the element in the corresponding adjacent matrix is 0;
5) obtain a multiple-vehicle scene { O: C0,C1,...,C6Time relation undirected graph G ofTxAnd GTy
GTx=(VTx,ETx)
GTy=(VTy,ETy)
Wherein the weights of the two time-relation undirected graphs are respectively ATxAnd ATyNode VTx=VTy={C0,C1,...,C6}, connecting edge ETx=ETy={C0C1,C0C2,C0C3,C0C4,C0C5,C0C6,C1C3,C1C5,C2C4,C2C6,C3C4,C5C6}。
4. The complex scene driving risk prediction method based on the multi-spatiotemporal map as claimed in claim 3, wherein the spatial relationship matrix based on the parking visibility range SSD is specifically:
1) calculate multiple cars C0,C1,...,C6Each vehicle CiParking visibility SSDi
Figure FDA0003228521190000023
Figure FDA0003228521190000024
Wherein the SSDxiFor longitudinal parking line of sight, SSDyiFor transverse parking vision distance, VxiAnd VyiAre respectively a vehicle CiAbsolute longitudinal velocity, absolute lateral velocity, fxAnd fyRespectively longitudinal friction coefficient and transverse friction coefficient, g is gravitational acceleration, trReaction time for the driver;
2) according to multiple vehicles C0,C1,...,C6Middle adjacent vehicle CiAnd CjRelative position relation between the two vehicles, and calculate the collision distance SDI between the two vehiclesi-j
Figure FDA0003228521190000031
Figure FDA0003228521190000032
Wherein, SDIxi-jFor longitudinal collision distance, SDIyi-jIs the lateral collision distance;
3) based on collision distance SDIi-jConstruction of multiple cars C0,C1,...,C6Middle adjacent vehicle CiAnd CjSpatial relationship index SD betweeni-j
Figure FDA0003228521190000033
Figure FDA0003228521190000034
Wherein, SDxi-jAs an index of longitudinal spatial relationship, SDyi-jIs a transverse spatial relationship index;
4) by a spatial relationship index SDxi-j、SDyi-jFor weighting, respectively constructing a vertical spatial relationship adjacency matrix ASxAnd a transverse spatial relationship adjacency matrix ASy
Wherein A isSxAnd ASyAre all symmetric matricesAnd the main diagonal element is 0, and if vehicle CiAnd CjIf the positions in the multi-vehicle scene are not adjacent, the element in the corresponding adjacent matrix is 0;
5) to ASxAnd ASyPerforming standardization treatment
A′Sx=D(ASx)-1ASx
A′Sy=D(ASy)-1ASy
Wherein D (-) is a normalization coefficient function, and:
Figure FDA0003228521190000035
wherein A isi,jElements corresponding to the ith row and the jth column in the matrix A are shown;
6) obtain a multiple-vehicle scene { O: C0,C1,...,C6} spatial relationship undirected graph GSxAnd GSy
GSx=(VSx,ESx)
GSy=(VSy,ESy)
Wherein the weights of the two spatial relationship undirected graphs are respectively A'SxAnd A'SyNode VSx=VSy={C0,C1,...,C6}, connecting edge ESx=ESy={C0C1,C0C2,C0C3,C0C4,C0C5,C0C6,C1C3,C1C5,C2C4,C2C6,C3C4,C5C6}。
5. The multi-space-time diagram-based complex scene driving risk prediction method according to claim 4, wherein the process of fusing the multi-space-time diagrams is as follows: g is to beTx、GTy、GSxAnd GSyAccording to the weight vector (W)Tx,WTy,WSx,WSy) And (3) performing weighted fusion:
Af=WTxATx+WTyATy+WSxA′Sx+WSyA′Sy
wherein, WTx,WTy,WSx,WSyBelongs to (0,1) as a self-learning space-time diagram weight coefficient and satisfies a relation WTx+WTy+WSx+WSy1 is ═ 1; finally, a multi-vehicle scene { O: C:0,C1,...,C6mixed space-time relationship diagram G off
Gf=(Vf,Ef)
Wherein: the weight of the mixed space-time relationship graph is AfNode Vf={C0,C1,...,C6}, connecting edge Ef={C0C1,C0C2,C0C3,C0C4,C0C5,C0C6,C1C3,C1C5,C2C4,C2C6,C3C4,C5C6}, the node feature vector is Fi=(dxi-0,dyi-0,Vxi,Vyi,axi,ayi) I is 0,1,. 6; merging the characteristics of all nodes to obtain a graph GfInitial graph feature matrix of (1):
Figure FDA0003228521190000041
6. the method for predicting driving risk in complex scene based on multi-space-time diagram according to claim 5, wherein the characteristic propagation rule of each layer network diagram of the graph convolution neural network is as follows:
Hl+1=σ[D(Af)-1AfHlWl+HlBl]
wherein: sigma is a Sigmoid activation function; hlThe first layer diagram characteristic and the 0 th layer diagram characteristic are initial diagram characteristics; wlIs a self-learning first layer convolution weight matrix; b islIs a self-learnable weight matrix;
connecting the k-layer characteristics and the initial graph characteristics in series to obtain a multi-vehicle scene { O: C0,C1,...,C6The joint feature vector of the multi-space-time diagram: h ═ H0,H1,...,Hk)。
7. The method for predicting driving risk in complex scene based on multi-space-time diagram according to claim 5, wherein the obtaining process of the multi-vehicle space-time sequence sample input of different risk states comprises:
(1) acquiring the transverse relative distance, the longitudinal relative distance, the transverse speed, the longitudinal speed, the transverse acceleration and the longitudinal acceleration of each vehicle at each sampling point in the past T multiplied by 0.1s time window from the observation time point T based on the historical driving track data of the vehicles; t represents the number of sampling points obtained by sampling the historical driving track data of the vehicle in the past T multiplied by 0.1s time window from the observation time point T;
(2) calculating the multi-space-time map combined characteristic vector of the multi-vehicle scene at each sampling point, and recording HtCombining the time sequence X for the multi-space-time diagram of the multi-vehicle scene at the sampling point t with the characteristic vectort={Ht-T+1,Ht-T+2,...,HtAnd inputting as multi-vehicle space-time sequence samples.
8. The method for predicting driving risk of complex scene based on multi-space-time diagram according to claim 1, wherein the method for determining the output state of the multi-vehicle space-time sequence sample in different risk states comprises:
(1) based on the historical driving track data of the vehicle, the peripheral vehicle C in the time window of T' multiplied by 0.1s from the beginning of the observation time point T is obtainediAnd bicycle C0Set of longitudinal collision distance indices { SDIxi-0(t+1),SDIxi-0(t+2),...,,SDIxi-0(t+T′)},i=1,2,…,6;T 'represents the number of sampling points obtained by sampling the historical driving track data of the vehicle in a time window of T' × 0.1s in the future from the observation time point T;
(2) calculating bicycle C0And vehicle CiCollision probability indicator in future T' x 0.1s time window
Figure FDA0003228521190000051
Wherein R is0(i) For SDI within a time windowxi-0The number of observation points is less than or equal to 0;
(3) calculating bicycle C0And vehicle CiCrash severity indicator in future T' × 0.1s time window
Figure FDA0003228521190000052
Wherein SDImax(i) For SDI within a time windowxi-0The observed value with the maximum absolute value under the condition of less than or equal to 0, SDIcriIs SDIxi-0The maximum possible value of the absolute value under the condition of less than or equal to 0;
(4) with the own vehicle and each vehicle C aroundiThe collision probability and the collision severity of the vehicle are basic events for the own vehicle and the surrounding vehicles CiTaking the total collision risk of the own vehicle and the surrounding multi-vehicle scene as an overhead event, and calculating the system risk index of the total collision risk overhead event in a future T' multiplied by 0.1s time window:
Figure FDA0003228521190000053
wherein etai=Pc(i)·Sc(i) For the vehicles C in the future T' x 0.1s time windowiThe product of the collision probability and the collision severity, constant kiTaking different values according to the lane keeping or changing behavior of the vehicle in the time window;
(5) comparing the system risk index value with the risk threshold value of the corresponding type, and determining the output state of the multi-workshop collision risk sample; wherein the size of the risk state threshold may be determined and adjusted according to the system risk indicator value ranking percentage.
9. The multi-spatio-temporal graph-based complex scene driving risk prediction method according to claim 1, wherein the driving risk prediction model is a multi-spatio-temporal graph-based driving risk GCN-LSTM prediction model, and specifically, the number of layers of a graph convolution neural network, the number of hidden layers, the number of hidden layer nodes, a random inactivation Dropout rate, an L2 regularization coefficient, and a learning rate attenuation coefficient of an LSTM are used as prediction model hyper-parameters, a convolution weight matrix in the graph convolution neural network and an input weight matrix and a bias term in an LSTM memory unit are used as prediction model training parameters, and a multi-spatio-temporal sequence sample based on different risk states is finally trained to obtain a multi-spatio-temporal graph-based driving risk GCN-LSTM prediction model.
CN202110979340.3A 2021-08-25 2021-08-25 Complex scene driving risk prediction method based on multi-time space diagram Active CN113762473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110979340.3A CN113762473B (en) 2021-08-25 2021-08-25 Complex scene driving risk prediction method based on multi-time space diagram

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110979340.3A CN113762473B (en) 2021-08-25 2021-08-25 Complex scene driving risk prediction method based on multi-time space diagram

Publications (2)

Publication Number Publication Date
CN113762473A true CN113762473A (en) 2021-12-07
CN113762473B CN113762473B (en) 2024-04-12

Family

ID=78791124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110979340.3A Active CN113762473B (en) 2021-08-25 2021-08-25 Complex scene driving risk prediction method based on multi-time space diagram

Country Status (1)

Country Link
CN (1) CN113762473B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333416A (en) * 2021-12-24 2022-04-12 阿波罗智能技术(北京)有限公司 Vehicle risk early warning method and device based on neural network and automatic driving vehicle
CN114613127A (en) * 2022-02-10 2022-06-10 江苏大学 Driving risk prediction method based on multi-layer multi-dimensional index system
CN114707856A (en) * 2022-04-02 2022-07-05 河南鑫安利安全科技股份有限公司 Risk identification analysis and early warning system based on computer vision
CN116246492A (en) * 2023-03-16 2023-06-09 东南大学 Vehicle lane change collision risk prediction method based on space-time attention LSTM and super-threshold model
CN116978236A (en) * 2023-09-25 2023-10-31 南京隼眼电子科技有限公司 Traffic accident early warning method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110352153A (en) * 2018-02-02 2019-10-18 辉达公司 It is analyzed in autonomous vehicle for the security procedure of Obstacle avoidance
CN112686281A (en) * 2020-12-08 2021-04-20 深圳先进技术研究院 Vehicle track prediction method based on space-time attention and multi-stage LSTM information expression
CN113642522A (en) * 2021-09-01 2021-11-12 中国科学院自动化研究所 Audio and video based fatigue state detection method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110352153A (en) * 2018-02-02 2019-10-18 辉达公司 It is analyzed in autonomous vehicle for the security procedure of Obstacle avoidance
CN112686281A (en) * 2020-12-08 2021-04-20 深圳先进技术研究院 Vehicle track prediction method based on space-time attention and multi-stage LSTM information expression
CN113642522A (en) * 2021-09-01 2021-11-12 中国科学院自动化研究所 Audio and video based fatigue state detection method and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333416A (en) * 2021-12-24 2022-04-12 阿波罗智能技术(北京)有限公司 Vehicle risk early warning method and device based on neural network and automatic driving vehicle
CN114613127A (en) * 2022-02-10 2022-06-10 江苏大学 Driving risk prediction method based on multi-layer multi-dimensional index system
CN114707856A (en) * 2022-04-02 2022-07-05 河南鑫安利安全科技股份有限公司 Risk identification analysis and early warning system based on computer vision
CN116246492A (en) * 2023-03-16 2023-06-09 东南大学 Vehicle lane change collision risk prediction method based on space-time attention LSTM and super-threshold model
CN116246492B (en) * 2023-03-16 2024-01-16 东南大学 Vehicle lane change collision risk prediction method based on space-time attention LSTM and super-threshold model
CN116978236A (en) * 2023-09-25 2023-10-31 南京隼眼电子科技有限公司 Traffic accident early warning method, device and storage medium
CN116978236B (en) * 2023-09-25 2023-12-15 南京隼眼电子科技有限公司 Traffic accident early warning method, device and storage medium

Also Published As

Publication number Publication date
CN113762473B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN113762473B (en) Complex scene driving risk prediction method based on multi-time space diagram
Altché et al. An LSTM network for highway trajectory prediction
US11475351B2 (en) Systems and methods for object detection, tracking, and motion prediction
Gruyer et al. Perception, information processing and modeling: Critical stages for autonomous driving applications
Gu et al. A novel lane-changing decision model for autonomous vehicles based on deep autoencoder network and XGBoost
DE102020133744A1 (en) FOREGROUND EXTRACTION USING AREA ADJUSTMENT
GB2608567A (en) Operation of a vehicle using motion planning with machine learning
CN101633358A (en) Adaptive vehicle control system with integrated driving style recognition
GB2600196A (en) Vehicle operation using a dynamic occupancy grid
WO2022134711A1 (en) Driving style recognition method, assisted driving method, and apparatuses
CN110182217A (en) A kind of traveling task complexity quantitative estimation method towards complicated scene of overtaking other vehicles
DE102021124913A1 (en) METRIC BACKPROPAGATION FOR EVALUATION OF SUBSYSTEMS PERFORMANCE
US20230005173A1 (en) Cross-modality active learning for object detection
GB2605463A (en) Selecting testing scenarios for evaluating the performance of autonomous vehicles
Saunier et al. Mining microscopic data of vehicle conflicts and collisions to investigate collision factors
CN114932918A (en) Behavior decision method and system for intelligent internet vehicle to drive under various road conditions
CN110097571B (en) Quick high-precision vehicle collision prediction method
Gao et al. Discretionary cut-in driving behavior risk assessment based on naturalistic driving data
Wang et al. ARIMA model and few-shot learning for vehicle speed time series analysis and prediction
Azadani et al. Toward driver intention prediction for intelligent vehicles: A deep learning approach
Ma et al. Prediction and analysis of likelihood of freeway crash occurrence considering risky driving behavior
Selvaraj et al. Edge learning of vehicular trajectories at regulated intersections
Zhao et al. Improving Autonomous Vehicle Visual Perception by Fusing Human Gaze and Machine Vision
Rakha et al. Agent-based game theory modeling for driverless vehicles at intersections
CN113276860B (en) Vehicle control method, device, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant