CN113954864A - Intelligent automobile track prediction system and method fusing peripheral vehicle interaction information

Info

Publication number
CN113954864A
Authority
CN
China
Prior art keywords
lane
information
vehicle
map
track
Prior art date
Legal status
Granted
Application number
CN202111105338.XA
Other languages
Chinese (zh)
Other versions
CN113954864B (en)
Inventor
蔡英凤
胡启慧
滕成龙
饶中钰
王海
陈龙
李祎承
刘擎超
孙晓强
Current Assignee
Jiangsu University
Original Assignee
Jiangsu University
Priority date
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202111105338.XA priority Critical patent/CN113954864B/en
Publication of CN113954864A publication Critical patent/CN113954864A/en
Application granted granted Critical
Publication of CN113954864B publication Critical patent/CN113954864B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B60W60/0011 Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
    • B60W50/0097 Predicting future conditions
    • G06F18/25 Fusion techniques
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • B60W2556/40 High definition maps

Abstract

The invention discloses an intelligent automobile track prediction system and method fusing peripheral vehicle interaction information, belonging to the technical field of intelligent driving. The invention provides a graph convolutional neural network that considers the interaction of surrounding vehicles, solving the problem that existing track prediction algorithms ignore the information interaction of surrounding vehicles. A method is provided for extracting map information from a high-definition vector map instead of a bird's-eye view: the vector map defines the geometric shape of each lane, reducing the prediction discretization caused by limited rasterization resolution. A mode of fusing the space-time relationship between the vehicle and the driving scene is provided, in which new lane features are introduced to represent the generalized geometric relationship between the vehicle and the lanes, effectively improving the accuracy of track prediction when facing lanes of different shapes and numbers. A multi-Seq2Seq stacking structure is provided to predict the vehicle's multi-modal future tracks and the probability of selecting different lanes, overcoming the limitation of single-track output.

Description

Intelligent automobile track prediction system and method fusing peripheral vehicle interaction information
Technical Field
The invention belongs to the technical field of intelligent driving, and particularly relates to an intelligent automobile track prediction system and method fusing peripheral vehicle interaction information.
Background
With the development of intelligent automobile technology and the rise of 5G communication technology, researchers at home and abroad increasingly study autonomous driving, one of whose main purposes is to reduce traffic accidents. The decision-making system, as a core part of autonomous driving technology, needs to predict in real time a driving track that avoids surrounding obstacles, which is vital for the safe driving of the vehicle. It is the brain of autonomous driving: it plans a safe and reasonable optimal track for the intelligent automobile, mainly according to the driving information sensed by on-board sensors and other traffic-participant information, such as the positions, speeds and lane lines of surrounding vehicles, acquired via V2X.
At present, the main research direction of vehicle decision systems is oriented to the intelligent vehicle's own state, predicting the future track from collected historical track data. The methods used fall into two categories: methods based on traditional physical models and methods based on neural network prediction. The first category relies on models such as the constant-velocity model, the bicycle model and the Kalman filter to generate the predicted vehicle's future track from historical data representing physical motion; however, these methods rarely consider the influence of surrounding vehicles and require parameter tuning for each situation, so neither real-time performance nor accuracy can be well guaranteed. The second category is based on neural network prediction, mainly using the recurrent neural network (RNN), the long short-term memory network (LSTM), the convolutional neural network (CNN) and the like, generating future tracks by encoding and decoding the vehicle's historical tracks. This category has been shown to outperform the traditional physical models, but it does not fully mine the environmental data features and cannot make good use of the interaction information between the vehicle and its surroundings.
In fact, smart cars must share roads with surrounding vehicles while traveling, and their travel trajectories are affected and constrained by the road environment, e.g., lane geometry, crosswalks, traffic lights, and the behavior of other vehicles. In view of this, on the basis of existing neural network methods, the invention provides an intelligent automobile track prediction method fusing the interaction information of surrounding vehicles, designed by considering the influence of the driving scene and the surrounding vehicle environment on track prediction and combining a dynamic graph neural network with a lane graph neural network.
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide an intelligent automobile track prediction system and method fusing peripheral vehicle interaction information, so as to solve the problem that the track prediction accuracy is influenced by neglecting the interaction between a self automobile and peripheral vehicles in the prior art, and provide guarantee for the safe and efficient running of an intelligent automobile.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the invention discloses an intelligent automobile track prediction system fusing peripheral vehicle interaction information, which, as shown in figure 1, comprises four modules: a vehicle interaction relation extraction module, a driving scene representation module, a space-time relation fusion module and a track prediction module. The vehicle interaction relation extraction module defines an influence threshold of 10 meters over the vehicle historical track data sensed by the sensors (mainly position coordinates), constructs an interaction graph representing the interaction between the vehicle and surrounding vehicles, feeds the original track-sequence coordinates and the interaction graph into a graph convolutional network (GCN), and outputs track data representing the inter-vehicle interaction graph.
The driving scene representation module constructs an interaction graph among lane segments from the originally sensed map information M, representing each lane's predecessor, successor, left-neighbor and right-neighbor segments, and then feeds this graph together with the original map information M into a lane graph convolutional network, outputting map data representing the lane interaction relations.
The space-time relation fusion module fuses the data of the two modules above. It first transmits the track information representing the inter-vehicle interaction graph to the map data, capturing traffic congestion and lane usage; it then updates the map data fused with the track information through a lane graph convolutional network, realizing real-time interconnection between lane segments and outputting map features that implicitly contain vehicle information; finally, it feeds the updated real-time map features and the original track information back to the vehicle, outputting historical track information that implicitly encodes both real-time map interaction and surrounding-vehicle interaction.
The track prediction module takes as input the historical track data fused by the space-time relation fusion module and decodes the vehicle's two-dimensional track coordinates at future times through an encoder and a decoder; meanwhile, by stacking multiple encoder-decoder pairs and setting a classification loss, it outputs multiple modes. The final output track coordinates are therefore represented as several sets of future track values, representing multiple possible future trajectories for the same vehicle.
Further, the vehicle interaction relation extraction module comprises the construction of a vehicle interaction graph G and a graph convolutional network GCN. The vehicle interaction graph G receives the historical track information X of the vehicle and the surrounding vehicles and describes the vehicles' interaction at the temporal and spatial levels in the form of graph matrices; the graph matrices and the historical tracks X are input into the GCN to capture the complex interactions between different traffic vehicles, obtaining historical track information enriched with interaction information;
further, the driving scene representation module comprises a high-definition vector map M, an interactive lane map graph, a lane map convolution GCN and a full connection layer FC 1. Considering the influence of driving scene information (including lane center lines, steering and traffic control) on the target vehicle track, acquiring lane information of a high-definition vector map M by using an interactive lane map, inputting the lane information and the high-definition vector map M into a lane map convolution GCN network, and extracting map characteristic information through a full-connection layer FC 1;
further, the space-time relationship fusion module is divided into three units, wherein the first unit receives historical track information and map characteristic information, introduces real-time vehicle information to a lane node through a layer of Attention mechanism Attention and a full connection layer FC2, obtains the service condition of a lane, and outputs map data containing the historical track information of the vehicle; the second unit receives the output of the first unit, historical track information and map feature information, and updates lane node features by transmitting lane information through a lane graph convolution GCN layer and a full-connection layer FC 3; and the third unit receives the historical track information and the map characteristic information, and performs real-time traffic information fusion with the updated characteristics of the second unit through Attention mechanism Attention and full connection layer FC 4. The three units acquire information flow from the vehicle to the lane, from the lane to the lane and from the lane to the vehicle by constructing a stack of sequential circulating fusion blocks, so that the transmission of real-time traffic information is realized, and finally the track information of the vehicle is output to a track prediction module;
further, the trajectory prediction module includes an encoder GRU of Seq2Seq, a decoder GRU, and a last observed frame coordinate. Firstly, an encoder GRU receives fusion characteristic information from a space-time relation fusion module as input, performs time dimension encoding, then inputs the fusion characteristic information and observation frame coordinates into a decoder GRU, and repeatedly decodes BEV track coordinate values of future time step lengths. And in addition, a classification branch is used for predicting the confidence score of each mode to obtain K mode tracks of the vehicle.
The invention also provides an intelligent automobile track prediction method fusing the peripheral vehicle interaction information, which comprises the following steps:
s1: firstly, preprocessing an interaction graph G between input historical tracks of a predicted vehicle and peripheral vehicles and the vehicle; processing the history track into n x thThree-dimensional array form of x c, where n represents the pastObserve n objects in the traffic scene, thReferring to a history period, c-2 denotes x and y coordinates of the object;
The inter-vehicle interaction graph G is represented as G = (V, E), where V denotes the nodes of the graph, i.e. the observed vehicles, and the feature vector on each node is the object's coordinates at time t; E denotes the interaction edges between vehicles, represented by adjacency matrices. Considering that vehicles are connected on the spatio-temporal level, namely spatial edges for the interaction between different vehicles induced by their distance, and temporal edges linking each vehicle with its own history, the interaction graph G is expressed as a pair of adjacency matrices:

G = {A_0, A_1}

where A_0 is the temporal-edge adjacency matrix and A_1 is the spatial-edge adjacency matrix;
s2: mapping the historical track and the interactive graph G to a high-dimensional convolution layer through a layer of two-dimensional convolution layer; then performing space-time interaction through the two layers of graph volume layers; the convolution kernel of space-time interaction comprises two parts, namely an interaction graph G of a current observation frame and a training graph G with the same size as Gtrain(ii) a From G and GtrainExtracting space mutual information by the convolution network with the sum as convolution kernel, and then making nxt on the time convolution layer with the fixed convolution kernel size of (1 x 3) on the time levelhProcessing data along time dimension by x c dimension data, and after alternately processing spatial layer and time layer, outputting n x t dimension datahTrajectory data of the inter-vehicle interaction map of xc;
s3: extracting features according to the map data, and obtaining a structured map representation from the vectorized map data;
s3.1: firstly, constructing a lane map according to map data: according to the acquired lane data center line lcenThe lane data center line lcenRepresenting as a series of two-dimensional aerial view angle coordinate points, acquiring any two connected lane information, namely left adjacent, right adjacent, front section and successor, processing the two connected lane information into four connected dictionaries corresponding to lane ids, and respectively representing the previous section of lane L of the given lane LFront sectionRear connecting lane LSubsequent operationLeft adjacent lane LLeft adjacent toAnd the right adjacent lane LRight adjacent toThereby obtaining a lane map;
s3.2: then, the lane map and the features in the map data are compared, and the features comprise: lane sequence number lidSequence points of lane center line lcenLane steering situation lturnWhether there is traffic control l in the laneconWhether the lane is an intersection linterInputting the data into a lane graph convolution GCN network together, and outputting map data containing lane interaction relation;
s4: the method for fusing the trajectory data of the inter-vehicle interaction relationship diagram output in the step S2 and the map data containing the lane interaction relationship output in the step S3.2 comprises the following steps:
(1) fusing vehicle information to lane nodes to master the lane congestion condition;
(2) information fusion and updating among the lane nodes so as to realize real-time interconnection among the lane sections;
(3) fusing and feeding back the map data characteristics and the real-time traffic information to the vehicle;
The information updating among the lane nodes in part (2) adopts the lane graph convolution GCN; graph convolutions are constructed with adjacency matrices carrying lane information to extract the lane interaction information;
The mutual transmission between vehicle information and lane information, i.e. parts (1) and (3), extracts the interactive features of three types of information, namely the input lane features, the vehicle features and the context node information, through a spatial attention mechanism; a context node is defined as a lane node whose ℓ2 distance to the vehicle node is less than a threshold;
The network of part (1) is arranged as follows: the n × 128 two-dimensional lane position features and the n × 4 lane property features form new map feature information which, together with the vehicles' two-dimensional feature data, serves as the unit input; after two stacked graph-attention layers and one fully connected layer, lane features carrying vehicle information are output, keeping the dimension n × 128. The lane property features comprise whether the lane turns, whether it has traffic control and whether it is an intersection;
Part (3) uses the same network setting as part (1) and finally outputs vehicle feature information containing lane information and lane interaction information, with the output dimension likewise kept at n × 128;
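A minimal sketch of the vehicle-to-lane fusion in units (1) and (3): lane nodes attend over vehicle features and the attended context is added back, preserving the lane feature dimension. This substitutes plain scaled dot-product attention for the patent's stacked graph-attention plus fully connected layers and omits the ℓ2 context-node masking, so it illustrates the data flow rather than the actual network:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_vehicles_into_lanes(lane_feat, veh_feat, Wq, Wk, Wv):
    """Lane nodes (queries) attend over vehicle nodes (keys/values); the
    attended vehicle context is added to the lane features, so the output
    keeps the lane feature dimension unchanged."""
    Q, K, V = lane_feat @ Wq, veh_feat @ Wk, veh_feat @ Wv
    att = softmax(Q @ K.T / np.sqrt(Q.shape[1]), axis=1)  # lanes x vehicles
    return lane_feat + att @ V
```

The residual addition is what keeps the n × 128 dimension intact before and after fusion, as the text requires.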
s6: outputting final motion trail prediction according to the vehicle characteristic information after S5 fusion; specifically, the method comprises the following steps:
For each vehicle agent, K possible future trajectories and their corresponding confidence scores are predicted; the prediction comprises two branches: a regression branch predicts the trajectory of each mode, and a classification branch predicts the confidence score of each mode. For the n-th participant, a Seq2Seq structure is applied in the regression branch to regress the K sequences of BEV coordinates as follows. First, the fused vehicle features are expanded to dimension n × t_h × c and input into the Seq2Seq network, feeding the vector representing each vehicle's features to the corresponding input unit of the encoder. The hidden features of the encoder are then fed to the decoder together with the vehicle's coordinates at the previous time step to predict the position coordinates of the current time step; specifically, the input to the first decoding step is the vehicle's coordinates at the last historical moment, the output of the current step is fed to the next decoder unit, and the decoding process is repeated until the model has predicted the position coordinates of all expected future time steps.
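The encoder-decoder loop described above (encode the fused features over time, then repeatedly feed each predicted coordinate back into the decoder, starting from the last historical coordinate) can be sketched with a hand-rolled GRU in NumPy. The weights here are random stand-ins, so this shows only the data flow of one regression-branch mode, not a trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_gru(in_dim, hid, rng):
    # parameters for the three gates: update z, reset r, candidate n
    W = rng.normal(0.0, 0.1, (3, in_dim, hid))
    U = rng.normal(0.0, 0.1, (3, hid, hid))
    b = np.zeros((3, hid))
    return W, U, b

def gru_step(x, h, params):
    W, U, b = params
    z = sigmoid(x @ W[0] + h @ U[0] + b[0])        # update gate
    r = sigmoid(x @ W[1] + h @ U[1] + b[1])        # reset gate
    n = np.tanh(x @ W[2] + (r * h) @ U[2] + b[2])  # candidate state
    return (1.0 - z) * h + z * n

def seq2seq_decode(features, last_coord, t_f, hid=32, seed=0):
    """Encode t_h fused feature vectors, then decode t_f future (x, y)
    coordinates, feeding each prediction back as the next decoder input."""
    rng = np.random.default_rng(seed)
    t_h, c = features.shape
    enc, dec = make_gru(c, hid, rng), make_gru(2, hid, rng)
    W_out = rng.normal(0.0, 0.1, (hid, 2))         # hidden state -> (x, y)
    h = np.zeros(hid)
    for t in range(t_h):                           # encoder: time-dimension encoding
        h = gru_step(features[t], h, enc)
    coord, future = np.asarray(last_coord, float), []
    for _ in range(t_f):                           # decoder: repeated decoding
        h = gru_step(coord, h, dec)
        coord = h @ W_out
        future.append(coord)
    return np.stack(future)                        # (t_f, 2) BEV coordinates
```

Stacking several such Seq2Seq branches, each with its own classification score, yields the K-mode output the text describes.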
Further, in S2 the graph convolution is defined as Y = LXW, where X ∈ R^(N×F) denotes the node features, W ∈ R^(F×O) denotes the weight matrix, and Y ∈ R^(N×O) denotes the output; N is the total number of input nodes, F the number of input-node features, and O the number of output-node features. The graph Laplacian matrix L ∈ R^(N×N) is expressed as:

L = D^(-1/2) (I + A) D^(-1/2)

where I, A and D are the identity matrix, adjacency matrix and degree matrix, respectively; I and A denote the self-connections and the connections between different nodes, all connections share the same weight W, and the degree matrix D is used to normalize the data.
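A direct NumPy rendering of Y = LXW with the normalized Laplacian described above; the small `alpha` guard against empty rows is borrowed from the later normalization step and is an assumption here:

```python
import numpy as np

def graph_laplacian(A, alpha=0.001):
    """L = D^(-1/2) (I + A) D^(-1/2); alpha guards against empty rows."""
    S = np.eye(A.shape[0]) + A        # self-connections plus inter-node edges
    d = S.sum(axis=1) + alpha         # degree of each node
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ S @ D_inv_sqrt

def graph_conv(X, A, W):
    """One graph-convolution layer Y = L X W."""
    return graph_laplacian(A) @ X @ W
```

With N nodes, F input features and O output features, the result has shape N × O, matching the dimension bookkeeping in the text.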
Further, before the graph convolution in S2 is performed, the interaction graph G is normalized:

A_j ← D_j^(-1/2) A_j D_j^(-1/2)

where A_j denotes the adjacency matrix constructed from the j-th data sequence and D_j denotes the corresponding degree matrix, computed as:

D_j^(ii) = Σ_k A_j^(ik) + α

The degree matrix D_j is diagonal; its i-th entry counts the nodes among the k nodes adjacent to node i, and α is set to 0.001 to avoid empty rows in A_j.
Further, the lane graph convolution GCN network in S3.2 is expressed as:

Y = X W_0 + Σ_i A_i X W_i

where A_i and W_i are the adjacency matrix and the weight matrix corresponding to the i-th lane connection type, and X is the node feature matrix; the corresponding node feature x_i, the i-th row of X, is the input feature of the i-th lane node, comprising the shape feature and the position feature of the lane, namely:

x_i = [ v_i^end − v_i^start , v_i ]

where v_i is the position of the i-th lane node, i.e. the midpoint between the two end points of the lane segment, and v_i^start and v_i^end are the start and end position coordinates of the i-th lane segment.
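The lane graph convolution can be sketched as follows; the residual X W_0 term is an assumption in the spirit of LaneGCN-style aggregation over the four connection types (predecessor, successor, left neighbor, right neighbor):

```python
import numpy as np

def lane_gcn(X, adj, weights, W0):
    """Y = X W0 + sum_i A_i X W_i, where i runs over the four lane
    connection types (predecessor, successor, left and right neighbor)."""
    Y = X @ W0
    for A_i, W_i in zip(adj, weights):
        Y = Y + A_i @ X @ W_i
    return Y
```

Each connection type gets its own weight matrix, so information propagated from a predecessor lane is transformed differently from information arriving from, say, a left neighbor.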
The invention has the beneficial effects that:
(1) the invention provides a graph convolution neural network considering peripheral vehicle interaction, which solves the problem that the information interaction of peripheral vehicles is not considered in the conventional track prediction algorithm.
(2) The invention provides a method for extracting map information from a high-definition vector map instead of a bird's-eye view; the vector map defines the geometric shape of each lane, reducing the prediction discretization caused by limited rasterization resolution.
(3) The invention provides a mode of fusing the space-time relationship between the vehicle and the driving scene, introducing new lane features to represent the generalized geometric relationship between the vehicle and the lanes, which effectively improves the accuracy of track prediction when facing lanes of different shapes and numbers.
(4) The invention provides a multi-Seq2Seq stacking structure for predicting the vehicle's multi-modal future tracks and the probability of selecting different lanes, overcoming the limitation of single-track output.
Drawings
FIG. 1 is a schematic diagram of a prediction model structure according to the present invention.
Detailed Description
The invention will be further explained with reference to the drawings.
(I) Trajectory prediction problem modeling analysis
The trajectory prediction problem can be expressed as predicting the vehicle's trajectory in future scenarios from the historical trajectory information of all objects. Specifically, the input to the model is the history track X of all observed objects within the historical time t_h:

X = { (x_t^i, y_t^i) | t = 1, …, t_h ; i = 1, …, n }

where x_t^i and y_t^i are the horizontal and vertical coordinate positions of the n observed vehicles at time t;
Further, the invention simultaneously takes into account the influence of the static environment around the vehicle, i.e. the map lane information within the scene, on the vehicle's travel; therefore, besides the vehicles' history tracks, the input also includes the map data M of the scene:

M = [ l_id, l_cen, l_turn, l_con, l_inter ]

where l_id denotes the lane serial number, l_cen the sequence points of the lane center line, l_turn the lane steering condition, l_con whether the lane has traffic control, and l_inter whether the lane is an intersection.

After training, the model outputs the future coordinate series Y from time t_h + 1 to t_h + t_f:

Y = { (x_t^i, y_t^i) | t = t_h + 1, …, t_h + t_f ; i = 1, …, n }
the raw data needs to be preprocessed before it is input into the model. Firstly, a predicted vehicle and surrounding vehicles thereof in a traffic scene are sampled at the frequency of 10Hz, and position coordinates of sampling points of all vehicles, namely the horizontal and vertical coordinates of the vehicles are obtained. The coordinates of the predicted vehicle are set to (0, 0), and the coordinates of the vehicle around the predicted vehicle are corrected to relative coordinates with the predicted vehicle as the origin, so that the generalization and robustness of the model are enhanced. And then predicting the track information of the future 3s by using the track information of the first 2s as historical data.
(II) realizing track prediction by design model
As shown in fig. 1, the trajectory prediction model fusing surrounding vehicle interaction information according to the invention comprises: a vehicle interaction relation extraction module, a driving scene representation module, a space-time relation fusion module and a track prediction module. Predicting a track with this model comprises the following steps:
the input of the vehicle interaction relation extraction module comprises two parts, namely a predicted vehicle and historical tracks of surrounding vehicles and an interaction graph G between the vehicles.
First, the input is preprocessed: the history tracks are processed into a three-dimensional array of shape n × t_h × c, where n is the number of objects observed in the traffic scene over the past time steps, t_h is the history period, and c = 2 denotes the x and y coordinates of each object.
The inter-vehicle interaction graph G is represented as G = (V, E), where V denotes the nodes of the graph, i.e. the observed vehicles, and the feature vector on each node is the object's coordinates at time t; E denotes the interaction edges between vehicles, represented by adjacency matrices at the model input. Considering that vehicles are connected on the spatio-temporal level, namely spatial edges for the mutual influence between different vehicles induced by their distance, and temporal edges linking each vehicle with its own history in the time domain, the interaction graph G is represented as:
G = {A_0, A_1}

where A_0 is the temporal-edge adjacency matrix and A_1 is the spatial-edge adjacency matrix.
During data processing, taking the last frame of the history as reference, a 10-meter area (an empirical value) centered on the predicted vehicle is defined as the influence radius r, and the distance l between each surrounding vehicle and the predicted vehicle is computed. When l ≤ r, an interaction influence is considered to exist between the vehicles and the corresponding adjacency-matrix entry is set to 1, otherwise to 0, thereby constructing the spatial adjacency matrix A_1 for the current observation; A_0 is an identity matrix I on the vehicle's own time domain.
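Constructing the spatial adjacency matrix A_1 from the 10-meter influence radius might look like this; connecting only predicted-vehicle-to-neighbor pairs (rather than all vehicle pairs) is an assumption about the construction:

```python
import numpy as np

def spatial_adjacency(last_positions, target_idx=0, r=10.0):
    """Build A_1 from the last historical frame: entry 1 where a vehicle
    lies within radius r of the predicted vehicle, 0 otherwise."""
    d = np.linalg.norm(last_positions - last_positions[target_idx], axis=1)
    n = len(last_positions)
    A1 = np.zeros((n, n))
    within = d <= r
    A1[target_idx, :] = within        # edges from the predicted vehicle...
    A1[:, target_idx] = within        # ...kept symmetric
    return A1
```

A vehicle 5 m away thus gets an edge to the predicted vehicle, while one 30 m away does not.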
After data processing, the historical tracks and the interaction graph G are passed into the vehicle interaction relation extraction module, mapped to a high-dimensional space through one 2D convolutional layer, and then subjected to space-time interaction through the two graph convolutional layers. Considering the temporal variability of the spatial interaction, the convolution kernel of the spatial interaction is composed of the sum of two parts: the interaction graph G of the current observation frame and a trainable graph G_train that has the same size as G and participates in training.
The graph convolution is defined as Y = LXW, where X ∈ ℝ^(N×F) represents the node feature matrix, W ∈ ℝ^(F×O) represents the weight matrix, and Y ∈ ℝ^(N×O) represents the output (N represents the total number of input nodes, F the number of input node features, and O the number of output node features). The graph Laplacian matrix L ∈ ℝ^(N×N) is expressed as:

L = D^(-1/2)(I + A)D^(-1/2)

where I, A and D are the identity matrix, adjacency matrix, and degree matrix, respectively. I and A denote self-connections and connections between different nodes. All connections share the same weight W, and the degree matrix D is used to normalize the data.
Therefore, to ensure that the value range of the feature map remains unchanged after the graph operation, the present invention normalizes the interaction graph G before the graph convolution operation using the following equation:

G_j = D_j^(-1/2) A_j D_j^(-1/2)

where A denotes the adjacency matrix, D the degree matrix, and j the index of the data sequence. A_j denotes the adjacency matrix constructed from the j-th data sequence and D_j the degree matrix constructed from the j-th data sequence, computed as:

D_j^(ii) = Σ_k A_j^(ik) + α

The degree matrix D_j is a diagonal matrix that counts, among the k nodes, the number of nodes adjacent to node i; α is set to 0.001 to prevent A_j from containing empty rows.
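Assuming the normalization takes the usual symmetric form D^(-1/2) A D^(-1/2), with the α term guarding against empty rows (the exact formula in the patent figures is not recoverable), it can be sketched as:

```python
import numpy as np

def normalize_adjacency(A_j, alpha=0.001):
    """Symmetrically normalize one adjacency matrix A_j.

    The degree matrix D_j is diagonal; each entry counts the
    nodes adjacent to node i, plus alpha = 0.001 so that A_j
    has no empty rows (D_j stays invertible).
    """
    deg = A_j.sum(axis=1) + alpha
    d_inv_sqrt = np.diag(deg ** -0.5)
    return d_inv_sqrt @ A_j @ d_inv_sqrt

A = np.array([[0.0, 1.0], [1.0, 0.0]])
G = normalize_adjacency(A)
# each edge weight is scaled by 1/sqrt(d_i * d_j), here about 0.999
```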
Thus, spatial interaction information is extracted by the graph convolutional network that uses the combination of G and G_train as its convolution kernel; a temporal convolutional layer with a fixed kernel size of (1 × 3) then processes the n × t_h × c data along the time dimension (the second dimension). After the alternating spatial and temporal layers, the output retains the n × t_h × c dimensions, and this data is subsequently fused with the output of the driving scene representation module.
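One spatial-then-temporal pass can be sketched schematically in plain numpy; the weights W, G_train, and the temporal kernel values below are random stand-ins for trained parameters, not the patent's values:

```python
import numpy as np

n, t_h, c = 3, 6, 16             # vehicles, history length, channels
X = np.random.randn(n, t_h, c)   # high-dimensional track features
G = np.eye(n)                    # interaction graph of the observed frame
G_train = 0.1 * np.ones((n, n))  # trainable graph of the same size (stand-in)
W = np.random.randn(c, c)        # spatial weight matrix (stand-in)

# spatial graph convolution: the kernel is the sum G + G_train
A = G + G_train
Xs = np.einsum('ij,jtc->itc', A, X) @ W

# temporal convolution with a fixed (1 x 3) kernel along dim 1,
# padded so the t_h dimension is preserved
k = np.array([0.25, 0.5, 0.25])
Xp = np.pad(Xs, ((0, 0), (1, 1), (0, 0)), mode='edge')
Xt = sum(k[i] * Xp[:, i:i + t_h, :] for i in range(3))

print(Xt.shape)  # (3, 6, 16): the n x t_h x c shape is preserved
```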
The driving scene representation module extracts features from the input map data M and learns a structured map representation from the vectorized map data. First, a lane graph is constructed from the map data M before the input module: from the acquired lane data, the lane centerline l_cen (represented as a series of two-dimensional bird's-eye-view coordinate points) is obtained, and for any lane its connected lanes, namely the left-adjacent, right-adjacent, predecessor, and successor lanes, can be acquired. The data are processed into four connectivity dictionaries indexed by lane id, which for a given lane L respectively represent its predecessor lane L_pre, successor lane L_suc, left-adjacent lane L_left, and right-adjacent lane L_right, thereby obtaining the lane graph. Then the interactive lane graph, together with the other features in the map data M (including the lane index l_id, the centerline sequence points l_cen, the lane turning state l_turn, whether the lane has traffic control l_con, and whether the lane is an intersection l_inter), is input into the lane graph convolutional GCN network. The invention obtains the lane graph convolutional GCN network by modifying the conventional graph convolution, expressed as follows:
Y = Σ_(i ∈ {pre, suc, left, right}) A_i X W_i

where A_i and W_i denote the adjacency matrix and weight matrix corresponding to the i-th lane connection mode (i.e., i ∈ {predecessor, successor, left-adjacent, right-adjacent}), and X denotes the node feature matrix. The corresponding node feature x_i, the i-th row of the node feature matrix X, represents the input features of the i-th lane node, including the shape feature and position feature of the lane, namely:

x_i = (v_i^end − v_i^start, v_i)

where v_i denotes the position of the i-th lane node, i.e. the midpoint between the two endpoints of the lane segment, and v_i^start and v_i^end denote the start and end position coordinates of the i-th lane segment, respectively.
Considering that, within a fixed historical period, a vehicle traveling at high speed produces a long historical track segment, and that this typically occurs on straight lane sections, the adjacency matrix can be enlarged by dilated convolution on straight lane sections to expand the receptive field. The lane features are then output through a fully connected layer of dimension n × 128 after the graph convolutional network.
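The lane-graph dictionaries and the multi-relation propagation can be sketched as below; the dictionary layout and the form Y = Σ_i A_i X W_i are our reading of the description, with random stand-ins for the trained weights:

```python
import numpy as np

# four connectivity dictionaries indexed by lane id:
# for each lane, its predecessor, successor, left and right neighbours
lane_graph = {
    'pre':   {1: [0]},   # lane 1 is preceded by lane 0
    'suc':   {0: [1]},   # lane 0 is succeeded by lane 1
    'left':  {1: [2]},   # lane 2 lies to the left of lane 1
    'right': {2: [1]},   # lane 1 lies to the right of lane 2
}

def to_adjacency(edges, n):
    """Turn one connectivity dictionary into an n x n adjacency matrix."""
    A = np.zeros((n, n))
    for dst, srcs in edges.items():
        for src in srcs:
            A[dst, src] = 1.0
    return A

n, f = 3, 8
X = np.random.randn(n, f)                       # lane node features
W = {rel: np.random.randn(f, f) for rel in lane_graph}

# Y = sum over the four connection modes of A_i X W_i
Y = sum(to_adjacency(lane_graph[rel], n) @ X @ W[rel]
        for rel in lane_graph)
print(Y.shape)  # (3, 8)
```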
The spatio-temporal relation fusion module mainly fuses the features output by the vehicle interaction-relation extraction module and the driving scene representation module, realizing in sequence: (1) transmitting vehicle information to the lane nodes, so as to capture lane congestion or other usage conditions; (2) updating information among the lane nodes, realizing real-time interconnection between lane segments; (3) fusing the updated map features and real-time traffic information and feeding them back to the vehicles. The information update among lane nodes in part (2) still adopts the lane graph convolutional GCN, constructing the graph convolution with adjacency matrices carrying lane information to extract lane interaction information. The mutual transmission between vehicle information and lane information, i.e., parts (1) and (3), extracts the interaction features of the three types of input information, namely lane features, vehicle features, and context-node information, through a spatial attention mechanism. A context node is defined as a lane node whose l2 distance to a vehicle node is less than a threshold, where the threshold may take an empirical value of 6 meters.
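Selecting context nodes by the l2-distance threshold (the 6 m empirical value) amounts to the following; names are illustrative only:

```python
import numpy as np

def context_nodes(lane_pos, vehicle_pos, threshold=6.0):
    """Return indices of lane nodes whose l2 distance to the
    vehicle node is below the threshold (empirically 6 m)."""
    d = np.linalg.norm(lane_pos - vehicle_pos, axis=1)
    return np.where(d < threshold)[0]

lanes = np.array([[0.0, 2.0], [0.0, 8.0], [3.0, 3.0]])
ctx = context_nodes(lanes, np.array([0.0, 0.0]))
print(ctx)  # [0 2]
```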
The network of part (1) is set up as follows: the n × 128 two-dimensional lane position information and the n × 4 lane property features (whether turning, whether under traffic control, and whether an intersection) extracted by the driving scene representation module form new map feature information, which, together with the two-dimensional vehicle feature data, serves as the unit input; after two stacked graph-attention layers and one fully connected layer, lane features carrying vehicle information are output, with the dimension maintained at n × 128. The network structure of part (3) is consistent with that of part (1); it finally extracts vehicle feature information containing lane information and lane interaction information, with the output dimension likewise kept at n × 128.
The trajectory prediction module takes the fused vehicle feature information as input, and a multi-modal prediction head outputs the final motion trajectory prediction. For each vehicle agent, K possible future trajectories and corresponding confidence scores are predicted. The prediction module therefore has two branches: a regression branch that predicts the trajectory of each mode, and a classification branch that predicts the confidence score of each mode. For the n-th participant, a Seq2Seq structure is applied in the regression branch to regress the K sequences of BEV coordinates. The specific process is as follows: first, the fused vehicle features are expanded to dimension n × t_h × c and input into the Seq2Seq network, where the vector representing the vehicle features at each time step is fed to the corresponding input unit of the encoder GRU; the hidden features of the encoder GRU, together with the vehicle's coordinates at the previous time step, are then fed to a decoder GRU to predict the position coordinates of the current time step. Specifically, the input to the first decoding step is the vehicle's coordinates at the last historical time step, and the output of the current step is fed to the next GRU unit. This decoding process is repeated until the model has predicted the position coordinates of all expected future time steps.
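The encoder-decoder loop can be sketched as follows, with a toy recurrent cell standing in for the GRUs and random stand-ins for trained weights; only the data flow (encode the feature sequence, then feed each predicted coordinate into the next decoding step) mirrors the description:

```python
import numpy as np

def toy_cell(x, h, Wx, Wh):
    """Stand-in recurrent cell (a real model would use a GRU)."""
    return np.tanh(x @ Wx + h @ Wh)

rng = np.random.default_rng(0)
t_h, t_f, c, hdim = 6, 4, 16, 32
feats = rng.standard_normal((t_h, c))   # fused features for one vehicle
last_xy = np.array([1.0, 2.0])          # coords at the last historical step

Wx_e, Wh_e = rng.standard_normal((c, hdim)), rng.standard_normal((hdim, hdim))
Wx_d, Wh_d = rng.standard_normal((2, hdim)), rng.standard_normal((hdim, hdim))
W_out = rng.standard_normal((hdim, 2))

# encoder: one input unit per time step of the vehicle features
h = np.zeros(hdim)
for t in range(t_h):
    h = toy_cell(feats[t], h, Wx_e, Wh_e)

# decoder: the first input is the last historical coordinate; each
# predicted coordinate is fed to the next decoding step
xy, track = last_xy, []
for _ in range(t_f):
    h = toy_cell(xy, h, Wx_d, Wh_d)
    xy = h @ W_out
    track.append(xy)

print(len(track), track[0].shape)  # 4 (2,)
```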
(III) model training
The method collects real vehicle data in a continuous time period in a track prediction implementation scene as a data set for model training, and a training set, a verification set and a test set used for the model training are all taken from the data set.
The invention uses the PyTorch framework to train the model. The model uses an Adam optimizer to accelerate learning, with the learning rate set to 0.001 so that training can locate the global optimum more accurately. The loss function is the sum of a lane classification error and a trajectory regression error: the lane classification loss adopts a binary hinge loss, and the trajectory regression loss adopts the root-mean-square error (RMSE). Evaluation uses the L2 distance FDE between the endpoint of the best predicted trajectory and the ground truth, and the average L2 distance ADE between the best predicted trajectory and the ground truth.
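For a single best-mode trajectory, the two evaluation metrics reduce to:

```python
import numpy as np

def ade_fde(pred, gt):
    """ADE: mean l2 distance over all predicted steps;
    FDE: l2 distance at the final predicted step."""
    d = np.linalg.norm(pred - gt, axis=-1)
    return d.mean(), d[-1]

pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
gt   = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
ade, fde = ade_fde(pred, gt)
print(ade, fde)  # 1.0 1.0
```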
And the training turns are adjusted in real time according to actual requirements and training effects, and the model parameter file is saved once after each training turn.
The above-listed series of detailed descriptions are merely specific illustrations of possible embodiments of the present invention, and they are not intended to limit the scope of the present invention, and all equivalent means or modifications that do not depart from the technical spirit of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. An intelligent automobile track prediction system fusing peripheral vehicle interaction information is characterized by comprising a vehicle interaction relation extraction module, a driving scene representation module, a time-space relation fusion module and a track prediction module;
the vehicle interaction relation extraction module is used for constructing an interaction graph representing the interaction relations between the ego vehicle and the surrounding vehicles within a threshold range according to vehicle historical track data consisting mainly of position coordinates, and inputting the original track sequence coordinates and the interaction graph together into a graph convolutional network (GCN) to obtain track data representing the inter-vehicle interaction relation graph;
the driving scene representation module constructs an interactive relationship graph among lane sections according to the originally sensed map information, namely, the interactive relationship graph represents the front section, the subsequent section, the left adjacent lane and the right adjacent lane of the lane, and then the interactive relationship graph and the original map information are input into a lane graph convolution network together to output map data representing lane interactive relationship;
the space-time relationship fusion module fuses data output by the vehicle interactive relationship extraction module and the driving scene representation module, transmits track information representing an interactive relationship graph between vehicles to map data, and grasps lane congestion or lane use conditions; then updating the map data information fused with the track information at the moment through a lane graph convolution network to realize real-time interconnection between lane segments, and outputting map characteristic data implicitly containing vehicle information; finally, the updated real-time map features and the original track information are fed back to the vehicle, and the output information implicitly represents the historical track information with the real-time map interaction and the peripheral vehicle interaction;
the track prediction module takes as input the historical track information fused by the spatio-temporal relation fusion module; two-dimensional track coordinates of the vehicle at future moments are decoded through the encoder and decoder; meanwhile, by stacking multiple encoding and decoding passes with a classification loss, multiple modes are output, and the final output track coordinates are expressed as multiple groups of future track values, representing several possible future tracks for the same vehicle.
2. The intelligent vehicle track prediction system fusing the surrounding vehicle interaction information as claimed in claim 1, wherein the vehicle interaction relationship extraction module comprises a vehicle historical track X module, a vehicle interaction graph G construction module and a graph convolution network GCN module; the vehicle interaction graph G construction module receives the historical track information X of the vehicle and its surrounding vehicles from the vehicle historical track X module and describes the interaction relations of the vehicles at the temporal and spatial levels in the form of graph matrices, which are then input, together with the historical tracks X, into the graph convolution GCN network to capture the complex interactions between different traffic vehicles and obtain historical track information carrying interaction information.
3. The intelligent automobile track prediction system fusing the interaction information of surrounding vehicles according to claim 1, wherein the driving scene representation module comprises a high-definition vector map module, an interactive lane graph module, a lane graph convolution GCN module and a fully connected FC1 module; considering the influence of driving scene information, including lane centerlines, steering information and traffic control information, on the track of the target vehicle, the interactive lane graph module collects the lane information of the high-definition vector map; the interactive lane graph and the high-definition vector map are input into the lane graph convolution GCN module together, and map feature information, which includes the lane information, is extracted through the fully connected layer FC1.
4. The intelligent automobile track prediction system fusing the interactive information of surrounding vehicles as claimed in claim 1, wherein the spatiotemporal relationship fusion module comprises three units, the three units are used for acquiring information flow from vehicle to lane, lane to lane and lane to vehicle by constructing a stack of sequential circular fusion blocks, so as to realize the transmission of real-time traffic information, and finally outputting the track information of the vehicle to the track prediction module.
5. The intelligent vehicle track prediction system fusing the surrounding vehicle interaction information as claimed in claim 4, wherein the three units are: the first unit receives historical track information and map characteristic information, transmits real-time vehicle information to a lane node through a first Attention mechanism Attention and a full connection layer FC2, acquires the use condition of a lane, and outputs map data containing the historical track information of the vehicle; the second unit receives the output of the first unit, historical track information and map feature information, and updates lane node features by transmitting lane information through a lane graph convolution GCN layer and a full-connection layer FC 3; and the third unit receives the historical track information and the map characteristic information, and performs real-time traffic information fusion with the updated lane node characteristics of the second unit through Attention mechanism Attention and full connection layer FC 4.
6. The intelligent automobile track prediction system fusing the interaction information of surrounding vehicles as claimed in claim 1, wherein the track prediction module comprises a Seq2Seq encoder, a decoder and an observation-frame coordinate module; first, the encoder receives the fused traffic-information features from the spatio-temporal relation fusion module as input and performs time-dimension encoding; then the encoding and the observation-frame coordinates are input into the decoder, which repeatedly decodes the BEV track coordinate values for the future time steps; a classification branch predicts the confidence score of each mode, yielding the K modal tracks of the ego vehicle.
7. An intelligent automobile track prediction method fusing peripheral vehicle interaction information is characterized in that,
S1: firstly, preprocessing the input historical tracks of the predicted vehicle and surrounding vehicles and the inter-vehicle interaction graph G; the historical tracks are processed into a three-dimensional array of the form n × t_h × c, where n denotes the n objects observed in the traffic scene over the past time steps, t_h denotes the historical period, and c = 2 denotes the x and y coordinates of the object;
representing the inter-vehicle interaction graph G as G = (V, E), where V denotes the nodes of the graph, i.e. the observed vehicles, and the feature vector on each node is the coordinates of the object at time t; E denotes the interactive connecting edges between vehicles, represented by adjacency matrices; considering that connecting edges exist between vehicles in the spatio-temporal sense, namely edges for the distance-dependent interaction between different vehicles in space and edges connecting each vehicle to its own historical time steps, the interaction graph G is expressed in adjacency-matrix form as:

G={A0,A1}

where A0 is the temporal connecting-edge adjacency matrix and A1 is the spatial connecting-edge adjacency matrix;
S2: mapping the historical tracks and the interaction graph G to a high-dimensional space through one two-dimensional convolutional layer, then performing spatio-temporal interaction through two graph convolutional layers; the convolution kernel of the spatio-temporal interaction comprises two parts, namely the interaction graph G of the current observation frame and a trainable graph G_train of the same size as G; spatial interaction information is extracted by the graph convolutional network with G + G_train as the convolution kernel, and a temporal convolutional layer with fixed kernel size (1 × 3) then processes the n × t_h × c data along the time dimension; after the alternating spatial and temporal layers, trajectory data of the inter-vehicle interaction graph with dimension n × t_h × c are output;
s3: extracting features according to the map data, and obtaining a structured map representation from the vectorized map data;
S3.1: firstly, constructing a lane graph from the map data: according to the acquired lane centerline l_cen, represented as a series of two-dimensional bird's-eye-view coordinate points, the information of any two connected lanes, namely left-adjacent, right-adjacent, predecessor, and successor, is acquired and processed into four connectivity dictionaries indexed by lane id, which for a given lane L respectively represent the predecessor lane L_pre, successor lane L_suc, left-adjacent lane L_left, and right-adjacent lane L_right, thereby obtaining the lane graph;
S3.2: then the lane graph, together with the features in the map data, including the lane index l_id, the centerline sequence points l_cen, the lane turning state l_turn, whether the lane has traffic control l_con, and whether the lane is an intersection l_inter, is input into the lane graph convolutional GCN network, which outputs map data containing the lane interaction relations;
s4: the method for fusing the trajectory data of the inter-vehicle interaction relationship diagram output in the step S2 and the map data containing the lane interaction relationship output in the step S3.2 comprises the following steps:
(1) fusing vehicle information to lane nodes to master the lane congestion condition;
(2) information fusion and updating among the lane nodes so as to realize real-time interconnection among the lane sections;
(3) fusing and feeding back the map data characteristics and the real-time traffic information to the vehicle;
the information updating among the lane nodes of the part (2) adopts a lane graph convolution GCN mode, and an adjacent matrix with lane information is used for constructing graph convolution to extract lane interaction information;
the mutual transmission between vehicle information and lane information, i.e., parts (1) and (3), extracts the interaction features of the three types of input information, namely lane features, vehicle features, and context-node information, through a spatial attention mechanism; a context node is defined as a lane node whose l2 distance to a vehicle node is less than a threshold;
the network of part (1) is set up as follows: the n × 128 two-dimensional lane position information and the n × 4 lane property features form new map feature information, which together with the two-dimensional vehicle feature data serves as the unit input; after two stacked graph-attention layers and one fully connected layer, lane features with vehicle information are output, with the dimension kept at n × 128; the lane property features include whether the lane turns, whether it is under traffic control, and whether it is an intersection;
the part (3) is the same as the part (1) in network setting, and finally vehicle characteristic information containing lane information and lane interaction information is output, and the dimension output is also kept to be n x 128;
s6: outputting final motion trail prediction according to the vehicle characteristic information after S5 fusion; specifically, the method comprises the following steps:
for each vehicle agent, K possible future trajectories and corresponding confidence scores are predicted, the prediction comprising two branches: a regression branch predicts the trajectory of each mode, and a classification branch predicts the confidence score of each mode; for the n-th participant, a Seq2Seq structure is applied in the regression branch to regress the K sequences of BEV coordinates, with the following specific process: firstly, the fused vehicle features are expanded to dimension n × t_h × c and input into the Seq2Seq network, where the vectors representing the vehicle features are fed to the respective input units of the encoder; the hidden features of the encoder, together with the vehicle's coordinates at the previous time step, are then fed to the decoder to predict the position coordinates of the current time step; in particular, the input to the first decoding step is the vehicle's coordinates at the last historical time step, the output of the current step is fed to the next decoder unit, and the decoding process is repeated until the model has predicted the position coordinates of all expected future time steps.
8. The method as claimed in claim 7, wherein in S2 the graph convolution is defined as Y = LXW, where X ∈ ℝ^(N×F) represents the node feature matrix, W ∈ ℝ^(F×O) represents the weight matrix, and Y ∈ ℝ^(N×O) represents the output; N represents the total number of input nodes, F the number of input node features, and O the number of output node features; the graph Laplacian matrix L ∈ ℝ^(N×N) is expressed as:

L = D^(-1/2)(I + A)D^(-1/2)

where I, A and D are the identity matrix, adjacency matrix and degree matrix, respectively; I and A denote the self-connections and the connections between different nodes; all connections share the same weight W, and the degree matrix D is used to normalize the data.
9. The intelligent vehicle trajectory prediction method fusing the interaction information of nearby vehicles according to claim 7, wherein before the graph convolution in S2, the interaction graph G is normalized:

G_j = D_j^(-1/2) A_j D_j^(-1/2)

where A denotes the adjacency matrix, D the degree matrix, and j the data-sequence index; A_j denotes the adjacency matrix constructed from the j-th data sequence and D_j the degree matrix constructed from the j-th data sequence, computed as:

D_j^(ii) = Σ_k A_j^(ik) + α

the degree matrix D_j is a diagonal matrix that counts, among the k nodes, the number of nodes adjacent to node i; α is set to 0.001 to prevent A_j from having empty rows.
10. The method for predicting the intelligent vehicle track fusing the interaction information of surrounding vehicles according to claim 7, wherein the lane graph convolutional GCN network in S3.2 is expressed as:

Y = Σ_(i ∈ {pre, suc, left, right}) A_i X W_i

where A_i and W_i denote the adjacency matrix and weight matrix corresponding to the i-th lane connection mode, X denotes the node feature matrix, and the corresponding node feature x_i, the i-th row of the node feature matrix X, represents the input features of the i-th lane node, including the shape feature and position feature of the lane, namely:

x_i = (v_i^end − v_i^start, v_i)

where v_i denotes the position of the i-th lane node, i.e. the midpoint between the two endpoints of the lane segment, and v_i^start and v_i^end denote the start and end position coordinates of the i-th lane segment, respectively.
CN202111105338.XA 2021-09-22 2021-09-22 Intelligent automobile track prediction system and method integrating peripheral automobile interaction information Active CN113954864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111105338.XA CN113954864B (en) 2021-09-22 2021-09-22 Intelligent automobile track prediction system and method integrating peripheral automobile interaction information


Publications (2)

Publication Number Publication Date
CN113954864A true CN113954864A (en) 2022-01-21
CN113954864B CN113954864B (en) 2024-05-14

Family

ID=79461815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111105338.XA Active CN113954864B (en) 2021-09-22 2021-09-22 Intelligent automobile track prediction system and method integrating peripheral automobile interaction information

Country Status (1)

Country Link
CN (1) CN113954864B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627644A (en) * 2022-03-02 2022-06-14 北京航空航天大学 Intersection track prediction method based on graph convolution network and gated loop network
CN114757355A (en) * 2022-04-08 2022-07-15 中国科学技术大学 Track data set difference measurement method, system, equipment and storage medium
CN114898585A (en) * 2022-04-20 2022-08-12 清华大学 Intersection multi-view-angle-based vehicle track prediction planning method and system
CN114926823A (en) * 2022-05-07 2022-08-19 西南交通大学 WGCN-based vehicle driving behavior prediction method
CN115009275A (en) * 2022-08-08 2022-09-06 北京理工大学前沿技术研究院 Vehicle track prediction method and system in urban scene and storage medium
CN115540893A (en) * 2022-11-30 2022-12-30 广汽埃安新能源汽车股份有限公司 Vehicle path planning method and device, electronic equipment and computer readable medium
CN115909749A (en) * 2023-01-09 2023-04-04 广州通达汽车电气股份有限公司 Vehicle operation road risk early warning method, device, equipment and storage medium
CN115937801A (en) * 2023-03-08 2023-04-07 斯润天朗(北京)科技有限公司 Vehicle track prediction method and device based on graph convolution
CN116203971A (en) * 2023-05-04 2023-06-02 安徽中科星驰自动驾驶技术有限公司 Unmanned obstacle avoidance method for generating countering network collaborative prediction
CN116880462A (en) * 2023-03-17 2023-10-13 北京百度网讯科技有限公司 Automatic driving model, training method, automatic driving method and vehicle
CN117010265A (en) * 2023-04-14 2023-11-07 北京百度网讯科技有限公司 Automatic driving model capable of carrying out natural language interaction and training method thereof
CN117351712A (en) * 2023-10-11 2024-01-05 江苏大学 Zhou Che track prediction method and system based on Cro-IntntFormer and fusing vehicle driving intention
CN117516581A (en) * 2023-12-11 2024-02-06 江苏大学 End-to-end automatic driving track planning system, method and training method integrating BEVFomer and neighborhood attention transducer

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200156632A1 (en) * 2018-11-20 2020-05-21 Waymo Llc Agent prioritization for autonomous vehicles
CN111931905A (en) * 2020-07-13 2020-11-13 江苏大学 Graph convolution neural network model and vehicle track prediction method using same
KR102192348B1 (en) * 2020-02-24 2020-12-17 한국과학기술원 Electronic device for integrated trajectory prediction for unspecified number of surrounding vehicles and operating method thereof
CN112215337A (en) * 2020-09-30 2021-01-12 江苏大学 Vehicle trajectory prediction method based on environment attention neural network model
CN112249008A (en) * 2020-09-30 2021-01-22 南京航空航天大学 Unmanned automobile early warning method aiming at complex dynamic environment
US20210174668A1 (en) * 2019-12-10 2021-06-10 Samsung Electronics Co., Ltd. Systems and methods for trajectory prediction
CN113291321A (en) * 2021-06-16 2021-08-24 苏州智加科技有限公司 Vehicle track prediction method, device, equipment and storage medium
CN113362491A (en) * 2021-05-31 2021-09-07 湖南大学 Vehicle track prediction and driving behavior analysis method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200156632A1 (en) * 2018-11-20 2020-05-21 Waymo Llc Agent prioritization for autonomous vehicles
US20210174668A1 (en) * 2019-12-10 2021-06-10 Samsung Electronics Co., Ltd. Systems and methods for trajectory prediction
CN112937603A (en) * 2019-12-10 2021-06-11 三星电子株式会社 System and method for predicting position of target vehicle
KR102192348B1 (en) * 2020-02-24 2020-12-17 한국과학기술원 Electronic device for integrated trajectory prediction for unspecified number of surrounding vehicles and operating method thereof
CN111931905A (en) * 2020-07-13 2020-11-13 江苏大学 Graph convolution neural network model and vehicle track prediction method using same
CN112215337A (en) * 2020-09-30 2021-01-12 江苏大学 Vehicle trajectory prediction method based on environment attention neural network model
CN112249008A (en) * 2020-09-30 2021-01-22 南京航空航天大学 Unmanned automobile early warning method aiming at complex dynamic environment
CN113362491A (en) * 2021-05-31 2021-09-07 湖南大学 Vehicle track prediction and driving behavior analysis method
CN113291321A (en) * 2021-06-16 2021-08-24 苏州智加科技有限公司 Vehicle track prediction method, device, equipment and storage medium

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627644B (en) * 2022-03-02 2023-01-24 北京航空航天大学 Intersection trajectory prediction method based on graph convolution network and gated loop network
CN114627644A (en) * 2022-03-02 2022-06-14 北京航空航天大学 Intersection track prediction method based on graph convolution network and gated loop network
CN114757355A (en) * 2022-04-08 2022-07-15 中国科学技术大学 Track data set difference measurement method, system, equipment and storage medium
CN114757355B (en) * 2022-04-08 2024-04-02 中国科学技术大学 Track data set variability measurement method, system, equipment and storage medium
CN114898585A (en) * 2022-04-20 2022-08-12 清华大学 Intersection multi-view-angle-based vehicle track prediction planning method and system
CN114926823B (en) * 2022-05-07 2023-04-18 西南交通大学 WGCN-based vehicle driving behavior prediction method
CN114926823A (en) * 2022-05-07 2022-08-19 西南交通大学 WGCN-based vehicle driving behavior prediction method
CN115009275A (en) * 2022-08-08 2022-09-06 北京理工大学前沿技术研究院 Vehicle track prediction method and system in urban scene and storage medium
CN115009275B (en) * 2022-08-08 2022-12-16 北京理工大学前沿技术研究院 Vehicle track prediction method and system in urban scene and storage medium
CN115540893B (en) * 2022-11-30 2023-03-14 广汽埃安新能源汽车股份有限公司 Vehicle path planning method and device, electronic equipment and computer readable medium
CN115540893A (en) * 2022-11-30 2022-12-30 广汽埃安新能源汽车股份有限公司 Vehicle path planning method and device, electronic equipment and computer readable medium
CN115909749A (en) * 2023-01-09 2023-04-04 广州通达汽车电气股份有限公司 Vehicle operation road risk early warning method, device, equipment and storage medium
CN115937801A (en) * 2023-03-08 2023-04-07 斯润天朗(北京)科技有限公司 Vehicle track prediction method and device based on graph convolution
CN116880462A (en) * 2023-03-17 2023-10-13 北京百度网讯科技有限公司 Automatic driving model, training method, automatic driving method and vehicle
CN117010265A (en) * 2023-04-14 2023-11-07 北京百度网讯科技有限公司 Automatic driving model capable of carrying out natural language interaction and training method thereof
CN116203971A (en) * 2023-05-04 2023-06-02 安徽中科星驰自动驾驶技术有限公司 Unmanned-driving obstacle avoidance method based on generative adversarial network collaborative prediction
CN117351712A (en) * 2023-10-11 2024-01-05 江苏大学 Surrounding-vehicle trajectory prediction method and system based on Cro-IntntFormer fusing vehicle driving intention
CN117516581A (en) * 2023-12-11 2024-02-06 江苏大学 End-to-end automatic driving trajectory planning system, method and training method integrating BEVFormer and neighborhood attention Transformer

Also Published As

Publication number Publication date
CN113954864B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN113954864B (en) Intelligent automobile track prediction system and method integrating peripheral automobile interaction information
CN111931905B (en) Graph convolution neural network model and vehicle track prediction method using same
US11131993B2 (en) Methods and systems for trajectory forecasting with recurrent neural networks using inertial behavioral rollout
US11934962B2 (en) Object association for autonomous vehicles
Zyner et al. A recurrent neural network solution for predicting driver intention at unsignalized intersections
CN112215337B (en) Vehicle track prediction method based on environment attention neural network model
Li et al. Grip++: Enhanced graph-based interaction-aware trajectory prediction for autonomous driving
EP4152204A1 (en) Lane line detection method, and related apparatus
Fernando et al. Deep inverse reinforcement learning for behavior prediction in autonomous driving: Accurate forecasts of vehicle motion
GB2608567A (en) Operation of a vehicle using motion planning with machine learning
US20220153314A1 (en) Systems and methods for generating synthetic motion predictions
CN114399743B (en) Method for generating future track of obstacle
CN113705636B (en) Method and device for predicting track of automatic driving vehicle and electronic equipment
Sharma et al. Pedestrian intention prediction for autonomous vehicles: A comprehensive survey
CN115662166B (en) Automatic driving data processing method and automatic driving traffic system
CN113552883B (en) Ground unmanned vehicle autonomous driving method and system based on deep reinforcement learning
CN114882457A (en) Model training method, lane line detection method and equipment
CN113903173B (en) Vehicle track feature extraction method based on directed graph structure and LSTM
CN114283576A (en) Vehicle intention prediction method and related device
Bharilya et al. Machine learning for autonomous vehicle's trajectory prediction: A comprehensive survey, challenges, and future research directions
Hu et al. Learning dynamic graph for overtaking strategy in autonomous driving
CN114620059A (en) Automatic driving method and system thereof, and computer readable storage medium
CN115937801A (en) Vehicle track prediction method and device based on graph convolution
CN114516336B (en) Vehicle track prediction method considering road constraint conditions
Wang et al. Vehicle trajectory prediction based on attention mechanism and GAN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant