CN115937801A - Vehicle track prediction method and device based on graph convolution - Google Patents
- Publication number: CN115937801A
- Application number: CN202310212843.7A
- Authority: CN (China)
- Prior art keywords: vehicle, lane, data, map, graph
- Legal status: Pending
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Traffic Control Systems (AREA)
Abstract
The invention relates to a vehicle track prediction method and device based on graph convolution. The method comprises: acquiring vehicle track data and preprocessing it to obtain data to be processed; constructing a vehicle track data map and a lane data map from the data to be processed; and inputting the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result. The method combines vehicle track features with lane interaction information for track prediction: graph convolution is used to model the vehicle interactions and the lane interactions separately, the vehicle information and the lane information are then fused to create a suitable embedding, and finally the embedding is operated on to predict the track of the vehicle, which increases the accuracy of the track prediction result.
Description
Technical Field
The invention belongs to the technical field of automatic driving, and particularly relates to a vehicle track prediction method and device based on graph convolution.
Background
If an autonomous vehicle can predict the future trajectories of surrounding vehicles by sensing its environment, it can plan its route accordingly, avoid risks in advance, reduce the possibility of accidents, and provide a safer and more comfortable driving experience. Vehicle behavior has long been fraught with uncertainty, owing to the complexity of real-world traffic scenes and the differing driving styles of drivers; at the same time, vehicle trajectories are difficult to predict because of the complex interaction between a vehicle and its environment. In recent years, much pioneering work has been done on vehicle trajectory prediction. Trajectory prediction models are mainly established by three kinds of methods:
the first model is based on physical motion, and predicts the short-term trajectory of the vehicle by inputting relevant vehicle control parameters (such as steering and acceleration), vehicle own constant parameters (such as body mass), and current state quantities of the vehicle (such as position and speed) based on the physical model, but modeling based on the physical model often requires a large number of parameters, and the generalization of the model is not strong.
The second kind of model is based on machine learning. As machine learning has developed, many learning-based maneuver prediction methods have been proposed in the trajectory prediction field, including the multi-layer perceptron (MLP), logistic regression, the relevance vector machine (RVM) and the support vector machine (SVM). Methods that represent motion trajectories with hidden Markov models have also been proposed, and related approaches include Bayesian networks and Kalman filters. Because vehicle motion can be regarded as a time-series prediction problem, recurrent neural networks and their variants, which perform well on time-series problems such as speech recognition and machine translation, have also been used for trajectory prediction. The advantage of machine learning methods is that large data sets can be used to train the model, which strengthens its generalization ability and makes it more widely applicable.
The third kind of model is based on graph convolution. A graph neural network operates directly on a graph structure and performs well in graph-related analysis. Existing graph-neural-network trajectory prediction models mainly adopt spatio-temporal graph convolution, and spatio-temporal graph neural networks (STGNNs) play an important role in capturing graph dynamics. The advantage of graph-based learning is that the model can be trained with fewer parameters, which reduces its running and training time, and a topological-graph formulation can correctly model the interaction between vehicles, improving prediction accuracy.
However, although existing graph-convolution models can correctly model vehicle-to-vehicle interaction and compensate to some extent for the shortcomings of earlier algorithms, they cannot model the interaction between the vehicle and its environment, so the trajectory cannot be predicted correctly and the prediction accuracy is low.
Disclosure of Invention
In view of the above, the present invention provides a vehicle trajectory prediction method and apparatus based on graph convolution to solve the problem that the trajectory cannot be predicted correctly due to the fact that a model based on the graph convolution method in the prior art cannot simulate the interaction between a vehicle and an environment.
In order to realize the purpose, the invention adopts the following technical scheme: a vehicle track prediction method based on graph convolution comprises the following steps:
acquiring vehicle track data and preprocessing the vehicle track data to obtain data to be processed;
constructing a vehicle track data map and a lane data map according to the data to be processed;
inputting the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result;
the method for constructing the vehicle track data map according to the data to be processed comprises the following steps:
determining each vehicle as a node to obtain a node set;
acquiring the edges between vehicles that have interaction to obtain an edge set; the edge set comprises spatial edges and inter-frame edges, wherein a spatial edge is used for representing the interaction information between two vehicles at time t, and an inter-frame edge represents the connection between a vehicle's nodes at different times, describing the historical information of the vehicle frame by frame;
obtaining an undirected graph according to the node set and the edge set, wherein the undirected graph is used for representing interaction between vehicles, and determining a vehicle trajectory data graph according to the undirected graph;
constructing a lane data map according to the data to be processed, comprising:
defining a line segment consisting of two consecutive points on the center line of the lane as a lane node; the lane nodes comprise an upstream node, a downstream node, a left adjacent node and a right adjacent node;
acquiring a bird's-eye view of the map from the map data;
and obtaining the lane data map from the map bird's-eye view and the lane nodes.
Further, the vehicle trajectory prediction model includes:
the characteristic extraction module is used for extracting the characteristics of the vehicle track and the lane according to the vehicle track data diagram and the lane data diagram respectively;
the multi-feature fusion module is used for carrying out information fusion on the extracted vehicle track features and lane features to obtain fusion features;
and the vehicle track prediction module is used for predicting the future vehicle track according to the fusion characteristics.
Furthermore, the feature extraction module is formed by cascading two spatio-temporal graph convolution blocks;
the spatio-temporal graph convolution block comprises: a common convolutional layer, a spatial convolutional layer and a temporal convolutional layer; the common convolutional layer is used for increasing the number of channels and mapping the two-dimensional input data to a high-dimensional space, so that the vehicle track prediction model can learn and be trained on the track prediction task; the spatial convolutional layer is used for processing the interaction between vehicles in space; the temporal convolutional layer is used for capturing useful temporal features.
Further, the common convolutional layer is a 2D convolutional layer with a 1 × 1 convolution kernel;
the space convolution layer is composed of a fixed graph based on current input and a trainable graph with the same shape as the fixed graph;
the time convolution layer is arranged behind the space convolution layer;
in the space-time convolution module, a time convolution layer is added behind each space convolution layer, and the input data is processed alternately in space and time.
Further, the information fusion of the extracted vehicle track characteristic and the extracted lane characteristic includes:
the interactive fusion between the vehicle and the lane nodes comprises 4 types of information, handled respectively by a vehicle-to-lane fusion module, a lane-to-lane fusion module, a lane-to-vehicle fusion module and a vehicle-to-vehicle fusion module, and a stack consisting of the four fusion modules is constructed;
the vehicle-to-lane fusion module introduces real-time traffic information into lane nodes;
the lane-to-lane module updates lane node characteristics by propagating traffic information on a lane graph;
the lane-to-vehicle fusion module fuses the updated map features and the real-time traffic information into the vehicle;
the vehicle-to-vehicle fusion module processes interactions between vehicles and generates output vehicle characteristics.
Further, the predicting the future vehicle trajectory according to the fusion features includes:
the vehicle track prediction module takes the fusion characteristics as input, and predicts K possible future tracks and corresponding confidence coefficients for each vehicle;
the trajectory prediction module comprises two branches, wherein one branch is a regression branch for predicting the trajectory of each mode, and the other branch is a classification branch for predicting the confidence coefficient of each mode;
for the m-th vehicle, applying a residual block and a linear layer in the regression branch to obtain the bird's-eye-view (BEV) coordinate trajectories of the K modes;
and outputting the final track prediction.
Further, for the m-th vehicle, the BEV coordinate trajectories of the K modes are calculated by applying the residual block and the linear layer in the regression branch.
The embodiment of the application provides a vehicle track prediction device based on graph convolution, including:
the acquisition module is used for acquiring vehicle track data and preprocessing the vehicle track data to obtain data to be processed;
the construction module is used for constructing a vehicle track data graph and a lane data graph according to the data to be processed;
and the prediction module is used for inputting the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result.
By adopting the technical scheme, the invention can achieve the following beneficial effects:
the invention provides a vehicle track prediction method and a vehicle track prediction device based on graph convolution. Then, the embedding is operated to predict the trajectory of the vehicle, increasing the accuracy of the trajectory prediction result.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram illustrating the steps of a vehicle trajectory prediction method based on graph convolution according to the present invention;
FIG. 2 is a schematic flow chart of the method for constructing a lane data map according to the present invention;
FIG. 3 is a schematic diagram illustrating a prediction process of a vehicle trajectory prediction model provided by the present invention;
fig. 4 is a schematic structural diagram of a vehicle trajectory prediction device based on graph convolution according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It should be apparent that the described embodiments are only some embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
A specific vehicle trajectory prediction method and apparatus based on graph convolution according to the embodiments of the present application are described below with reference to the accompanying drawings.
As shown in fig. 1, a vehicle trajectory prediction method based on graph convolution provided in an embodiment of the present application includes:
s101, obtaining vehicle track data and preprocessing the vehicle track data to obtain data to be processed;
the vehicle track data is preprocessed by using a python language and a pandas library and a numpy library. According to the selected vehicle, selecting a data block meeting the following conditions: the sampling is performed with a frame ID length of 20 and a step size of 1.
S102, constructing a vehicle track data graph and a lane data graph according to the data to be processed;
in some embodiments, constructing a vehicle trajectory data map from the to-be-processed data comprises:
determining each vehicle as a node to obtain a node set;
acquiring the edges between vehicles that have interaction to obtain an edge set; the edge set comprises spatial edges and inter-frame edges, wherein a spatial edge is used for representing the interaction information between two vehicles at time t, and an inter-frame edge represents the connection between a vehicle's nodes at different times, describing the historical information of the vehicle frame by frame;
and obtaining an undirected graph according to the node set and the edge set, wherein the undirected graph is used for representing interaction between vehicles, and determining a vehicle trajectory data graph according to the undirected graph.
In particular, in real life the motion of a vehicle is affected by the surrounding environment and by other vehicles. Therefore, to express this interaction between vehicles, we represent it with an undirected graph G = {V, E}.
In this graph, V is the node set, which contains a number of nodes, each node representing a vehicle.
E is the edge set, i.e. the set of edges connecting vehicles that interact. In an automatic driving application scenario, when two vehicles move within a certain range of each other, they affect each other and interaction occurs. The edge set E consists of two parts:
space edge: interaction information between two vehicles at time t is described.
Inter-frame edge: the connection between a vehicle's nodes at different times, which describes the historical information of the vehicle frame by frame; together, these inter-frame edges represent the history of its motion.
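The undirected graph G = {V, E} described above can be sketched as a set of adjacency matrices, one spatial adjacency per frame plus inter-frame connections; the interaction radius used below is an assumed placeholder, since the description only states that vehicles interact when they move within a certain range:

```python
import numpy as np

def build_vehicle_graph(positions: np.ndarray, radius: float = 30.0):
    """positions: (T, N, 2) array of N vehicles over T frames (x, y per vehicle).

    Returns
      A_spatial:  (T, N, N) spatial edges   -- vehicles within `radius` of each other at frame t
      A_temporal: (N, N)    inter-frame edges -- each vehicle connected to itself across frames
    """
    T, N, _ = positions.shape
    A_spatial = np.zeros((T, N, N), dtype=np.float32)
    for t in range(T):
        d = np.linalg.norm(positions[t, :, None, :] - positions[t, None, :, :], axis=-1)
        A_spatial[t] = (d < radius).astype(np.float32)   # undirected: distance matrix is symmetric
        np.fill_diagonal(A_spatial[t], 0.0)              # no self-loop in the spatial edge set
    A_temporal = np.eye(N, dtype=np.float32)             # frame-by-frame history of each vehicle
    return A_spatial, A_temporal
```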
In some embodiments, constructing a lane data map from the to-be-processed data includes:
defining a line segment consisting of two continuous points on the central line of the lane as a lane node; the lane nodes comprise an upstream node, a downstream node, a left adjacent node and a right adjacent node;
acquiring a map aerial view through map data;
and obtaining a lane data map by the result of the map aerial view and the lane nodes.
Specifically, as shown in fig. 2, because the amount of high-precision map information is enormous and an undirected-graph representation of it would be too complicated, a simpler map representation is adopted: the map data are vectorized. First, a line segment formed by two consecutive points on a lane center line is defined as a lane node, and then four connection types between lane nodes are defined:
an upstream node: the node is an upstream node of the current node under the same lane.
A downstream node: the node is a downstream node of the current node under the same lane.
Left adjacent node: the left lane node that can be reached directly without violating traffic regulations.
Right adjacent node: the right-hand lane node that can be reached directly without violating traffic regulations.
Vehicles often plan routes based on connectivity of lane centerlines, and this simple map format provides basic geometric and semantic information for motion prediction.
Finally, a lane graph is derived from the map data as the input: a node matrix V represents the lane nodes, where N is the number of lane nodes and the i-th row of V is the BEV coordinate of the i-th node (BEV refers to the map bird's-eye view), and the connection types between lane nodes are represented by 4 adjacency matrices {A_k}, one for each of the upstream, downstream, left-adjacent and right-adjacent relations.
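A minimal sketch of this vectorization for a single centerline; taking a lane node's BEV coordinate as the midpoint of its segment is an assumption (the description only says each row of V is the node's BEV coordinate), and the left/right adjacencies are left to be filled from the map topology:

```python
import numpy as np

def vectorize_centerline(centerline: np.ndarray):
    """centerline: (P, 2) BEV points of one lane centerline."""
    V = 0.5 * (centerline[:-1] + centerline[1:])        # (N, 2) lane-node coordinates (midpoints, assumed)
    N = len(V)
    A = {k: np.zeros((N, N), dtype=np.float32)
         for k in ("upstream", "downstream", "left", "right")}
    idx = np.arange(N - 1)
    A["downstream"][idx, idx + 1] = 1.0                 # next node along the same lane
    A["upstream"][idx + 1, idx] = 1.0                   # previous node along the same lane
    # Left / right adjacencies connect nodes of different lanes that can be reached
    # directly without violating traffic rules; they are filled in from the HD map.
    return V, A
```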
S103, inputting the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result.
Wherein the vehicle trajectory prediction model comprises:
the characteristic extraction module is used for extracting the characteristics of the vehicle track and the lane according to the vehicle track data graph and the lane data graph respectively;
the multi-feature fusion module is used for carrying out information fusion on the extracted vehicle track features and lane features to obtain fusion features;
and the vehicle track prediction module is used for predicting the future vehicle track according to the fusion characteristics.
Specifically, as shown in fig. 3, the vehicle trajectory prediction model first performs feature extraction on the vehicle trajectory and the lane by using a feature extraction module, then transmits the extracted vehicle features and lane features to a multi-feature fusion module, fuses the obtained information, and transmits the fused information to a vehicle trajectory prediction module to predict the future trajectory.
In some embodiments, the feature extraction module is composed of two cascaded spatio-temporal graph convolution blocks;
the spatio-temporal graph convolution block comprises: a common convolutional layer, a spatial convolutional layer and a temporal convolutional layer; the common convolutional layer is used for increasing the number of channels and mapping the two-dimensional input data to a high-dimensional space, so that the vehicle track prediction model can learn and be trained on the track prediction task; the spatial convolutional layer is used for processing the interaction between vehicles in space; the temporal convolutional layer is used for capturing useful temporal features.
It is understood that the feature extraction module is divided into vehicle feature extraction and lane information extraction.
The vehicle feature extraction module is mainly formed by cascading two spatio-temporal graph convolution blocks.
Each space-time convolution module mainly comprises three parts:
a common convolutional layer: one with a 2D convolutional layer using a (1 x 1) convolutional kernel. The method has the main effect of increasing the number of channels, can map two-dimensional input data (x and y coordinates) into a high-dimensional space, and helps a model to better learn and train in a track prediction task. Its output has) Where n is the number of vehicle nodes, is>Is the time series and C is the new number of channels.
A space convolution layer: for dealing with the interaction between vehicles in the space. A spatial convolution layer consists of two graphs, which are (i) a fixed graph based on the current input and (ii) a trainable graph that is the same shape as the fixed graph.
Time convolution layer: temporal convolution is used to capture useful temporal features. In the space-time convolution module, a time convolution layer is added behind each space convolution layer, and the input data is processed alternately in space and time.
Given the preprocessed input representation with G = {V, E} as input, the data are passed through the two spatio-temporal graph convolution blocks, each using a skip connection to ensure that the model can propagate larger gradients back to the initial layers so that they learn as quickly as the final layers, and the vehicle features are finally output.
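A sketch of one spatio-temporal graph convolution block in PyTorch; the tensor layout (batch, channels, time, vehicles), the 3-frame temporal kernel and the placement of the activations are assumptions, while the 1×1 common convolution, the fixed-plus-trainable spatial graph, the temporal convolution after the spatial one and the skip connection follow the description:

```python
import torch
import torch.nn as nn

class STGraphConvBlock(nn.Module):
    """Common 1x1 conv -> spatial graph conv (fixed + trainable graph) -> temporal conv."""

    def __init__(self, c_in: int, c_out: int, num_nodes: int, t_kernel: int = 3):
        super().__init__()
        self.common = nn.Conv2d(c_in, c_out, kernel_size=1)          # lifts (x, y) to C channels
        self.trainable_graph = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.temporal = nn.Conv2d(c_out, c_out, kernel_size=(t_kernel, 1),
                                  padding=(t_kernel // 2, 0))        # convolve along time only
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor, A_fixed: torch.Tensor) -> torch.Tensor:
        # x: (B, C_in, T, N); A_fixed: (N, N) graph built from the current input
        residual = x
        h = self.act(self.common(x))
        A = A_fixed + self.trainable_graph                           # fixed graph + trainable graph
        h = torch.einsum("bctn,nm->bctm", h, A)                      # spatial graph convolution
        h = self.act(self.temporal(h))                               # temporal convolution afterwards
        if residual.shape == h.shape:                                # skip (jump) connection
            h = h + residual
        return h
```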
It should be noted that lane feature extraction is performed by a stack of 4 multi-scale residual blocks, each consisting of two parts: a LaneConv layer, and a linear layer with a residual connection. All layers have 128 feature channels.
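One such lane block could look as follows; treating LaneConv as feature propagation over the four lane connection types is an assumption (the multi-scale dilation scheme is not spelled out here), while the linear layer, the residual connection and the 128 channels follow the description:

```python
import torch
import torch.nn as nn

class LaneResidualBlock(nn.Module):
    """LaneConv over the four connection types + a linear layer, with a residual connection."""

    def __init__(self, c: int = 128):
        super().__init__()
        self.self_proj = nn.Linear(c, c)
        self.rel_proj = nn.ModuleDict({k: nn.Linear(c, c, bias=False)
                                       for k in ("upstream", "downstream", "left", "right")})
        self.linear = nn.Linear(c, c)
        self.norm = nn.LayerNorm(c)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor, A: dict) -> torch.Tensor:
        # x: (N, 128) lane-node features; A[k]: (N, N) adjacency for connection type k
        h = self.self_proj(x)
        for k, proj in self.rel_proj.items():
            h = h + A[k] @ proj(x)                 # propagate features from each neighbor type
        h = self.act(self.norm(self.linear(self.act(h))))
        return x + h                               # residual connection, 128 channels throughout
```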
In some embodiments, the information fusion of the extracted vehicle track feature and the lane feature includes:
the interactive fusion between the vehicle and the lane nodes comprises 4 types of information, handled respectively by a vehicle-to-lane fusion module, a lane-to-lane fusion module, a lane-to-vehicle fusion module and a vehicle-to-vehicle fusion module, and a stack consisting of the four fusion modules is constructed;
the vehicle-to-lane fusion module introduces real-time traffic information into lane nodes;
the lane-to-lane module updates lane node characteristics by propagating traffic information on a lane graph;
the lane-to-vehicle fusion module fuses the updated map features and the real-time traffic information into the vehicle;
the vehicle-to-vehicle fusion module processes interactions between vehicles and generates output vehicle characteristics.
Specifically, the interaction information between vehicles and lane nodes in this application has 4 main types: vehicle-to-lane (V2L), lane-to-lane (L2L), lane-to-vehicle (L2V) and vehicle-to-vehicle (V2V). To capture all of these information features, we build a stack of four fusion modules. Intuitively, V2L introduces real-time traffic information, such as lane congestion or lane usage, into the lane nodes; L2L updates the lane node features by propagating traffic information over the lane graph; L2V fuses the updated map features and real-time traffic information into the vehicles; and V2V processes the interactions between vehicles and generates the output vehicle features, which are then used by the prediction module for motion prediction.
Given a node i, the features of its neighboring (e.g. upstream and downstream) nodes j are aggregated as follows, taking V2L as an example:

$$y_i = \phi\Big(W_0\,x_i + \sum_{j \in \mathcal{N}(i)} W_1\,\big[x_j,\ \Delta_{i,j}\big]\Big), \qquad \Delta_{i,j} = v_j - v_i,$$

where $x_i$ is the feature of the i-th node, $W_0$ and $W_1$ are weight matrices, $\phi$ is the combination of layer normalization and ReLU, $[\cdot,\cdot]$ denotes concatenation, $\mathcal{N}(i)$ is the set of neighboring nodes, and $v$ is the node position.
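A sketch of this aggregation in the V2L direction; the neighborhood radius, the 128-channel width and the exact form of the relative-position term are assumptions layered on the definitions above (weight matrices, layer normalization plus ReLU, node positions):

```python
import torch
import torch.nn as nn

class VehicleToLaneFusion(nn.Module):
    """Aggregate nearby vehicle features into each lane node (V2L direction)."""

    def __init__(self, c: int = 128):
        super().__init__()
        self.w_self = nn.Linear(c, c, bias=False)        # W0: transform the node's own feature
        self.w_nbr = nn.Linear(c + 2, c, bias=False)      # W1: transform [neighbor feature, relative position]
        self.norm_relu = nn.Sequential(nn.LayerNorm(c), nn.ReLU())   # phi: layer norm + ReLU

    def forward(self, lane_feat, lane_pos, veh_feat, veh_pos, radius: float = 10.0):
        # lane_feat: (N, c), lane_pos: (N, 2), veh_feat: (M, c), veh_pos: (M, 2)
        base = self.w_self(lane_feat)
        agg = []
        for i in range(lane_feat.size(0)):
            delta = veh_pos - lane_pos[i]                  # relative position v_j - v_i
            nbr = delta.norm(dim=-1) < radius              # vehicles near this lane node
            if nbr.any():
                msg = self.w_nbr(torch.cat([veh_feat[nbr], delta[nbr]], dim=-1))
                agg.append(msg.sum(dim=0))
            else:
                agg.append(torch.zeros_like(base[i]))
        return self.norm_relu(base + torch.stack(agg))
```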
In some embodiments, the predicting of the future vehicle trajectory based on the fused features comprises:
the vehicle track prediction module takes the fusion characteristics as input, and predicts K possible future tracks and corresponding confidence coefficients for each vehicle;
the trajectory prediction module comprises two branches, wherein one branch is a regression branch for predicting the trajectory of each mode, and the other branch is a classification branch for predicting the confidence coefficient of each mode;
for the mth vehicle, applying a residual block and a linear layer in the regression branch to obtain a coordinate track of the K-type mode map aerial view;
and outputting the final track prediction.
Specifically, the multi-modal prediction head takes the fused features as input and outputs the final trajectory prediction. For each vehicle it predicts K possible future trajectories and their confidences. It contains two branches: a regression branch that predicts the trajectory of each mode, and a classification branch that predicts the confidence of each mode. For the m-th vehicle, we apply a residual block and a linear layer in the regression branch to obtain the BEV coordinate trajectories of the K modes:

$$O_{m,k} = \big(p_{m,k,1},\, p_{m,k,2},\, \ldots,\, p_{m,k,T}\big), \qquad k = 1, \ldots, K,$$

where $p_{m,k,i}$ is the predicted BEV coordinate of the k-th mode at the i-th future time step. In the classification branch, an MLP (multi-layer perceptron) is used to obtain a distance value for each of the K sequences; each distance value is concatenated with the vehicle feature, and a residual block and a linear layer are applied to obtain the confidence of the k-th mode.
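A compact sketch of such a two-branch head; the future horizon T, the number of modes K, the stand-in for the residual block and the way the distance value is derived from the predicted endpoints are all assumptions:

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Regression branch -> K candidate BEV trajectories; classification branch -> K confidences."""

    def __init__(self, c: int = 128, num_modes: int = 6, horizon: int = 30):
        super().__init__()
        self.K, self.T = num_modes, horizon
        self.reg = nn.Sequential(                          # stand-in for the residual block + linear layer
            nn.Linear(c, c), nn.ReLU(), nn.Linear(c, num_modes * horizon * 2))
        self.dist_mlp = nn.Sequential(nn.Linear(2, c), nn.ReLU())
        self.cls = nn.Sequential(nn.Linear(2 * c, c), nn.ReLU(), nn.Linear(c, 1))

    def forward(self, veh_feat: torch.Tensor):
        # veh_feat: (M, c) fused feature of M vehicles
        M = veh_feat.size(0)
        traj = self.reg(veh_feat).view(M, self.K, self.T, 2)     # K BEV trajectories per vehicle
        endpoint = traj[:, :, -1]                                 # (M, K, 2) endpoint of each mode
        dist_emb = self.dist_mlp(endpoint)                        # "distance value" embedding per mode (assumed)
        feat = veh_feat.unsqueeze(1).expand(-1, self.K, -1)       # pair each mode with the vehicle feature
        conf = self.cls(torch.cat([dist_emb, feat], dim=-1)).squeeze(-1)
        return traj, torch.softmax(conf, dim=-1)                  # trajectories + per-mode confidence
```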
The working principle of the vehicle track prediction method based on graph convolution is as follows: acquiring vehicle track data and preprocessing the vehicle track data to obtain data to be processed; constructing a vehicle track data map and a lane data map according to the data to be processed; and inputting the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result.
According to the method, the autonomous vehicle is taken as the object. Addressing the problems that current recursive frameworks have low parameter efficiency and are expensive to train, and that their aggregation layers do not directly model the interaction between vehicles, the method models the vehicle interaction information and the lane interaction information separately using graph convolution, and finally fuses the vehicle information and the lane information to create a suitable embedding. The embedding is then operated on to predict the trajectory of the vehicle, increasing the accuracy of the trajectory prediction results.
As shown in fig. 4, an embodiment of the present application provides a vehicle trajectory prediction apparatus based on graph convolution, including:
the acquiring module 201 is configured to acquire vehicle trajectory data and perform preprocessing to obtain data to be processed;
the construction module 202 is configured to construct a vehicle trajectory data map and a lane data map according to the data to be processed;
and the prediction module 203 is used for inputting the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result.
The working principle of the vehicle track prediction device based on graph convolution is that the acquisition module 201 acquires vehicle track data and performs preprocessing to obtain data to be processed; the construction module 202 constructs a vehicle track data map and a lane data map according to the data to be processed; the prediction module 203 inputs the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result.
In summary, the present invention provides a vehicle track prediction method and device based on graph convolution. The method comprises: acquiring vehicle track data and preprocessing it to obtain data to be processed; constructing a vehicle track data map and a lane data map from the data to be processed; and inputting the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result. The method combines vehicle track features with lane interaction information for track prediction: graph convolution is used to model the vehicle interactions and the lane interactions separately, the vehicle information and the lane information are then fused to create a suitable embedding, and finally the embedding is operated on to predict the track of the vehicle, which increases the accuracy of the track prediction result.
It is to be understood that the embodiments of the method provided above correspond to the embodiments of the apparatus described above, and the corresponding specific contents may be referred to each other, which is not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (8)
1. A vehicle track prediction method based on graph convolution is characterized by comprising the following steps:
acquiring vehicle track data and preprocessing the vehicle track data to obtain data to be processed;
constructing a vehicle track data map and a lane data map according to the data to be processed;
inputting the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result;
the method for constructing the vehicle track data map according to the data to be processed comprises the following steps:
determining each vehicle as a node to obtain a node set;
acquiring the edges between vehicles that have interaction to obtain an edge set; the edge set comprises spatial edges and inter-frame edges, wherein a spatial edge is used for representing the interaction information between two vehicles at time t, and an inter-frame edge represents the connection between a vehicle's nodes at different times, describing the historical information of the vehicle frame by frame;
obtaining an undirected graph according to the node set and the edge set, wherein the undirected graph is used for representing interaction among vehicles, and determining a vehicle track data graph according to the undirected graph;
constructing a lane data map according to the data to be processed, comprising:
defining a line segment formed by two consecutive points on the center line of the lane as a lane node; the lane nodes comprise an upstream node, a downstream node, a left adjacent node and a right adjacent node;
acquiring a bird's-eye view of the map from the map data;
and obtaining the lane data map from the map bird's-eye view and the lane nodes.
2. The method of claim 1, wherein the vehicle trajectory prediction model comprises:
the characteristic extraction module is used for extracting the characteristics of the vehicle track and the lane according to the vehicle track data graph and the lane data graph respectively;
the multi-feature fusion module is used for carrying out information fusion on the extracted vehicle track features and lane features to obtain fusion features;
and the vehicle track prediction module is used for predicting the future vehicle track according to the fusion characteristics.
3. The method of claim 2, wherein the feature extraction module is composed of two cascaded spatio-temporal graph convolution blocks;
the spatio-temporal graph convolution block comprises: a common convolutional layer, a spatial convolutional layer and a temporal convolutional layer; the common convolutional layer is used for increasing the number of channels and mapping the two-dimensional input data to a high-dimensional space, so that the vehicle track prediction model can learn and be trained on the track prediction task; the spatial convolutional layer is used for processing the interaction between vehicles in space; the temporal convolutional layer is used for capturing useful temporal features.
4. The method of claim 3,
the common convolutional layer is a 2D convolutional layer with a 1 × 1 convolution kernel;
the space convolution layer is composed of a fixed graph based on current input and a trainable graph with the same shape as the fixed graph;
the time convolution layer is arranged behind the space convolution layer;
in the space-time convolution module, a time convolution layer is added behind each space convolution layer, and the input data is processed alternately in space and time.
5. The method according to claim 2, wherein the information fusion of the extracted vehicle track features and lane features comprises:
the interactive fusion between the vehicle and the lane nodes comprises 4 types of information, handled respectively by a vehicle-to-lane fusion module, a lane-to-lane fusion module, a lane-to-vehicle fusion module and a vehicle-to-vehicle fusion module, and a stack consisting of the four fusion modules is constructed;
the vehicle-to-lane fusion module introduces real-time traffic information into lane nodes;
the lane-to-lane module updates lane node characteristics by propagating traffic information on a lane graph;
the lane-to-vehicle fusion module fuses the updated map features and the real-time traffic information into the vehicle;
the vehicle-to-vehicle fusion module processes interactions between vehicles and generates output vehicle characteristics.
6. The method of claim 2, wherein the predicting of the future vehicle trajectory from the fused features comprises:
the vehicle track prediction module takes the fusion characteristics as input, and predicts K possible future tracks and corresponding confidence coefficients for each vehicle;
the trajectory prediction module comprises two branches, wherein one branch is a regression branch for predicting the trajectory of each mode, and the other branch is a classification branch for predicting the confidence coefficient of each mode;
for the m-th vehicle, applying a residual block and a linear layer in the regression branch to obtain the bird's-eye-view (BEV) coordinate trajectories of the K modes;
and outputting the final track prediction.
8. A vehicle trajectory prediction device based on graph convolution, characterized by comprising:
the acquisition module is used for acquiring vehicle track data and preprocessing the vehicle track data to obtain data to be processed;
the construction module is used for constructing a vehicle track data graph and a lane data graph according to the data to be processed;
the prediction module is used for inputting the vehicle track data map and the lane data map into a pre-constructed vehicle track prediction model for feature fusion to obtain a track prediction result;
the method for constructing the vehicle track data map according to the data to be processed comprises the following steps:
determining each vehicle as a node to obtain a node set;
acquiring the edges between vehicles that have interaction to obtain an edge set; the edge set comprises spatial edges and inter-frame edges, wherein a spatial edge is used for representing the interaction information between two vehicles at time t, and an inter-frame edge represents the connection between a vehicle's nodes at different times, describing the historical information of the vehicle frame by frame;
obtaining an undirected graph according to the node set and the edge set, wherein the undirected graph is used for representing interaction between vehicles, and determining a vehicle trajectory data graph according to the undirected graph;
constructing a lane data map according to the data to be processed, comprising:
defining a line segment consisting of two consecutive points on the center line of the lane as a lane node; the lane nodes comprise an upstream node, a downstream node, a left adjacent node and a right adjacent node;
acquiring a bird's-eye view of the map from the map data;
and obtaining the lane data map from the map bird's-eye view and the lane nodes.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202310212843.7A | 2023-03-08 | 2023-03-08 | Vehicle track prediction method and device based on graph convolution
Publications (1)

Publication Number | Publication Date
---|---
CN115937801A (en) | 2023-04-07
Family
ID=86649369
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202310212843.7A (publication CN115937801A, Pending) | Vehicle track prediction method and device based on graph convolution | 2023-03-08 | 2023-03-08
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115937801A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113705636A (en) * | 2021-08-12 | 2021-11-26 | 重庆邮电大学 | Method and device for predicting trajectory of automatic driving vehicle and electronic equipment |
CN113954864A (en) * | 2021-09-22 | 2022-01-21 | 江苏大学 | Intelligent automobile track prediction system and method fusing peripheral vehicle interaction information |
CN114692762A (en) * | 2022-04-02 | 2022-07-01 | 重庆邮电大学 | Vehicle track prediction method based on graph attention interaction mechanism |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117688823A (en) * | 2024-02-04 | 2024-03-12 | 北京航空航天大学 | Rock-soil particle track prediction method, electronic equipment and medium |
CN117688823B (en) * | 2024-02-04 | 2024-05-14 | 北京航空航天大学 | Rock-soil particle track prediction method, electronic equipment and medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20230407