WO2023221348A1 - Vehicle trajectory prediction method and system, computer device and storage medium

Vehicle trajectory prediction method and system, computer device and storage medium

Info

Publication number
WO2023221348A1
WO2023221348A1 (application no. PCT/CN2022/119688)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
predicted
data
hidden state
trajectory
Prior art date
Application number
PCT/CN2022/119688
Other languages
English (en)
Chinese (zh)
Inventor
刘占文
李超
员惠莹
王洋
樊星
李文倩
杨楠
靳引利
赵彬岩
范颂华
范锦
程娟茹
薛志彪
肖方伟
Original Assignee
长安大学
Priority date
Filing date
Publication date
Application filed by 长安大学 filed Critical 长安大学
Publication of WO2023221348A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047Optimisation of routes or paths, e.g. travelling salesman problem
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40Business processes related to the transportation industry

Definitions

  • the invention belongs to the field of automatic driving and relates to a vehicle trajectory prediction method, system, computer equipment and storage medium.
  • trajectory prediction methods based on driving strategy classification have become a research direction.
  • This type of method first predicts the future driving strategy of the vehicle, such as going straight, changing lanes to the left or changing lanes to the right, and then performs microscopic trajectory prediction based on the predicted driving strategy.
  • Although this type of method solves some lane-change trajectory prediction problems, it also makes the trajectory prediction accuracy completely dependent on the accuracy of the driving-strategy prediction.
  • If the model predicts the driving strategy incorrectly, the trajectory prediction accuracy drops significantly.
  • In addition, the long-term prediction ability of LSTM-based models is poor, and their prediction error increases sharply as the prediction horizon grows.
  • As for the Transformer model, although it has strong long-term prediction capability, its parameter count and computational cost are huge, making the model overly complex.
  • the purpose of the present invention is to overcome the shortcomings of low vehicle trajectory prediction accuracy in the above-mentioned prior art and provide a vehicle trajectory prediction method, system, computer equipment and storage medium.
  • a vehicle trajectory prediction method includes:
  • through the preset multi-dimensional dynamic scene feature extraction function, the historical movement trajectory data of the vehicle to be predicted and of the adjacent vehicles of the vehicle to be predicted is extracted, and the multi-dimensional dynamic scene feature vector of the vehicle to be predicted is obtained;
  • the traffic perception information and historical motion state data of the vehicle to be predicted are encoded to obtain the hidden state information of the vehicle to be predicted;
  • the hybrid attention matrix of the vehicle to be predicted is obtained, weights are assigned to the hidden state information of the vehicle to be predicted through the hybrid attention matrix, and maximum pooling processing and fully connected processing are then performed in sequence to obtain the trajectory prediction value of the vehicle to be predicted.
  • the adjacent vehicles of the vehicle to be predicted include vehicles in eight directions: in front of, behind, to the left of, to the right of, left-front of, left-rear of, right-front of and right-rear of the vehicle to be predicted.
  • extracting the historical motion trajectory data of the vehicle to be predicted and each adjacent vehicle of the vehicle to be predicted through a preset multi-dimensional dynamic scene feature extraction function, and obtaining the multi-dimensional dynamic scene feature vector of the vehicle to be predicted includes:
  • through the preset multi-dimensional dynamic scene feature extraction function, the position data, speed data and acceleration data of the vehicle to be predicted and of its adjacent vehicles, in the driving direction and in the direction perpendicular to the driving direction, are extracted from the historical movement trajectory data, and the multi-dimensional dynamic scene feature vector of the vehicle to be predicted is obtained.
  • the information extraction neural network includes a first convolution layer, a second convolution layer, a maximum pooling layer and a fully connected layer connected in sequence;
  • the first convolution layer is used to fuse the position data, speed data and acceleration data of the vehicle to be predicted and of its adjacent vehicles in the driving direction and in the direction perpendicular to the driving direction, to obtain the first fusion features of the vehicle to be predicted and of its adjacent vehicles;
  • the second convolutional layer is used to fuse the first fusion features of the vehicle to be predicted and any adjacent vehicles among the adjacent vehicles of the vehicle to be predicted, to obtain the second fusion feature;
  • the maximum pooling layer is used to perform maximum pooling on the second fusion feature;
  • the fully connected layer is used to fully connect the second fusion feature after the maximum pooling process to obtain the traffic perception information of the vehicle to be predicted.
  • the preset temporal feature encoder is a long short-term memory network encoder.
  • obtaining the hybrid attention matrix of the vehicle to be predicted based on the hidden state information of the vehicle to be predicted includes:
  • the hybrid attention matrix ⁇ of the vehicle to be predicted is obtained by the following formula:
  • softmax is the normalized exponential function
  • ⁇ t is the time weight vector
  • ⁇ f is the feature weight vector
  • g_t(H, h_{t+1}) = H h_{t+1}^T
  • g_f(H, h_{t+1}) = h_{t+1}(W_f H)
  • g f is the feature weight cosine correlation function
  • g t is the time weight cosine correlation function
  • W f is the feature matrix of the vehicle to be predicted in the historical T h frame
  • H is the hidden state information of the vehicle to be predicted in the historical T h frame
  • h t is the hidden state data of the vehicle to be predicted at time t
  • h t+1 is the hidden state data of the vehicle to be predicted at time t+1.
  • assigning weights to the hidden state information of the vehicle to be predicted through the hybrid attention matrix of the vehicle to be predicted, and then sequentially performing maximum pooling processing and full connection processing to obtain the trajectory prediction value of the vehicle to be predicted includes:
  • the trajectory prediction value y t+1 of the vehicle to be predicted is obtained through the following formula:
  • ⊙ denotes element-wise multiplication of matrices
  • o_t corresponds to the moments most beneficial to improving the prediction accuracy
  • O_{i,j} is the hidden state information of the vehicle to be predicted after the weights have been assigned
  • h′_{t+1} is the hidden state information of the vehicle to be predicted at time t+1, obtained by fully connecting o_t, o_f and h_{t+1}, where concat denotes the concatenation operation.
  • W 1 and W 2 are preset weights.
  • a vehicle trajectory prediction system includes:
  • the data acquisition module is used to obtain the historical movement trajectory data of the vehicle to be predicted and the adjacent vehicles of the vehicle to be predicted;
  • the data preprocessing module is used to extract the historical motion trajectory data of the vehicle to be predicted and the adjacent vehicles of the vehicle to be predicted through the preset multi-dimensional dynamic scene feature extraction function, and obtain the multi-dimensional dynamic scene feature vector of the vehicle to be predicted;
  • the information extraction module is used to extract the multi-dimensional dynamic scene feature vector of the vehicle to be predicted through a preset information extraction neural network to obtain the traffic perception information of the vehicle to be predicted;
  • the encoding module is used to encode the traffic perception information and historical motion state data of the vehicle to be predicted through a preset time feature encoder to obtain the hidden state information of the vehicle to be predicted;
  • the prediction module is used to obtain the hybrid attention matrix of the vehicle to be predicted based on the hidden state information of the vehicle to be predicted, assign weights to the hidden state information through the hybrid attention matrix, and then sequentially perform maximum pooling processing and fully connected processing to obtain the trajectory prediction value of the vehicle to be predicted.
  • a third aspect of the present invention is a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • when the processor executes the computer program, the steps of the vehicle trajectory prediction method are implemented.
  • a fourth aspect of the present invention is a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • when the computer program is executed by a processor, the steps of the vehicle trajectory prediction method are implemented.
  • the present invention has the following beneficial effects:
  • the vehicle trajectory prediction method of the present invention uses the historical movement trajectory data of the vehicle to be predicted and each adjacent vehicle of the vehicle to be predicted, and performs feature extraction through a preset multi-dimensional dynamic scene feature extraction function to obtain the multi-dimensional dynamic scene feature vector of the vehicle to be predicted.
  • Based on the multi-dimensional dynamic scene feature vector, the traffic perception information of the vehicle to be predicted is obtained through the information extraction neural network, thereby representing both the dynamic dependencies between the vehicle to be predicted and its adjacent vehicles and the driver's subjective driving intention.
  • The temporal feature encoder then encodes the traffic perception information and historical motion state data of the vehicle to be predicted to obtain its hidden state information, from which the hybrid attention matrix of the vehicle to be predicted is obtained. Finally, weights are assigned to the hidden state information through the hybrid attention matrix, so that historical trajectory information is selectively reused by the hybrid attention mechanism to improve the long-term prediction ability of the model; maximum pooling processing and fully connected processing are then performed in sequence to obtain the trajectory prediction value of the vehicle to be predicted, making the vehicle trajectory prediction more reasonable and accurate.
  • Figure 1 is a flow chart of a vehicle trajectory prediction method according to an embodiment of the present invention
  • Figure 2 is a schematic diagram of the traffic scene and static coordinate system according to the embodiment of the present invention.
  • Figure 3 is a schematic diagram of the construction of a multi-dimensional dynamic scene feature map according to an embodiment of the present invention.
  • Figure 4 is a schematic structural diagram of an information extraction neural network according to an embodiment of the present invention.
  • Figure 5 is a schematic diagram of the principle of the hybrid attention mechanism according to an embodiment of the present invention.
  • Figure 6 is a schematic diagram of the prediction principle based on the hybrid attention mechanism according to the embodiment of the present invention.
  • Figure 7 is a schematic diagram of the principle of the vehicle trajectory prediction method according to the embodiment of the present invention.
  • Figure 8 is a data preprocessing flow chart according to the embodiment of the present invention.
  • Figure 9 is a schematic diagram of data distribution before normalization according to the embodiment of the present invention.
  • Figure 10 is a schematic diagram of normalized data distribution according to the embodiment of the present invention.
  • Figure 11 is a schematic diagram of the relationship between the initial learning rate and the number of iterations according to the embodiment of the present invention.
  • Figure 12 is a trajectory prediction diagram of the vehicle trajectory prediction method of the present invention according to the embodiment of the present invention.
  • Figure 13 is a trajectory prediction diagram of the existing transformer model according to the embodiment of the present invention.
  • Figure 14 is a schematic diagram of the relationship between the RMSE of a sample trajectory and the number of predicted frames according to an embodiment of the present invention
  • Figure 15 is a schematic diagram of the relationship between the MAEx of a sample trajectory and the number of predicted frames according to the embodiment of the present invention.
  • Figure 16 is a schematic diagram of the relationship between MAEy and the number of predicted frames for a sample trajectory according to an embodiment of the present invention
  • Figure 17 is a schematic diagram of the relationship between the RMSE of another sample trajectory and the number of predicted frames according to an embodiment of the present invention.
  • Figure 18 is a schematic diagram of the relationship between MAEx and the number of predicted frames of another sample trajectory according to an embodiment of the present invention.
  • Figure 19 is a schematic diagram of the relationship between MAEy and the number of predicted frames of another sample trajectory according to an embodiment of the present invention.
  • one embodiment of the present invention provides a vehicle trajectory prediction method, which solves the problem of low prediction accuracy and poor long-term prediction ability due to existing methods that do not consider the driver's subjective driving intention.
  • the vehicle trajectory prediction method includes the following steps:
  • S1 Obtain the historical movement trajectory data of the vehicle to be predicted and of the adjacent vehicles of the vehicle to be predicted.
  • S2 Through the preset multi-dimensional dynamic scene feature extraction function, extract the historical movement trajectory data of the vehicle to be predicted and the adjacent vehicles of the vehicle to be predicted, and obtain the multi-dimensional dynamic scene feature vector of the vehicle to be predicted.
  • S3 Extract the multi-dimensional dynamic scene feature vector of the vehicle to be predicted through the preset information extraction neural network, and obtain the traffic perception information of the vehicle to be predicted.
  • S4 Use the preset temporal feature encoder to encode the traffic perception information and historical motion state data of the vehicle to be predicted, and obtain the hidden state information of the vehicle to be predicted.
  • S5 According to the hidden state information of the vehicle to be predicted, obtain the hybrid attention matrix of the vehicle to be predicted, assign weights to the hidden state information of the vehicle to be predicted through the hybrid attention matrix, and then perform maximum pooling processing and fully connected processing in sequence to obtain the trajectory prediction value of the vehicle to be predicted.
  • In summary, the vehicle trajectory prediction method of the present invention takes the historical movement trajectory data of the vehicle to be predicted and of its adjacent vehicles, performs feature extraction through the preset multi-dimensional dynamic scene feature extraction function to obtain the multi-dimensional dynamic scene feature vector of the vehicle to be predicted, and, based on this feature vector, obtains the traffic perception information of the vehicle to be predicted through the information extraction neural network, thereby representing the dynamic dependencies between the vehicle to be predicted and its adjacent vehicles.
  • The temporal feature encoder then encodes the traffic perception information and historical motion state data of the vehicle to be predicted to obtain its hidden state information, from which the hybrid attention matrix of the vehicle to be predicted is obtained; finally, weights are assigned to the hidden state information through the hybrid attention matrix, so that historical trajectory information is selectively reused by the hybrid attention mechanism to improve the long-term prediction ability of the model, and maximum pooling processing and fully connected processing are then performed in sequence to obtain the trajectory prediction value of the vehicle to be predicted, making the vehicle trajectory prediction more reasonable and accurate. A minimal sketch of this end-to-end flow is given below.
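  • For concreteness, the following is a minimal, non-authoritative sketch of the described pipeline in Python/PyTorch; the module names (scene_extractor, perception_cnn, lstm_encoder, attention_decoder) and tensor shapes are assumptions for illustration, not the patented implementation itself.

```python
import torch

def predict_trajectory(history, scene_extractor, perception_cnn, lstm_encoder, attention_decoder):
    """Sketch of steps S2-S5. `history` holds the past T_h frames of the target
    vehicle and its eight neighbours (positions, speeds, accelerations in x and y)."""
    # S2: build one multi-dimensional dynamic scene feature map per historical frame
    F = scene_extractor(history)                 # assumed shape (T_h, 6, 3, 3)
    # S3: extract traffic perception information with the information extraction network
    p = perception_cnn(F)                        # assumed shape (T_h, d_p)
    # S4: encode traffic perception + historical motion state with the temporal encoder
    motion = history["ego_motion"]               # assumed shape (T_h, 6): x, y, vx, vy, ax, ay
    z = torch.cat([p, motion], dim=-1)           # concatenated encoder input z_t
    H, _ = lstm_encoder(z.unsqueeze(0))          # hidden states over the T_h frames
    # S5: hybrid attention weighting, max pooling and fully connected decoding
    return attention_decoder(H.squeeze(0))       # predicted trajectory value(s)
```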
  • FIG. 2 shows a feasible traffic scene and static coordinate system during the specific implementation of the vehicle trajectory prediction method of the present invention.
  • the traffic scene is a two-way eight-lane road, and a fixed reference frame is used to determine the position of each vehicle.
  • the origin of both the x-axis and the y-axis is located at the upper left corner of the road.
  • the x-axis is parallel to the direction of vehicle movement on the highway, and the y-axis is perpendicular to the direction of travel.
  • Along the y-axis direction there are lanes 1-8; vehicles in lanes 1-4 travel in the positive x direction, and vehicles in lanes 5-8 travel in the negative x direction.
  • This traffic scene is taken as an example for schematic description, but the invention is not limited to it.
  • the adjacent vehicles of the vehicle to be predicted include vehicles in eight directions: in front of, behind, to the left of, to the right of, left-front of, left-rear of, right-front of and right-rear of the vehicle to be predicted.
  • extracting the historical motion trajectory data of the vehicle to be predicted and of each adjacent vehicle through the preset multi-dimensional dynamic scene feature extraction function to obtain the multi-dimensional dynamic scene feature vector includes: using the preset multi-dimensional dynamic scene feature extraction function to extract, from the historical movement trajectory data, the position data, speed data and acceleration data of the vehicle to be predicted and of its adjacent vehicles in the driving direction and in the direction perpendicular to the driving direction, so as to obtain the multi-dimensional dynamic scene feature vector of the vehicle to be predicted.
  • the multi-dimensional dynamic scene feature vector of the vehicle to be predicted is extracted through the preset information extraction neural network.
  • the specific extraction process can be completed by constructing a multi-dimensional dynamic scene feature map.
  • the multi-dimensional dynamic scene feature map is constructed with the vehicle to be predicted as the center and the vehicles in eight directions around it, namely the front, rear, left, right, left front, left rear, right front and right rear of the predicted vehicle.
  • the 3*3 rectangular area is divided into 9 units, and adjacent vehicles are mapped to corresponding units according to their relative positions.
  • the position layer, velocity layer and acceleration layer in the X and Y directions are constructed respectively, forming a total of 6 rectangles with a dimension of 3*3, and finally superimposed to form a multi-dimensional dynamic scene feature map with a dimension of 6*3*3 .
  • The feature map construction described above defines a feature extraction function; the multi-dimensional dynamic scene feature F_t at time t is obtained by applying this function to x_t, where x_t represents the motion trajectory data of the vehicle to be predicted and of its adjacent vehicles at time t.
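  • As an illustration only, the following minimal Python/numpy sketch shows how such a 6*3*3 feature map could be assembled for a single frame; the grid-cell size and the dictionary field names are assumptions, not details taken from the patent.

```python
import numpy as np

def build_scene_feature_map(ego, neighbors, cell_size=(20.0, 4.0)):
    """Assemble a 6x3x3 multi-dimensional dynamic scene feature map for one frame.

    ego, neighbors: dicts with keys 'x', 'y', 'vx', 'vy', 'ax', 'ay' (assumed names).
    cell_size: assumed longitudinal/lateral extent of one grid cell in metres.
    """
    channels = ("x", "y", "vx", "vy", "ax", "ay")   # position, velocity, acceleration in X and Y
    fmap = np.zeros((6, 3, 3), dtype=np.float32)

    # The vehicle to be predicted occupies the centre cell (1, 1).
    for c, key in enumerate(channels):
        fmap[c, 1, 1] = ego[key]

    # Map each adjacent vehicle to one of the 8 surrounding cells by its relative position.
    for nb in neighbors:
        col = int(np.clip(round((nb["x"] - ego["x"]) / cell_size[0]), -1, 1)) + 1
        row = int(np.clip(round((nb["y"] - ego["y"]) / cell_size[1]), -1, 1)) + 1
        if (row, col) == (1, 1):
            continue  # ignore vehicles that would fall into the ego cell
        for c, key in enumerate(channels):
            fmap[c, row, col] = nb[key]
    return fmap
```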
  • the information extraction neural network includes a first convolution layer, a second convolution layer, a maximum pooling layer and a fully connected layer connected in sequence.
  • the first convolution layer is used to fuse the position data, speed data and acceleration data of the vehicle to be predicted and of its adjacent vehicles in the driving direction and in the direction perpendicular to the driving direction, to obtain the first fusion features of the vehicle to be predicted and of its adjacent vehicles.
  • the second convolutional layer is used to fuse the first fusion feature of the vehicle to be predicted and any adjacent vehicle among the adjacent vehicles of the vehicle to be predicted to obtain the second fusion feature;
  • the maximum pooling layer is used to perform maximum pooling on the second fusion feature;
  • the fully connected layer is used to fully connect the second fused feature after maximum pooling to obtain the traffic perception information of the vehicle to be predicted.
  • Specifically, the first convolutional layer of size 6*1*1 performs feature fusion on the 6-dimensional features of each vehicle, which captures each driver's intention. Each vehicle then interacts with its surrounding vehicles through a second convolutional layer of size 2*2, which extracts the interaction between vehicles. Finally, the traffic-force constraint information p, which reflects the driver's driving intention and the interaction between vehicles, is obtained through the maximum pooling layer Maxpool and the fully connected layer FC.
  • The above process is defined as a convolution operation whose mapping function is traf; the traffic perception information p_t at time t is extracted from the multi-dimensional dynamic scene feature vector F_t through the traf function, i.e. p_t = traf(F_t). A network sketch with this structure is given below.
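  • The following PyTorch sketch illustrates an information extraction network with this structure (a 1*1 convolution over the 6 feature channels, a 2*2 convolution, max pooling and a fully connected layer); the channel counts and output dimension are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class TrafficPerceptionCNN(nn.Module):
    """Sketch of the traf(.) mapping: 6x3x3 scene feature map -> traffic perception vector p_t."""

    def __init__(self, hidden_channels=32, out_dim=64):      # sizes are assumptions
        super().__init__()
        # First convolution: 1x1 kernel over the 6 input channels, fusing the
        # 6-dimensional features of each grid cell (first fusion feature).
        self.conv1 = nn.Conv2d(6, hidden_channels, kernel_size=1)
        # Second convolution: 2x2 kernel, fusing each cell with its neighbours
        # (second fusion feature, capturing vehicle-vehicle interaction).
        self.conv2 = nn.Conv2d(hidden_channels, hidden_channels, kernel_size=2)
        self.pool = nn.AdaptiveMaxPool2d(1)                   # maximum pooling layer
        self.fc = nn.Linear(hidden_channels, out_dim)         # fully connected layer
        self.act = nn.ReLU()

    def forward(self, F_t):                                   # F_t: (batch, 6, 3, 3)
        h = self.act(self.conv1(F_t))                         # (batch, C, 3, 3)
        h = self.act(self.conv2(h))                           # (batch, C, 2, 2)
        h = self.pool(h).flatten(1)                           # (batch, C)
        return self.fc(h)                                     # traffic perception information p_t

# Usage: p_t = TrafficPerceptionCNN()(torch.zeros(1, 6, 3, 3)) gives a (1, 64) vector.
```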
  • the preset temporal feature encoder is a long short-term memory network (LSTM) encoder.
  • The forget gate f_t in the LSTM encoder determines which information is discarded, the input gate i_t controls which new information is updated and stored, and the output gate o_t controls the output of the candidate layer; c̃_t is the candidate cell state, and the cell state c_t of the LSTM encoder is obtained by combining the cell state c_{t-1} at the previous moment with the current candidate state c̃_t.
  • The output gate o_t of the LSTM encoder finally determines which parts of the cell state are output.
  • σ(·) is the sigmoid function
  • ⊙ denotes element-wise multiplication of matrices
  • x_t and h_t are the input vector and the hidden state at time t, respectively.
  • f, i and o are the forgetting gate, input gate and output gate respectively
  • W and b are model parameters
  • W fz is the forgetting gate input weight matrix
  • W fh is the forgetting gate hidden layer weight matrix
  • W iz is the input gate input weight matrix
  • W ih is the input gate hidden layer weight matrix
  • W oz is the output gate input weight matrix
  • W oh is the output gate hidden layer weight matrix
  • W cz is the cell state input weight matrix
  • W ch is the cell state hidden layer weight matrix
  • b f is the forget gate bias
  • b i is the input gate bias
  • b o is the output gate bias
  • b c is the cell state bias.
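  • Written out with the parameters listed above, a conventional LSTM formulation of the encoder equations (given here for reference, using the standard gate definitions) is:

```latex
\begin{aligned}
f_t &= \sigma(W_{fz} x_t + W_{fh} h_{t-1} + b_f) \\
i_t &= \sigma(W_{iz} x_t + W_{ih} h_{t-1} + b_i) \\
o_t &= \sigma(W_{oz} x_t + W_{oh} h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_{cz} x_t + W_{ch} h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```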
  • obtaining the hybrid attention matrix of the vehicle to be predicted based on the hidden state information of the vehicle to be predicted includes: obtaining the hybrid attention matrix ⁇ of the vehicle to be predicted by the following formula:
  • softmax is the normalized exponential function
  • ⁇ t is the time weight vector
  • ⁇ f is the feature weight vector
  • g_t(H, h_{t+1}) = H h_{t+1}^T
  • g_f(H, h_{t+1}) = h_{t+1}(W_f H)
  • g f is the feature weight cosine correlation function
  • g t is the time weight cosine correlation function
  • W f is the feature matrix of the vehicle to be predicted in the historical T h frame
  • H is the hidden state information of the vehicle to be predicted in the historical T h frame
  • h t is the hidden state data of the vehicle to be predicted at time t
  • h t+1 is the hidden state data of the vehicle to be predicted at time t+1.
  • the vehicle trajectory prediction method of the present invention fuses temporal attention and feature attention.
  • the hybrid attention mechanism can comprehensively consider the moments and the features that have a large impact on the output accuracy, and it independently assigns an attention weight to each moment and to each feature.
  • H is the hidden state information of the vehicle to be predicted in the historical Th frame, that is, the hidden state of the LSTM encoder.
  • If the hidden state of the LSTM encoder is set to n dimensions, then H ∈ R^(T_h×n); assuming that the hidden state at time t+1 is h_{t+1} ∈ R^(1×n), the time-weight cosine correlation function g_t and the feature-weight cosine correlation function g_f are used to compute the correlation between H and h_{t+1} in the time dimension and in the feature dimension, respectively.
  • Assigning weights to the hidden state information of the vehicle to be predicted through the hybrid attention matrix, and then obtaining the trajectory prediction value of the vehicle to be predicted through maximum pooling processing and fully connected processing in sequence, includes:
  • the trajectory prediction value y t+1 of the vehicle to be predicted is obtained through the following formula:
  • ⊙ denotes element-wise multiplication of matrices
  • o_t corresponds to the moments most beneficial to improving the prediction accuracy
  • O_{i,j} is the hidden state information of the vehicle to be predicted after the weights have been assigned
  • h′_{t+1} is the hidden state information of the vehicle to be predicted at time t+1, obtained by fully connecting o_t, o_f and h_{t+1}, where concat denotes the concatenation operation.
  • W 1 and W 2 are the preset weights of the two fully connected layers respectively.
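  • Under the definitions above, one plausible realization of the hybrid-attention weighting and decoding step is sketched below in PyTorch. How the time and feature weights are combined into β, the pooling axes used to obtain o_t and o_f, and all layer sizes are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class HybridAttentionDecoder(nn.Module):
    """Sketch: weight the encoder hidden states H with a hybrid (time x feature)
    attention matrix, then apply max pooling and two fully connected layers."""

    def __init__(self, n=128, t_hist=50, out_dim=2):          # n, t_hist, out_dim are assumptions
        super().__init__()
        self.W_f = nn.Parameter(torch.randn(n, n) * 0.01)     # feature matrix W_f
        self.fc1 = nn.Linear(2 * n + t_hist, n)               # weight W_1: fuses o_t, o_f and h_{t+1}
        self.fc2 = nn.Linear(n, out_dim)                      # weight W_2: outputs the predicted (x, y)

    def forward(self, H, h_next):
        # H: (T_h, n) hidden states over the historical frames; h_next: (1, n) hidden state at t+1
        alpha_t = torch.softmax(H @ h_next.T, dim=0)          # time weights from g_t(H, h_{t+1}) = H h_{t+1}^T
        g_f = h_next * (H @ self.W_f)                         # feature correlation, a simplified form of g_f
        alpha_f = torch.softmax(g_f, dim=1)                   # feature weights
        beta = alpha_t * alpha_f                              # hybrid attention matrix beta, (T_h, n)
        O = beta * H                                          # weighted hidden states O_{i,j}
        o_t = O.max(dim=0).values                             # pooled over time: most useful moments, (n,)
        o_f = O.max(dim=1).values                             # pooled over features, (T_h,)
        fused = torch.cat([o_t, o_f, h_next.squeeze(0)])      # concat(o_t, o_f, h_{t+1})
        h_prime = torch.relu(self.fc1(fused))                 # h'_{t+1}
        return self.fc2(h_prime)                              # trajectory prediction y_{t+1}
```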
  • the implementation principle of the vehicle trajectory prediction method of the present invention integrates the traffic force-aware LSTM encoder and the LSTM decoder based on the hybrid attention mechanism.
  • x pre is the historical trajectory data of the vehicle to be predicted
  • z_t and h_t respectively represent the vector concatenation result and the hidden state of the LSTM at time t.
  • the LSTM decoder gives the prediction result y t+1 at time t+1 .
  • the vehicle trajectory prediction method of the present invention includes the preset multi-dimensional dynamic scene feature extraction function, the preset information extraction neural network, the preset temporal feature encoder, and the maximum pooling and full connection processing.
  • the network layers are pre-trained to determine their specific parameters. Among them, during the pre-training process, the trajectory data of the vehicle in the past 50 frames (the vehicle's position data, speed data and acceleration data in the x and y directions) are used to predict the trajectory of the next 50 frames, so a total of 100 frames of complete data are needed. Through data cleaning, vehicle data that appears for less than 100 frames will be deleted.
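  • A minimal sketch of this cleaning step, assuming pandas and highD-style track columns named 'id' and 'frame', could look like the following.

```python
import pandas as pd

def clean_tracks(tracks: pd.DataFrame, min_frames: int = 100) -> pd.DataFrame:
    """Keep only vehicles observed for at least `min_frames` frames.

    50 history frames + 50 prediction frames = 100 complete frames per sample;
    the column names 'id' and 'frame' are assumptions based on highD-style track files.
    """
    frame_counts = tracks.groupby("id")["frame"].count()
    keep_ids = frame_counts[frame_counts >= min_frames].index
    return tracks[tracks["id"].isin(keep_ids)].copy()
```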
  • The German highway public dataset highD was used, with 110,660 trajectories selected for model training and 50,787 trajectories used for model testing.
  • Figures 9 and 10 show the data distribution before and after normalization in the data preprocessing.
  • Figure 11 shows the different initial learning rates used in different training stages during the model training process.
  • Figures 12 and 13 show the prediction results of 50 consecutive frames between the vehicle trajectory prediction method of the present invention and the existing Transformer model.
  • Figures 14 to 19 compare various evaluation indicators between the vehicle trajectory prediction method of the present invention and the existing Transformer model.
  • the specific instructions are as follows:
  • FIG. 12 shows a comparison chart of trajectory prediction between the vehicle trajectory prediction method of the present invention and the existing Transformer model.
  • The prediction results of this method are more accurate when the vehicle maneuvers laterally, whereas the Transformer model tends to produce a lateral offset during left and right maneuvers; as can be seen from Figure 13, as the prediction length increases, the predicted points of the Transformer model move further and further away from the real trajectory points, while the vehicle trajectory prediction method of the present invention can still make accurate predictions.
  • hist represents the historical trajectory coordinates
  • gt represents the future real trajectory coordinates
  • ours represents the vehicle trajectory prediction method of the present invention
  • tf represents the Transformer model
  • the horizontal coordinate is the vehicle x-direction coordinate
  • the vertical coordinate is the vehicle y-direction coordinate.
  • FIGS. 14 to 19 show how the evaluation indicators RMSE, MAEx and MAEy change with the prediction frame length, where RMSE, MAEx and MAEy respectively denote the root mean square error, the mean absolute error in the x direction and the mean absolute error in the y direction,
  • the abscissa frame represents the time frame.
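  • For reference, these indicators can be computed per prediction frame as sketched below; taking the RMSE over the 2-D position error is one common convention and an assumption about the exact definition used.

```python
import numpy as np

def trajectory_metrics(pred, gt):
    """pred, gt: arrays of shape (num_samples, num_frames, 2) holding (x, y) positions.

    Returns per-frame RMSE, MAEx and MAEy averaged over all samples.
    """
    err = pred - gt                                              # (N, T, 2)
    rmse = np.sqrt(np.mean(np.sum(err ** 2, axis=-1), axis=0))   # root mean square error per frame
    mae_x = np.mean(np.abs(err[..., 0]), axis=0)                 # mean absolute error in x per frame
    mae_y = np.mean(np.abs(err[..., 1]), axis=0)                 # mean absolute error in y per frame
    return rmse, mae_x, mae_y
```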
  • Figure 14, Figure 15 and Figure 16 respectively show the changes of RMSE, MAEx and MAEy of a sample trajectory with the frame length.
  • Figure 17, Figure 18 and Figure 19 respectively show the changes of RMSE, MAEx and MAEy with the frame length of another sample trajectory. It can be seen that as the prediction frame length increases, the RMSE of the Transformer model increases almost exponentially, and MAEx and MAEy also increase linearly.
  • In contrast, the RMSE of the vehicle trajectory prediction method of the present invention remains at a low level with only small fluctuations, which shows that the long-term prediction ability of the method is strong; at the same time, the MAEx of the method is smaller than that of the Transformer, which also shows that the prediction results of the vehicle trajectory prediction method of the present invention are more accurate when the vehicle maneuvers laterally.
  • a vehicle trajectory prediction system which can be used to implement the above vehicle trajectory prediction method.
  • the vehicle trajectory prediction system includes a data acquisition module, a data preprocessing module, an information extraction module, an encoding module and a prediction module.
  • the data acquisition module is used to obtain the historical movement trajectory data of the vehicle to be predicted and the adjacent vehicles of the vehicle to be predicted;
  • the data preprocessing module is used to extract, through the preset multi-dimensional dynamic scene feature extraction function, the historical motion trajectory data of the vehicle to be predicted and of each adjacent vehicle, to obtain the multi-dimensional dynamic scene feature vector of the vehicle to be predicted;
  • the information extraction module is used to extract the multi-dimensional dynamic scene feature vector of the vehicle to be predicted through the preset information extraction neural network to obtain the traffic perception information of the vehicle to be predicted;
  • the encoding module is used to encode the traffic perception information and historical motion state data of the vehicle to be predicted through a preset time feature encoder to obtain the hidden state information of the vehicle to be predicted;
  • the prediction module is used to obtain, based on the hidden state information of the vehicle to be predicted, the hybrid attention matrix of the vehicle to be predicted, to assign weights to the hidden state information through the hybrid attention matrix, and then to sequentially perform maximum pooling processing and fully connected processing to obtain the trajectory prediction value of the vehicle to be predicted.
  • the adjacent vehicles of the vehicle to be predicted include vehicles in eight directions: in front of, behind, to the left of, to the right of, left-front of, left-rear of, right-front of and right-rear of the vehicle to be predicted.
  • The historical motion trajectory data of the vehicle to be predicted and of each adjacent vehicle is processed through the preset multi-dimensional dynamic scene feature extraction function: the position data, speed data and acceleration data in the driving direction and in the direction perpendicular to the driving direction are extracted and used to obtain the multi-dimensional dynamic scene feature vector of the vehicle to be predicted.
  • the information extraction neural network includes a first convolution layer, a second convolution layer, a maximum pooling layer and a fully connected layer connected in sequence, wherein the first convolution layer is used to fuse the position data, speed data and acceleration data of the vehicle to be predicted and of its adjacent vehicles in the traveling direction and in the direction perpendicular to the traveling direction, to obtain the first fusion features of the vehicle to be predicted and of its adjacent vehicles;
  • the second convolutional layer is used to fuse the first fusion features of the vehicle to be predicted and of any adjacent vehicle to obtain the second fusion feature;
  • the maximum pooling layer is used to process the second fusion feature by maximum pooling;
  • the fully connected layer is used to fully connect the second fusion feature after the maximum pooling process to obtain the traffic perception information of the vehicle to be predicted.
  • the preset temporal feature encoder is a long short-term memory network encoder.
  • obtaining the hybrid attention matrix of the vehicle to be predicted based on the hidden state information of the vehicle to be predicted includes: obtaining the hybrid attention matrix ⁇ of the vehicle to be predicted by the following formula:
  • softmax is the normalized exponential function
  • ⁇ t is the time weight vector
  • ⁇ f is the feature weight vector
  • g_t(H, h_{t+1}) = H h_{t+1}^T
  • g_f(H, h_{t+1}) = h_{t+1}(W_f H)
  • g f is the feature weight cosine correlation function
  • g t is the time weight cosine correlation function
  • W f is the feature matrix of the vehicle to be predicted in the historical T h frame
  • H is the hidden state information of the vehicle to be predicted in the historical T h frame
  • h t is the hidden state data of the vehicle to be predicted at time t
  • h t+1 is the hidden state data of the vehicle to be predicted at time t+1.
  • Assigning weights to the hidden state information of the vehicle to be predicted through the hybrid attention matrix, and then obtaining the trajectory prediction value of the vehicle to be predicted through maximum pooling processing and fully connected processing in sequence, includes:
  • the trajectory prediction value y t+1 of the vehicle to be predicted is obtained through the following formula:
  • ⊙ denotes element-wise multiplication of matrices
  • o_t corresponds to the moments most beneficial to improving the prediction accuracy
  • O_{i,j} is the hidden state information of the vehicle to be predicted after the weights have been assigned
  • h′_{t+1} is the hidden state information of the vehicle to be predicted at time t+1, obtained by fully connecting o_t, o_f and h_{t+1}, where concat denotes the concatenation operation.
  • W 1 and W 2 are preset weights.
  • each step involved in the foregoing embodiments of the vehicle trajectory prediction method can be referred to the functional description of the corresponding functional modules of the vehicle trajectory prediction system in the embodiments of the present invention, and will not be described again here.
  • the division of modules in the embodiments of the present invention is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • each functional module in each embodiment of the present invention may be integrated into one processing module, may exist physically alone, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or software function modules.
  • In yet another embodiment, a computer device includes a processor and a memory.
  • the memory is used to store a computer program.
  • the computer program includes program instructions.
  • the processor is used to execute the computer program.
  • a storage medium stores program instructions.
  • the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; it is the computing core and control core of the terminal and is suitable for loading and executing one or more instructions in the computer storage medium to implement the corresponding method flow or corresponding functions; the processor described in the embodiment of the present invention can be used to perform the operations of the vehicle trajectory prediction method.
  • the present invention also provides a storage medium, specifically a computer-readable storage medium (Memory).
  • the computer-readable storage medium is a memory device in a computer device and is used to store programs and data.
  • the computer-readable storage medium here may include a built-in storage medium in the computer device, and of course may also include an extended storage medium supported by the computer device.
  • the computer-readable storage medium provides storage space, and the storage space stores the operating system of the terminal.
  • one or more instructions suitable for being loaded and executed by the processor are also stored in the storage space. These instructions may be one or more computer programs (including program codes).
  • the computer-readable storage medium here may be a high-speed RAM or a non-volatile memory, such as at least one disk memory.
  • One or more instructions stored in the computer-readable storage medium can be loaded and executed by the processor to implement the corresponding steps of the vehicle trajectory prediction method in the above embodiments.
  • embodiments of the present invention may be provided as methods, systems, or computer program products. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operating steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Abstract

The present invention relates to the field of autonomous driving simulation, and discloses a vehicle trajectory prediction method and system, a computer device and a storage medium. The method comprises: extracting the historical movement trajectory data of a vehicle to be predicted and of the adjacent vehicles of the vehicle to be predicted to obtain a multi-dimensional dynamic scene feature vector of the vehicle to be predicted; extracting the multi-dimensional dynamic scene feature vector of the vehicle to be predicted to obtain traffic perception information of the vehicle to be predicted; encoding the traffic perception information and historical motion state data of the vehicle to be predicted to obtain hidden state information of the vehicle to be predicted; and obtaining, according to the hidden state information of the vehicle to be predicted, a hybrid attention matrix of the vehicle to be predicted, assigning weights to the hidden state information of the vehicle to be predicted by means of the hybrid attention matrix, and then sequentially performing maximum pooling processing and fully connected processing to obtain a trajectory prediction value of the vehicle to be predicted, so that the accuracy of vehicle trajectory prediction is effectively improved.
PCT/CN2022/119688 2022-05-19 2022-09-19 Procédé et système de prédiction de trajectoire de véhicule, dispositif informatique et support de stockage WO2023221348A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210545736.1A CN114881339A (zh) 2022-05-19 2022-05-19 车辆轨迹预测方法、系统、计算机设备及存储介质
CN202210545736.1 2022-05-19

Publications (1)

Publication Number Publication Date
WO2023221348A1 true WO2023221348A1 (fr) 2023-11-23

Family

ID=82676978

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/119688 WO2023221348A1 (fr) 2022-05-19 2022-09-19 Procédé et système de prédiction de trajectoire de véhicule, dispositif informatique et support de stockage

Country Status (2)

Country Link
CN (1) CN114881339A (fr)
WO (1) WO2023221348A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881339A (zh) * 2022-05-19 2022-08-09 长安大学 车辆轨迹预测方法、系统、计算机设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160101779A1 (en) * 2013-05-31 2016-04-14 Toyota Jidosha Kabushiki Kaisha Movement trajectory predicting device and movement trajectory predicting method
CN111046919A (zh) * 2019-11-21 2020-04-21 南京航空航天大学 一种融合行为意图的周围动态车辆轨迹预测系统及方法
US20200324794A1 (en) * 2020-06-25 2020-10-15 Intel Corporation Technology to apply driving norms for automated vehicle behavior prediction
CN114372570A (zh) * 2021-12-14 2022-04-19 同济大学 一种多模态车辆轨迹预测方法
CN114881339A (zh) * 2022-05-19 2022-08-09 长安大学 车辆轨迹预测方法、系统、计算机设备及存储介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117493424A (zh) * 2024-01-03 2024-02-02 湖南工程学院 一种不依赖地图信息的车辆轨迹预测方法
CN117493424B (zh) * 2024-01-03 2024-03-22 湖南工程学院 一种不依赖地图信息的车辆轨迹预测方法
CN117787525A (zh) * 2024-02-23 2024-03-29 煤炭科学技术研究院有限公司 基于井下多个目标对象的轨迹预测方法和预警方法
CN117787525B (zh) * 2024-02-23 2024-05-14 煤炭科学技术研究院有限公司 基于井下多个目标对象的轨迹预测方法和预警方法
CN117775078A (zh) * 2024-02-28 2024-03-29 山西阳光三极科技股份有限公司 一种基于深度学习的矿内货运列车行驶方向判断方法
CN117775078B (zh) * 2024-02-28 2024-05-07 山西阳光三极科技股份有限公司 一种基于深度学习的矿内货运列车行驶方向判断方法

Also Published As

Publication number Publication date
CN114881339A (zh) 2022-08-09

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22942376

Country of ref document: EP

Kind code of ref document: A1