CN115719479A - Track prediction method based on encoder-decoder architecture - Google Patents
- Publication number: CN115719479A (application CN202211524442.7A)
- Authority: CN (China)
- Prior art keywords: network, attention, information, neural network, vector
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a track prediction method based on an encoder-decoder architecture, relating to the technical field of automatic driving and comprising the following steps: encoding a road picture with a convolutional neural network to obtain road network feature information, and merging it with object feature information to obtain an extraction graph; inputting the extraction graph into a graph attention network with an attention mechanism to output feature vectors with attention information; inputting these feature vectors into a first long short-term memory recurrent neural network to output first result data; inputting the first result data into a decoder graph attention network to obtain second result data with attention; and inputting the second result data into a second long short-term memory recurrent neural network to output a track prediction result, wherein the loss function of the second long short-term memory recurrent neural network comprises a position information loss and a collision area constraint. The invention achieves high track prediction efficiency, high prediction accuracy and good reliability.
Description
Technical Field
The invention relates to the technical field of automatic driving, in particular to a track prediction method based on an encoder-decoder framework.
Background
In the field of automatic driving, accurate perception and prediction of the traffic environment is the prerequisite for ensuring the safe and efficient operation of an automatic driving vehicle. Facing a complex and variable traffic environment, the automatic driving vehicle needs to acquire surrounding traffic environment information, road network information and the like in real time, and decide a safe and efficient driving track through an algorithm, so reasonable track prediction of the surrounding traffic participants is needed. The targets of track prediction are generally pedestrians, vehicles, bicycles and the like.
Conventional track prediction methods include rule-based methods and data-driven methods. Many rule-based methods achieve high accuracy in short-range prediction by establishing a kinematic model, but the complex and variable traffic environment is highly nonlinear, so such model-based methods struggle to achieve high accuracy in long-range prediction and cannot guarantee the efficient and safe operation of an automatic driving vehicle. Data-driven track prediction methods typically employ long short-term memory (LSTM) networks and recurrent neural networks (RNNs) to predict future trajectories from historical trajectory data. LSTM-based methods model track prediction as a sequence learning and generation task and exploit the interaction between the target's node state and the node states of surrounding pedestrians or vehicles, achieving good results. Existing models have further improved track prediction accuracy by introducing an attention mechanism on top of the LSTM.
However, existing automatic driving track prediction methods have the following defects: conventional data-driven methods lack the support of road network information in the model, although road network information can provide important data support for track prediction, so they cannot meet the requirement of high-accuracy prediction in the current complex traffic environment; and existing models lack safety constraints, in that the case where a predicted vehicle or pedestrian collides is not considered, yet collisions are unsafe and rarely occur, so the lack of a safety constraint reduces the prediction accuracy of the model and cannot meet the efficiency and safety requirements of automatic driving.
Disclosure of Invention
The present invention is directed to a method for trajectory prediction based on an encoder-decoder architecture, which alleviates the above-mentioned problems.
In order to alleviate the above problems, the technical scheme adopted by the invention is as follows:
the invention provides a track prediction method based on an encoder-decoder framework, which is characterized by comprising the following steps of:
s1, extracting the features of a road picture based on a convolutional neural network to obtain road network feature information, acquiring all object feature information in a road captured by a radar, and obtaining an extracted vector graph based on the object feature information and the road network feature information;
s2, inputting the extracted vector diagram into a diagram attention network with an attention mechanism, and outputting to obtain a feature vector with attention information;
s3, inputting the feature vector with the attention information into a first long-short term memory recurrent neural network, and outputting to obtain first result data;
s4, inputting the first result data into a decoder graph attention network, and performing information aggregation on the first result data to obtain second result data;
and S5, inputting the second result data into a second long-short term memory recurrent neural network, and outputting to obtain a track prediction result, wherein a loss function of the second long-short term memory recurrent neural network comprises position information loss and collision area constraint.
In a preferred embodiment of the present invention, the step S1 includes the following steps:
s11, inputting the road picture into a convolutional neural network, and outputting to obtain road network characteristic information;
s12, acquiring all object characteristic information in the road picture captured by the radar, numbering type information in the object characteristic information by adopting one-hot coding, and obtaining object characteristic vectors;
s13, expanding the object feature vectors by adopting a full-connection network;
and S14, merging the expanded object feature vectors and the road network feature information to obtain an extracted vector diagram.
In a preferred embodiment of the present invention, in step S13, the expansion formula is:
v̂_i = W_1 · v_i
where v̂_i represents the expanded i-th vector, W_1 represents the weight coefficients of the fully connected layer, v_i represents the feature vector of the i-th object, and · represents the dot product operation of the expansion.
In a preferred embodiment of the present invention, in step S14, there are a plurality of extracted vector graphs respectively representing states at different times, where the states include the object feature vectors and the road network feature information.
In a preferred embodiment of the present invention, in step S2, the calculation formula of the feature vector with attention information is
â_i = σ(W_2 · v̂_i)
where â_i represents the attention weight vector, W_2 represents the coefficients in the graph attention network with attention mechanism, and σ represents the dot product operation of the vector.
In a preferred embodiment of the present invention, the long short-term memory recurrent neural network comprises an input gate, a forgetting gate, an output gate, a hidden layer and a cell unit.
In a preferred embodiment of the present invention, in step S3, the first result data is
h_t = LSTM(v_t, h_{t-1}; W_3)
where v_t represents the vector obtained by embedding all the feature vectors in the matrix at time t, and W_3 represents the weight coefficients in the first long short-term memory recurrent neural network.
In a preferred embodiment of the present invention, in step S4, the second result data is
g_t = G(h_t; W_4)
where W_4 represents the weight parameters in the decoder graph attention network, and G represents the dot product operation in the decoder graph attention network.
In a preferred embodiment of the present invention, in step S5, the first track prediction result is
h_{t+1} = LSTM(g_t, h_0; W_5)
where h_{t+1} is obtained from the input g_t, h_0 is the initial state of the hidden layer, and W_5 represents the weight coefficients of the second long short-term memory recurrent neural network.
After h_{t+1} is formed, the self-loop is started to obtain the track prediction result at any other prediction time, i.e.
h_{t+x} = LSTM(x_{t+x}, h_{t+x-1}; W_5)
where h_{t+x} represents the predicted result at time t+x, x > 1, the input x_{t+x} = h_{t+x-1}, and h_{t+x-1} represents the predicted result at the previous time.
In a preferred embodiment of the present invention, in step S5, the loss function of the second long short-term memory recurrent neural network is
L = Σ_t L_pos^t + S_collision
where L_pos^t is the position loss at time t and S_collision is the collision area.
Compared with the prior art, the invention has the beneficial effects that:
by adopting an encoder-decoder framework, the efficiency of predicting the track is improved; road network information and characteristic information are jointly embedded into the vector, so that the reliability of the track prediction is improved; an attention mechanism is applied to the vector, so that the result of the track prediction is more accurate; and the constraint of collision area is added into the loss function, so that the loss value is converged more quickly, and the prediction precision is further improved.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of the LSTM network of the present invention;
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention.
Referring to fig. 1, the present invention discloses a track prediction method based on an encoder-decoder architecture, which specifically comprises the following operation processes:
1. constructing an extracted vector graph
In the invention, a convolutional neural network is adopted to encode the road picture using a stack of three kinds of layers: convolutional layers, pooling layers and a fully connected layer. The convolutional layers extract road network features, the pooling layers reduce the dimensionality of those features to reduce the amount of calculation, and the fully connected layer finally produces the desired output, namely the overall road network feature information.
At the current moment, the road picture is first input into a convolutional neural network (CNN) to obtain a multi-dimensional vector. This vector is the road network feature information, denoted e_t for the t-th frame:
e_t = CNN(picture)
The CNN comprises convolutional layers, pooling layers and fully connected layers.
In the object feature information (captured by radar), the type information is then encoded. To obtain a better effect, it is numbered using one-hot encoding, so that, for example, type 4 of 4 types is encoded as [0, 0, 0, 1], giving a 4-dimensional vector.
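The one-hot numbering described above can be sketched minimally as follows; the 1-based type index and the helper name `one_hot` are illustrative assumptions, not from the patent:

```python
def one_hot(type_id, num_types):
    """Encode an object-type index (1-based) as a one-hot vector,
    matching the patent's example: type 4 of 4 types -> [0, 0, 0, 1]."""
    vec = [0] * num_types
    vec[type_id - 1] = 1
    return vec

print(one_hot(4, 4))  # -> [0, 0, 0, 1]
```

Each object type thus maps to a fixed-length vector that can later be concatenated with the other object features.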
At this point the dimensions of the object feature information and the road network feature information are not equal. So that the object features and the road network features can form an extraction graph, a fully connected layer is adopted to expand the dimension of the object feature information. The expanded i-th vector is v̂_i = W_1 · v_i, where W_1 represents the weight coefficients of the fully connected layer.
After all object feature vectors of the current frame are expanded, they are combined with the road network feature information into an extraction graph, represented as an edge set and a point set; the point set holds the object feature vector information and is represented as a matrix.
Each matrix stores the road network feature information and the feature information of all current objects (such as vehicles and pedestrians). Since track prediction requires observing information over a period of time, there are multiple such matrices, each representing the road network feature information and all object feature information of one frame. Together with the collection of edge sets, the multiple matrices form multiple extraction graphs, represented by A = {A_{t-r+1}, A_{t-r+2}, ..., A_t}, where A_t is the matrix at time t.
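The construction of one frame's node matrix can be sketched as follows; the helper name `build_frame_matrix` and the toy dimensions are hypothetical, for illustration only:

```python
def build_frame_matrix(road_vec, object_vecs):
    """Stack the road-network feature vector and the expanded per-object
    feature vectors into the node matrix A_t of one frame's extraction graph."""
    return [road_vec] + object_vecs

# r = 3 observed frames, each with a 2-D road vector and two expanded objects
A = [build_frame_matrix([0.1, 0.2], [[1.0, 0.0], [0.0, 1.0]]) for _ in range(3)]
print(len(A), len(A[0]))  # -> 3 3
```

The resulting list of matrices corresponds to the observation window A = {A_{t-r+1}, ..., A_t}.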
2. Encoder layer
The encoder layer includes a Graph Attention Network (GAT) with Attention mechanism and a first long-short term memory recurrent neural Network.
The graph attention network with the attention mechanism aggregates the input feature vectors; applying the attention mechanism to the graph structure allows the track prediction to obtain a more accurate result. The road network feature information obtained by the convolutional neural network and the object feature vectors are input into the graph attention network, producing attended feature vectors, which serve as the input of the next-layer long short-term memory recurrent neural network.
An attention mechanism is a resource allocation scheme that, when computing power is limited, allocates computing resources to more important tasks while alleviating the information overload problem. In neural network learning, generally, the more parameters a model has, the stronger its expressive ability and the more information it can store, but this can cause information overload. By introducing an attention mechanism, the model focuses on the information most critical to the current task among the many inputs, reduces its attention to other information, and even filters out irrelevant information, thereby solving the overload problem and improving the efficiency and accuracy of task processing. After obtaining the matrices corresponding to the multiple frames, the attention mechanism is applied to each matrix, i.e. each is input into the graph attention network, yielding matrices with attention that play a key role in the subsequent prediction. Applying the graph attention network to each feature vector in the matrix gives the feature vector with attention information
â_i = σ(W_2 · v̂_i)
where â_i represents the attention weight vector, W_2 represents the coefficients in the graph attention network with attention mechanism, and σ represents the dot product operation of the vector.
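A toy illustration of attention over a frame's node vectors follows; this simplified single-head dot-product form (with no learned projection W_2 and softmax-normalized scores) is an assumption for illustration, not the patent's exact network:

```python
import math

def softmax(xs):
    """Numerically stable softmax: scores are shifted by the max before exp."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, nodes):
    """Score each node by its dot product with the query, normalize the
    scores with a softmax, and return the attention-weighted sum of nodes."""
    scores = softmax([sum(q * n for q, n in zip(query, node)) for node in nodes])
    return [sum(w * node[i] for w, node in zip(scores, nodes))
            for i in range(len(query))]

out = attend([1.0, 0.0], [[2.0, 2.0], [2.0, 2.0]])
print(out)  # identical nodes get equal weight, so the result is the node itself
```

A real graph attention network additionally learns the projection weights and restricts attention to graph neighbors.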
FIG. 2 is a detailed diagram of the network structure of a long short-term memory recurrent neural network (LSTM). The inner structure comprises an input gate, a forgetting gate, an output gate, a hidden layer and a cell unit; x_{t-1}, x_t, etc. are inputs, and h_{t-1}, h_t, etc. are hidden-layer outputs. In the first long short-term memory network, the multiple matrices are input directly to the LSTM, i.e. input x_{t-1} is the matrix A_{t-1} corresponding to time t-1. In the second long short-term memory network, there is only one matrix g_t, so an LSTM self-loop operation is required: g_t is input into the second network to obtain the first output h_1; h_1 then serves as the next input x_2, and so on, the output of the previous step becoming the input of the next, i.e. x_t = h_{t-1}, looping to obtain the output results.
Feature vectors with attention information obtained from the attention network with the attention mechanism are input into the first long-term and short-term memory recurrent neural network, so that long-term rules can be learned, and more accurate predicted trajectories can be obtained.
Inputting the matrices obtained from the GAT into the LSTM network means that the data in the time interval [t-r+1, t] has been observed. The first result data is calculated as
h_t = LSTM(v_t, h_{t-1}; W_3)
where v_t is the vector obtained after embedding all the feature vectors in the matrix at time t, and W_3 represents the weight coefficients in the first long short-term memory recurrent neural network.
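The encoder recurrence h_t = LSTM(v_t, h_{t-1}; W_3) can be sketched as a fold over the observed frames; the `cell` argument and the toy running-sum update are stand-ins for the learned LSTM, not the patent's implementation:

```python
def encode(cell, inputs, h0):
    """Run a recurrent cell over the observed frames, mirroring
    h_t = LSTM(v_t, h_{t-1}; W_3): each step consumes one frame's
    embedded vector and the previous hidden state."""
    h = h0
    for v in inputs:
        h = cell(v, h)
    return h

# toy cell: a running sum stands in for the learned recurrence
h_final = encode(lambda v, h: h + v, [1.0, 2.0, 3.0], 0.0)
print(h_final)  # -> 6.0
```

The final hidden state summarizes the whole observation window and is what the decoder layer consumes.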
3. Decoder layer
The decoder layer includes a decoder graph attention network and a second long-short term memory recurrent neural network.
The first result data is aggregated once more through the decoder graph attention network to obtain more stable information, namely the second result data, calculated as:
g_t = G(h_t; W_4)
where W_4 represents the weight parameters in the decoder graph attention network, and G represents the dot product operation in the decoder graph attention network.
The second result data is then input into the long short-term memory recurrent neural network of the next decoder layer (the second long short-term memory recurrent neural network).
The second long short-term memory recurrent neural network self-loops on the attention network information (the second result data) obtained in the previous stage to obtain the track prediction results for the prediction interval [t+1, t+k]. The track prediction result at time t+1 is determined by the result of the previous layer, where h_0 represents the initial parameter of the LSTM and W_5 represents the weight coefficients of the LSTM:
h_{t+1} = LSTM(g_t, h_0; W_5)
The track prediction results at the other times are obtained by self-loop of the second long short-term memory recurrent neural network:
h_{t+x} = LSTM(x_{t+x}, h_{t+x-1}; W_5), where x > 1 and the input x_{t+x} = h_{t+x-1}.
The track prediction result, i.e. h_{t+x}, is then passed through a fully connected linear layer to output the position of the prediction result:
(x, y) = Linear(h_{t+x}; W_7)
where Linear denotes the linear layer and the position is represented by a two-dimensional vector (x, y), i.e. the x and y coordinates of the object.
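The decoder self-loop can be sketched as follows; the averaging `cell` is a toy stand-in for the second LSTM, and the final linear projection to (x, y) is omitted, so this is an illustrative sketch rather than the patent's network:

```python
def rollout(cell, g_t, h0, k):
    """Decoder self-loop: the first prediction comes from (g_t, h0); after
    that, each output is fed back as the next input (x_{t+x} = h_{t+x-1}),
    producing k future hidden states in total."""
    h = cell(g_t, h0)
    states = [h]
    for _ in range(k - 1):
        h = cell(h, h)  # previous output serves as both input and hidden state
        states.append(h)
    return states

# toy cell: averages input and hidden state
preds = rollout(lambda x, h: 0.5 * (x + h), 4.0, 0.0, 3)
print(preds)  # -> [2.0, 2.0, 2.0]
```

In the full method, each rolled-out state would then be projected by the linear layer to a predicted (x, y) position.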
The loss function of the second long short-term memory recurrent neural network not only includes the loss of the position information but also takes the collision area as a safety constraint, so that a more accurate prediction result can be obtained:
L = Σ_t L_pos^t + S_collision
where L_pos^t is the position loss at time t and S_collision is the collision area.
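A minimal sketch of a loss combining position error and the collision-area constraint follows; the squared-error form and the weighting factor `lam` are assumptions, since the patent only states that both terms are included:

```python
def trajectory_loss(pred, target, collision_area, lam=1.0):
    """Sum of squared position errors over the predicted points, plus a
    collision-area safety penalty weighted by `lam` (an assumed hyperparameter)."""
    pos = sum((px - tx) ** 2 + (py - ty) ** 2
              for (px, py), (tx, ty) in zip(pred, target))
    return pos + lam * collision_area

print(trajectory_loss([(0, 0), (1, 1)], [(0, 1), (1, 1)], collision_area=0.5))
# -> 1.5
```

Penalizing the collision area pushes the optimizer away from physically implausible (colliding) trajectories, which is the stated rationale for the safety constraint.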
The invention discloses an efficient graph-learning track prediction method with safety constraints, which realizes track prediction with road network information and safety constraints, improves the accuracy and safety of track prediction, improves prediction efficiency, and realizes prediction for multiple kinds of targets, extensible to more types.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method for trajectory prediction based on an encoder-decoder architecture, comprising the steps of:
s1, extracting the features of a road picture based on a convolutional neural network to obtain road network feature information, acquiring all object feature information in a road captured by a radar, and obtaining an extracted vector graph based on the object feature information and the road network feature information;
s2, inputting the extracted vector diagram into an attention network with an attention mechanism, and outputting to obtain a feature vector with attention information;
s3, inputting the feature vector with the attention information into a first long-short term memory recurrent neural network, and outputting to obtain first result data;
s4, inputting the first result data into a decoder graph attention network, and performing information aggregation on the first result data to obtain second result data;
and S5, inputting the second result data into a second long-short term memory recurrent neural network, and outputting to obtain a track prediction result, wherein a loss function of the second long-short term memory recurrent neural network comprises position information loss and collision area constraint.
2. The encoder-decoder architecture based trajectory prediction method of claim 1, wherein step S1 comprises the steps of:
s11, inputting the road picture into a convolutional neural network, and outputting to obtain road network characteristic information;
s12, acquiring all object characteristic information in the road picture captured by the radar, numbering the type information of the object characteristic information by adopting one-hot coding, and obtaining object characteristic vectors;
s13, expanding the object feature vectors by adopting a full-connection network;
and S14, merging the expanded object feature vectors and the road network feature information to obtain an extracted vector diagram.
3. The method of claim 2, wherein in step S13, the expansion formula is: v̂_i = W_1 · v_i, where v̂_i represents the expanded i-th vector, W_1 represents the weight coefficients of the fully connected layer, and v_i represents the feature vector of the i-th object.
4. The method of claim 3, wherein in step S14, the extracted vector graphs are multiple and respectively represent states at different times, and the states include object feature vectors and road network feature information.
5. The method of claim 4, wherein the feature vector with attention information is calculated as â_i = σ(W_2 · v̂_i), where â_i represents the attention weight vector, W_2 represents the coefficients in the graph attention network with attention mechanism, and σ represents the dot product operation of the vector.
6. The method of claim 5, wherein the long-term and short-term memory recurrent neural network comprises an input gate, a forgetting gate, an output gate, a hidden layer and a cell unit.
7. The encoder-decoder architecture based track prediction method as claimed in claim 6, wherein in step S3, the first result data is
h_t = LSTM(v_t, h_{t-1}; W_3)
where v_t represents the vector obtained by embedding all the feature vectors in the matrix at time t, and W_3 represents the weight coefficients in the first long short-term memory recurrent neural network.
8. The encoder-decoder architecture based track prediction method as claimed in claim 7, wherein in step S4, the second result data is
g_t = G(h_t; W_4)
where W_4 represents the weight parameters in the decoder graph attention network, and G represents the dot product operation in the decoder graph attention network.
9. The encoder-decoder architecture based track prediction method of claim 8, wherein in step S5, the first track prediction result is
h_{t+1} = LSTM(g_t, h_0; W_5)
where h_{t+1} is obtained from the input g_t, h_0 is the initial state of the hidden layer, and W_5 represents the weight coefficients of the second long short-term memory recurrent neural network;
after h_{t+1} is formed, the self-loop is started to obtain the track prediction result at any other prediction time, i.e.
h_{t+x} = LSTM(x_{t+x}, h_{t+x-1}; W_5)
where h_{t+x} represents the predicted result at time t+x, x > 1, the input x_{t+x} = h_{t+x-1}, and h_{t+x-1} represents the predicted result at the previous time.
10. The encoder-decoder architecture based track prediction method of claim 9, wherein in step S5, the loss function of the second long short-term memory recurrent neural network is L = Σ_t L_pos^t + S_collision, where L_pos^t is the position loss at time t and S_collision is the collision area.
Priority Applications (1)
- CN202211524442.7A, priority and filing date 2022-11-30: Track prediction method based on encoder-decoder architecture (CN115719479A)
Publications (1)
- CN115719479A, published 2023-02-28
Family
- ID: 85257190
Family Applications (1)
- CN202211524442.7A (CN115719479A), priority 2022-11-30, filed 2022-11-30, status pending
Country Status (1)
- CN: CN115719479A
Citations (3)
- KR101951595B1 (priority 2018-05-18, published 2019-02-22), Industry-University Cooperation Foundation, Hanyang University: Vehicle trajectory prediction system and method based on modular recurrent neural network architecture
- CN112347923A (priority 2020-11-06, published 2021-02-09), Changzhou University: Roadside-end pedestrian trajectory prediction algorithm based on a generative adversarial network
- CN114462667A (priority 2021-12-20, published 2022-05-10), Shanghai Intelligent Connected Vehicle Technology Center Co., Ltd.: SFM-LSTM neural network model-based street pedestrian trajectory prediction method
Non-Patent Citations (2)
- Zhang Zhiyuan et al.: "Pedestrian trajectory prediction model combining social features and attention", Journal of Xidian University, vol. 47, no. 1, February 2020, pages 10-17
- Li Linhui et al.: "Research on pedestrian trajectory prediction method based on social attention mechanism", Journal on Communications, vol. 41, no. 6, June 2020, pages 175-183
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination