CN112465273B - Unmanned vehicle track prediction method based on local attention mechanism - Google Patents


Publication number
CN112465273B
CN112465273B (granted publication of application CN202011560297.9A)
Authority
CN
China
Prior art keywords
vehicle
unmanned vehicle
vehicles
track
unmanned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011560297.9A
Other languages
Chinese (zh)
Other versions
CN112465273A (en)
Inventor
杨正才
石川
周奎
姚胜华
张友兵
尹长城
冯樱
刘成武
Current Assignee
Hubei University of Automotive Technology
Original Assignee
Hubei University of Automotive Technology
Priority date
Filing date
Publication date
Application filed by Hubei University of Automotive Technology
Priority to CN202011560297.9A priority Critical patent/CN112465273B/en
Publication of CN112465273A publication Critical patent/CN112465273A/en
Application granted granted Critical
Publication of CN112465273B publication Critical patent/CN112465273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06F18/214 Pattern recognition — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/044 Neural networks — Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Neural networks — Combinations of networks
    • G06N3/084 Learning methods — Backpropagation, e.g. using gradient descent


Abstract

The invention discloses an unmanned vehicle trajectory prediction method based on a local attention mechanism. Taking the historical trajectories of the vehicles around the unmanned vehicle as input, the method fully considers the influence of the interaction between the unmanned vehicle and adjacent vehicles on the unmanned vehicle's future trajectory. Spatial interaction between the unmanned vehicle and adjacent vehicles is constructed from the road geometry and the vehicle geometry; a local attention mechanism selects the subset of vehicles most strongly correlated with the unmanned vehicle's future trajectory, computes the correlation between those vehicles and the unmanned vehicle, and builds temporal interaction by a weighted sum of the correlations. The temporal and spatial interactions between the unmanned vehicle and the surrounding vehicles at the current moment are combined, and the combined interaction feature is input into a decoder followed by a fully connected layer to obtain the trajectory distribution and trajectory coordinates of the unmanned vehicle over a future period. During training, the loss is computed with a negative log-likelihood loss function and the parameters are updated by back-propagating the loss; the trained model predicts the unmanned vehicle's trajectory over a future period to assist subsequent decision-making and planning.

Description

Unmanned vehicle track prediction method based on local attention mechanism
Technical Field
The invention belongs to the field of intelligent driving, and particularly relates to an unmanned vehicle track prediction method based on a local attention mechanism.
Background
In recent years, with the boom in intelligent driving, artificial intelligence technology is increasingly used in automobiles, especially in vehicles developed for fully unmanned driving. Trajectory prediction — predicting where a vehicle will be in the next moment — is the basis of unmanned driving: subsequent actions can be taken without danger only if the vehicle's future position is predicted correctly. For example, if it is predicted that the vehicle is about to leave its current lane, whether that action will cause danger can be assessed in advance, and if so, an intervention can be made in advance.
Current vehicle trajectory prediction technologies can be divided into model-based methods and neural-network-based methods. Model-based methods, such as those built on a dynamic model or Kalman filtering, have been shown to achieve high prediction accuracy only over short horizons; once the prediction horizon grows, accuracy drops sharply. Neural-network-based methods, such as RNNs and LSTMs, address this degradation over long horizons: by fully mining the nonlinear relations in the historical information, they maintain satisfactory prediction accuracy even over long time spans.
However, current neural-network-based prediction algorithms only coarsely consider the historical trajectories at all moments, compressing them into the hidden state vector of the last moment. This single vector can hardly be guaranteed to contain the important information of every historical moment, so important content in the historical information is inevitably lost, and the finally predicted trajectory often deviates substantially from the vehicle's actual trajectory. The historical information with the greatest influence on the current prediction must therefore be extracted and the less influential information ignored. When the unmanned vehicle changes lanes, a human driver mainly observes the vehicles ahead of and behind the target lane to decide when the lane change is safe, giving less weight to the vehicles in the unmanned vehicle's own lane rather than weighting all vehicles equally. By computing a correlation score between the historical information of other vehicles and that of the unmanned vehicle, the vehicle trajectory information most relevant to the unmanned vehicle's motion at each moment can be extracted and input into a neural model for trajectory prediction, which saves computation and improves prediction accuracy.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to solve the problem that, in current trajectory prediction, most methods only coarsely consider the historical trajectory information of surrounding vehicles at all moments, insufficiently mine the interaction between surrounding vehicles and the unmanned vehicle, and therefore achieve low trajectory prediction accuracy.
In order to solve the problems, the invention adopts the technical scheme that the unmanned vehicle track prediction method based on the local attention mechanism comprises the following steps:
Step 1: acquire the historical movement track information of the unmanned vehicle and the surrounding vehicles, and preprocess the track information;
Step 2: construct an environment tensor for the unmanned vehicle's potential circle, and extract the spatial interaction between vehicles;
Step 3: extract the temporal interaction between vehicles using a local attention mechanism model;
Step 4: predict the future trajectory of the unmanned vehicle by training an LSTM model with an Encoder-Decoder structure.
further, in the step 1, historical movement track information of the vehicle and surrounding vehicles is collected, and the track information is preprocessed, and the specific method includes:
the method comprises the steps of intercepting a video of a recording camera into pictures, calibrating each picture, detecting a vehicle in each picture by using a target detection algorithm, recording the geometric center position of the corresponding vehicle as the position coordinate of the current moment, giving ID numbers corresponding to the vehicle, a lane where the vehicle is located and the current frame to obtain historical track information of the vehicle, taking the frame number in the track information as a timestamp index, filtering and smoothing the coordinates, arranging processed data according to the ascending order of the timestamp, and dividing the data into a training set, a verification set and a test set according to the ratio of 7:1:2, thereby obtaining a data set for model training and verification.
Further, in step 2, the unmanned vehicle's potential circle is constructed, and an empty tensor grid corresponding to the potential circle is built with the vehicle length and the road-structure width taken into account. For every surrounding vehicle at the current moment, it is judged whether it lies within the potential-circle range of the unmanned vehicle; if so, the hidden state vector of that vehicle at the last moment of the historical observation is filled into the tensor-grid cell corresponding to its latest position. The filled grid constitutes the environment tensor of the unmanned vehicle's potential circle; passing it through a convolution layer extracts the spatial interaction information S_t between vehicles at the current moment.
Further, in step 3, the local attention mechanism model is used to extract the temporal interaction vector between the surrounding vehicles and the unmanned vehicle, computed as follows:
the hidden state vector of the unmanned vehicle at the current moment is used to solve for the centre position p_t of a small window;
the window range is determined from the centre position p_t, and within it the correlation between the hidden state vector of each surrounding vehicle at the current moment and the hidden state vector of the unmanned vehicle is computed;
the computed correlations and the corresponding hidden state vectors of the surrounding vehicles are combined by weighted averaging to obtain the temporal interaction information T_t between vehicles at the current moment.
Further, in the step 4, the specific steps of predicting the future trajectory of the unmanned vehicle through an LSTM model of an Encoder-Decoder structure are as follows:
(1) from the vehicle historical tracks obtained in step 1, the trajectory coordinates over the whole observation period are input into a fully connected layer to obtain the word-embedding vectors of the trajectory coordinates at all moments;
(2) the word-embedding vector e_t of the trajectory coordinate at the current moment and the encoder hidden state vector h_{t-1} of the previous moment are input into the LSTM encoder to obtain the encoder hidden state vector h_t at the current moment; proceeding likewise yields the encoder hidden state vectors of the trajectory coordinates at all moments;
(3) the obtained encoder hidden state vectors h_t of all vehicles at the current moment are substituted into the corresponding cells of the empty tensor grid of the potential circle constructed in step 2, giving the environment tensor of the unmanned vehicle's potential circle;
(4) according to the step 3, the hidden state vector of the unmanned vehicle at the current moment is used for solving the center position of the small window, the surrounding vehicle range with the highest correlation degree with the unmanned vehicle is determined, the correlation degrees are calculated with the hidden state vector of the unmanned vehicle one by one, and the time interaction vector is obtained through weighted average;
(5) the spatial and temporal interaction vectors obtained in step 2 and step 3 are concatenated to give the combined interaction feature C_t; C_t and the decoder hidden state vector d_{t-1} of the previous moment are input into the LSTM decoder to obtain the hidden state vector d_t of the unmanned vehicle's decoder at the current moment;
(6) the hidden state vector d_t of the unmanned vehicle's decoder at the current moment is passed through a fully connected layer to obtain the probability distribution parameters of the predicted trajectory at t+1, from which the predicted trajectory coordinate of the unmanned vehicle at t+1 is obtained;
further, in step 4, training an LSTM model of an Encoder-Decoder structure, training the model to minimize a negative log-likelihood loss function as a target, performing back propagation according to an error of the loss function with respect to a process weight parameter, updating the process weight parameter by using a gradient descent algorithm, and storing the model weight parameter when the generalization capability of the trajectory prediction model is the best, thereby completing model training.
The invention provides an unmanned vehicle trajectory prediction method based on a local attention mechanism that jointly considers the spatial interaction between the unmanned vehicle and surrounding vehicles and the temporal dependency on the surrounding vehicles' sequences, and predicts the unmanned vehicle's future trajectory with an LSTM network of Encoder-Decoder structure, starting from the aim of improving the vehicle's anticipation capability. Vehicle historical track information — the vehicle ID, the lane ID the vehicle occupies, the acquisition-time ID, and the geometric-centre position coordinates of the vehicle in each frame — is acquired by a target recognition algorithm. A vehicle spatial-interaction tensor corresponding to the unmanned vehicle's potential circle is constructed from the road structure and the vehicle geometry: the vehicle historical track information is encoded, and the encoded hidden state vectors of vehicles inside the potential circle are filled into the corresponding tensor positions, fully accounting for the positional interaction imposed by the road structure. Based on the local attention mechanism, the subset of surrounding vehicles whose trajectories correlate most strongly with the unmanned vehicle's trajectory is computed, their correlations with the unmanned vehicle are calculated, and a weighted sum yields the temporal interaction information between vehicles. The spatial and temporal interactions are combined, and the combined interaction information is input into the decoder followed by a fully connected layer to obtain the unmanned vehicle's trajectory distribution and trajectory coordinates over a future period.
The loss error of each training iteration is computed with a negative log-likelihood loss function, the error is back-propagated for differentiation, and the parameters are updated by gradient descent to accelerate the convergence of model training. The finally trained model generalises well and maintains good prediction accuracy on different datasets, providing a basis for the subsequent decision-making of the intelligent driving vehicle and enabling it to run more reliably in complex traffic scenes.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a diagram of a preprocessed data format;
FIG. 3 is a schematic diagram of the construction of the unmanned vehicle's potential circle tensor;
FIG. 4 is a schematic diagram of a local attention mechanism extraction vehicle time interaction;
Detailed Description
The technical solutions of the present invention are further described below with reference to the accompanying drawings and specific embodiments, which are used only for facilitating the detailed understanding of the present invention by those skilled in the art, and are not intended to limit the scope of the present invention, and various modifications of equivalent forms of the present invention by those skilled in the art are included in the scope of the present invention defined by the appended claims.
In a method for predicting the trajectory of an unmanned vehicle based on a local attention mechanism, the driving trajectory of the unmanned vehicle over a future period is predicted during driving from the historical driving trajectories of the unmanned vehicle and the surrounding vehicles, providing sufficient information for the vehicle's subsequent planning and decision-making and thereby effectively avoiding traffic accidents caused by lane deviation. As shown in fig. 1, the vehicle trajectory prediction method comprises: preprocessing of vehicle track information, encoding of track information, construction of the spatial interaction vector between the unmanned vehicle and surrounding vehicles, construction of the temporal interaction vector between the unmanned vehicle and surrounding vehicles through a local attention mechanism, trajectory prediction output, and derivation and optimisation of model process parameters.
The method comprises the following specific implementation processes:
A. preprocessing the acquired data;
A1. The test vehicles are recorded on a section of road, with data acquired by a camera; the initial data format is a video file containing the vehicle information. The video file is cut into frames at a sampling frequency of 10 Hz. After each frame is calibrated, the vehicles in the image are detected using prior knowledge of target detection, the geometric centre position of each vehicle is determined, and the vehicle's track information — its local coordinates (x, y) at each moment — is extracted; the Frame_ID of the current acquisition moment, the corresponding Vehicle_ID, and the Lane_ID of the lane the vehicle occupies are recorded;
A2. The data at this point are in ".csv" format; they are read into a pandas DataFrame and smoothed with a Savitzky-Golay filter;
A3. The DataFrame is arranged into 5 columns through a resize function: the first column is the Vehicle_ID, the second the Frame_ID of the current acquisition moment, the third and fourth respectively the abscissa x and ordinate y in the vehicle's local coordinate system, and the fifth the Lane_ID of the lane the vehicle occupies. Finally, with Frame_ID as the timestamp, the processed data are sorted in ascending order; the final data processing result is shown in FIG. 2;
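The A1-A3 pipeline (5-column layout, Savitzky-Golay smoothing, timestamp sort, 7:1:2 split) can be sketched in Python. The column names follow the text above; the smoothing window length and polynomial order are illustrative assumptions, not values stated in the patent.

```python
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Arrange detections into the 5-column layout and smooth coordinates."""
    df = df[["Vehicle_ID", "Frame_ID", "x", "y", "Lane_ID"]].copy()
    # Savitzky-Golay smoothing per vehicle (window=7, order=2 are assumed values)
    for vid, g in df.groupby("Vehicle_ID"):
        if len(g) >= 7:
            df.loc[g.index, "x"] = savgol_filter(g["x"], 7, 2)
            df.loc[g.index, "y"] = savgol_filter(g["y"], 7, 2)
    return df.sort_values("Frame_ID").reset_index(drop=True)

def split_712(df: pd.DataFrame):
    """Chronological 7:1:2 train/validation/test split."""
    n = len(df)
    a, b = int(0.7 * n), int(0.8 * n)
    return df.iloc[:a], df.iloc[a:b], df.iloc[b:]

# toy data: one vehicle observed for 20 frames at 10 Hz
raw = pd.DataFrame({
    "Frame_ID": np.arange(20)[::-1],          # deliberately unsorted
    "Vehicle_ID": 1,
    "Lane_ID": 2,
    "x": np.linspace(0, 10, 20) + np.random.default_rng(0).normal(0, 0.05, 20),
    "y": np.full(20, 1.5),
})
data = preprocess(raw)
train, val, test = split_712(data)
```

Splitting chronologically (rather than shuffling) keeps validation trajectories in the future relative to training ones, which matches how the model is used.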
B. encoding input data
B1. Given the historical tracks of the vehicles at moment t, let X_t = {x_t^1, x_t^2, …, x_t^i} denote the trajectories of the i surrounding vehicles at the current moment t, where the observed length of a vehicle's historical track is t_h, i.e. the historical observation range is the driving track from moment t − t_h + 1 up to the current moment t;
B2. At the current moment t, each position coordinate of the i-th vehicle within the historical observation length is mapped to the corresponding word-embedding vector through a fully connected layer, namely e_t^i = φ(x_t^i; W_e), where φ(·) is the fully connected function and W_e is the weight of the fully connected layer. Similarly, the word-embedding vectors corresponding to all position coordinates of the i-th vehicle over the whole historical observation length are obtained: {e_{t−t_h+1}^i, …, e_t^i};
b3 word embedding vector of ith vehicle at current time t
Figure 224998DEST_PATH_IMAGE021
And the last moment
Figure DEST_PATH_IMAGE027
Temporal encoder implicit state vector
Figure 435400DEST_PATH_IMAGE028
The coded implicit state vector of the ith vehicle around the unmanned vehicle at the current time t is obtained through the coding of an LSTM coder
Figure DEST_PATH_IMAGE029
Namely:
Figure 381359DEST_PATH_IMAGE030
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE031
the LSTM encoder is responsible for encoding the track information of each vehicle into an implicit state vector,
Figure 527169DEST_PATH_IMAGE032
is the weight of the encoder;
b4, executing the same word embedding and encoding operation on the position coordinates of each vehicle at all times, and obtaining the implicit state vectors of all vehicles. Wherein
Figure DEST_PATH_IMAGE033
And
Figure 558579DEST_PATH_IMAGE034
respectively encoding hidden state vectors of the ith vehicle and the unmanned vehicle at the current time t;
similarly, the encoder state vectors corresponding to all the position coordinates of the first vehicle and the unmanned vehicle in the whole historical observation length can be obtained
Figure DEST_PATH_IMAGE035
Namely:
Figure 307092DEST_PATH_IMAGE036
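As a concrete illustration of B2-B4, the embedding and encoding steps can be sketched with a minimal NumPy LSTM cell. The layer sizes, random initialisation, and gate ordering are assumptions of the sketch, not details fixed by the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fc_embed(coord, W_e, b_e):
    # word-embedding of an (x, y) coordinate via one fully connected layer
    return np.tanh(W_e @ coord + b_e)

def lstm_step(e, h, c, W, U, b):
    """One LSTM encoder step; gates stacked as [input, forget, output, cell]."""
    H = h.size
    z = W @ e + U @ h + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2*H])
    o, g = sigmoid(z[2*H:3*H]), np.tanh(z[3*H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
E, H = 8, 16                                  # embedding / hidden sizes (toy)
W_e, b_e = rng.normal(0, 0.1, (E, 2)), np.zeros(E)
W = rng.normal(0, 0.1, (4 * H, E))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)

track = rng.normal(0, 1.0, (5, 2))            # 5 observed (x, y) positions
h, c = np.zeros(H), np.zeros(H)
for coord in track:                           # encode the whole observation
    h, c = lstm_step(fc_embed(coord, W_e, b_e), h, c, W, U, b)
```

The final `h` plays the role of the encoded hidden state vector h_t^i that is later filled into the environment tensor.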
C. extracting interaction information between vehicles
C1, extracting the space interaction between the unmanned vehicle and the surrounding vehicles by using the convolution layer;
The historical tracks of all vehicles around the unmanned vehicle are input into the encoder of the LSTM model, and the encoded hidden state vectors h_t^i of all surrounding vehicles at the current moment t are obtained from their coordinate positions at moment t. The encoded hidden state vectors of all surrounding vehicles at the current moment t are then passed through a linear transformation H_t^i = λ(W_l · h_t^i) to construct the tensor, where λ(·) is the linear transformation and W_l is a weight matrix learned by back-propagation.
An indicator function determines whether the position coordinate x_t^i of the i-th surrounding vehicle at moment t lies within the potential-circle range of the unmanned vehicle at moment t: as shown in fig. 3, the indicator is 1 if and only if x_t^i lies within the potential circle, and 0 otherwise, where the vehicles considered form the set of vehicles around the unmanned vehicle at moment t.
The potential-circle range of the unmanned vehicle is defined, with the unmanned vehicle's coordinate at moment t as the central origin, as the rectangular region whose lateral coordinate falls within the interval [−4.5 m, 4.5 m] and whose longitudinal coordinate falls within [−20 m, 30 m]. Since a vehicle is about 5.5 m long and a lane about 3 m wide, a tensor of dimension [9, 3] is constructed. A tensor built in this way preserves well both the spatial position information among the vehicles on the road and the road-structure information;
A convolution operation is performed on the tensor H at the unmanned vehicle's position: applying a convolution filter to H yields the interaction vector S_t of the spatial position information between the unmanned vehicle and the surrounding vehicles at the current moment, where the size of the filter is that of the current convolution operation;
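A NumPy sketch of C1: relative positions are mapped into the 9 × 3 grid covering [−4.5, 4.5] m laterally and [−20, 30] m longitudinally, vehicles outside the potential circle are dropped (indicator = 0), and a simple sum filter stands in for the learned convolution. The cell-size rounding and the filter are illustrative assumptions.

```python
import numpy as np

H = 4                        # toy hidden-state size
grid = np.zeros((9, 3, H))   # 9 longitudinal cells x 3 lanes, as in the text

def cell_of(dx, dy):
    """Map a position relative to the ego vehicle to a grid cell,
    or None when it falls outside the potential circle (indicator = 0)."""
    if not (-4.5 <= dx <= 4.5 and -20.0 <= dy <= 30.0):
        return None
    col = int((dx + 4.5) // 3.0)            # 3 m lane width
    row = int((dy + 20.0) // (50.0 / 9))    # ~5.5 m longitudinal cells
    return min(row, 8), min(col, 2)

rng = np.random.default_rng(0)
neighbours = [((1.0, 12.0), rng.normal(size=H)),   # inside the circle
              ((0.0, 40.0), rng.normal(size=H))]   # outside -> ignored
for (dx, dy), h_vec in neighbours:
    cell = cell_of(dx, dy)
    if cell is not None:
        grid[cell] = h_vec           # fill encoded hidden state at its cell

# 3x3 "convolution" with a sum filter (valid padding) over the grid
feat = np.zeros((7, 1, H))
for r in range(7):
    feat[r, 0] = grid[r:r + 3, 0:3].sum(axis=(0, 1))
S_t = feat.reshape(-1)               # flattened spatial-interaction vector
```

In the patent the filter weights are learned; the fixed sum filter here only demonstrates how the grid layout preserves lane and longitudinal structure.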
c2, extracting time interaction between the unmanned vehicle and surrounding vehicles by using the local attention mechanism model;
A local window is set; the window is intended to contain the encoded hidden state vectors of the historical tracks of those of the i surrounding vehicles at the current moment t that correlate most strongly with the unmanned vehicle;
From the encoded hidden state vector h_t^0 of the unmanned vehicle at moment t, the centre position of the local window is found as p_t = S · sigmoid(v_p^T · tanh(W_p · h_t^0)), where S is the hidden-state sequence length of all i vehicles around the unmanned vehicle at moment t, and v_p and W_p are parameter matrices learned by training.
As shown in fig. 4, from the obtained centre position p_t the window is determined to cover the range [p_t − D, p_t + D], i.e. only the hidden state vectors of the surrounding vehicles within the window at moment t are considered, where D is an integer whose value is chosen according to the actual situation;
The hidden state vector h_s of each surrounding vehicle within the window at moment t is correlated one by one with the hidden state vector h_t^0 of the unmanned vehicle at moment t, scoring the correlation between each surrounding-vehicle hidden state vector in the window and the unmanned vehicle's hidden state vector. The score α_t(s) measures this correlation for the s-th position in the window. To make the score shrink as the distance from the centre position p_t grows, a Gaussian product factor with mean p_t and standard deviation D/2 is applied after the sigmoid operation. The correlation evaluation function is score(h_t^0, h_s) = (h_t^0)^T · W_a · h_s, where W_a is an intermediate transition matrix that harmonises h_t^0 and h_s so that the matrix operation is well defined;
All encoder hidden state vectors h_s of the i surrounding vehicles within the window at moment t are scored one by one against the unmanned vehicle's encoder hidden state vector h_t^0 at that moment; each score is multiplied by the corresponding encoder hidden state vector and a weighted average is computed, giving the temporal interaction vector between the unmanned vehicle and the surrounding vehicles at moment t, namely T_t = Σ_s α_t(s) · h_s;
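A NumPy sketch of C2: predict the window centre from the ego hidden state, score the in-window hidden states, re-weight with a Gaussian of standard deviation D/2 centred on p_t, and take the weighted average. The dimensions and the softmax normalisation of the raw scores are assumptions of the sketch, since the patent's formula images are not reproduced.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def local_attention(h_ego, H_src, W_p, v_p, W_a, D):
    """Local attention: predict window centre p_t, score within the window
    [p_t - D, p_t + D], weight by a Gaussian with sigma = D/2, average."""
    S = H_src.shape[0]                                   # source length
    p_t = S * (1.0 / (1.0 + np.exp(-v_p @ np.tanh(W_p @ h_ego))))
    lo, hi = max(0, int(p_t) - D), min(S, int(p_t) + D + 1)
    scores = np.array([h_ego @ W_a @ H_src[s] for s in range(lo, hi)])
    weights = softmax(scores)
    gauss = np.exp(-((np.arange(lo, hi) - p_t) ** 2) / (2 * (D / 2) ** 2))
    weights = weights * gauss
    weights = weights / weights.sum()
    T_t = (weights[:, None] * H_src[lo:hi]).sum(axis=0)  # weighted average
    return T_t, p_t, weights

rng = np.random.default_rng(2)
Hd, S, D = 8, 10, 2
h_ego = rng.normal(size=Hd)                  # ego hidden state h_t^0
H_src = rng.normal(size=(S, Hd))             # surrounding-vehicle hidden states
W_p = rng.normal(0, 0.1, (Hd, Hd))
v_p = rng.normal(0, 0.1, Hd)
W_a = rng.normal(0, 0.1, (Hd, Hd))           # intermediate transition matrix
T_t, p_t, w = local_attention(h_ego, H_src, W_p, v_p, W_a, D)
```

Because p_t = S · sigmoid(·), the centre always falls inside the source sequence, so the window never degenerates to an empty range.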
C3. The spatial interaction features of the surrounding vehicles extracted at moment t are concatenated with the temporal interaction features to form the combined interaction feature of the unmanned vehicle and the surrounding vehicles at moment t, namely C_t = [S_t; T_t];
D. Decoder output
D1. Taking the combined interaction feature C_t of the unmanned vehicle and surrounding vehicles at moment t and the unmanned vehicle decoder's output d_{t−1} of the previous moment as the input of the LSTM decoder yields the hidden state vector of the unmanned vehicle's decoder at moment t, i.e. d_t = LSTM_dec(d_{t−1}, C_t; W_dec), where W_dec denotes the LSTM decoder weights obtained by network back-propagation;
E. Predicted trajectory output

E1. Having obtained the decoder hidden state vector $h^{dec}_t$ of the unmanned vehicle at time t, the probability distribution parameters $\theta_{t+1}$ of the predicted trajectory Y at time t+1 are obtained through a fully connected layer, i.e.

$\theta_{t+1} = f_{fc}\big(h^{dec}_t;\, W_{fc}\big)$

where $f_{fc}$ is the fully connected layer function and $W_{fc}$ are the fully connected layer weights obtained through network training;

$\theta_{t+1} = \big(\mu_{t+1},\, \sigma_{t+1},\, \rho_{t+1}\big)$

where $\mu_{t+1}$ is the mean of the coordinate distribution of the trajectory coordinate at time t+1 predicted by the unmanned vehicle at time t, $\sigma_{t+1}$ is the standard deviation of that coordinate distribution, and $\rho_{t+1}$ is its covariance coefficient. The trajectory coordinate at time t+1 predicted by the unmanned vehicle at time t is therefore drawn from:

$(x_{t+1},\, y_{t+1}) \sim \mathcal{N}\big(\mu_{t+1},\, \sigma_{t+1},\, \rho_{t+1}\big)$
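Drawing a predicted coordinate from such a bivariate Gaussian can be sketched as follows. The parameter names mirror the symbols above (means, standard deviations, correlation coefficient $\rho$); the sampling route via two standard normals is a standard construction, assumed rather than taken from the patent:

```python
import math
import random

def sample_bivariate(mu_x, mu_y, sigma_x, sigma_y, rho, rng=random):
    """Draw one (x, y) sample from a bivariate Gaussian parameterised by
    per-axis means, per-axis standard deviations, and correlation rho."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    x = mu_x + sigma_x * z1
    # correlate the second coordinate with the first through rho
    y = mu_y + sigma_y * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y
```

Taking the distribution mean $(\mu_x, \mu_y)$ instead of a random sample gives the deterministic point prediction.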
F. Model process parameter derivation and optimization

F1. The data set is divided into a training set, a validation set, and a test set in the ratio 7:1:2, and the trained model is repeatedly validated on the validation set to ensure consistent performance across the training and validation sets;

F2. The negative log-likelihood loss function is minimized during training; the parameters are updated by back-propagation to obtain the model process weights and biases at which the loss is minimal. The loss function is:

$\mathcal{L} = -\sum_{t} \log P\big(x_{t+1},\, y_{t+1} \,\big|\, \theta_{t+1}\big)$
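The per-step term of this loss, the negative log-density of a ground-truth point under the predicted bivariate Gaussian, can be sketched directly from the standard density formula (a generic formulation, with parameter names matching the symbols above):

```python
import math

def bivariate_nll(x, y, mu_x, mu_y, sigma_x, sigma_y, rho):
    """Negative log-likelihood of observing (x, y) under a bivariate
    Gaussian with the given means, standard deviations and correlation.
    Summing this over all prediction steps gives the training loss."""
    zx = (x - mu_x) / sigma_x
    zy = (y - mu_y) / sigma_y
    one_m_r2 = 1.0 - rho * rho
    z = zx * zx - 2.0 * rho * zx * zy + zy * zy
    log_pdf = -(z / (2.0 * one_m_r2)) \
              - math.log(2.0 * math.pi * sigma_x * sigma_y * math.sqrt(one_m_r2))
    return -log_pdf
```

The loss is smallest when the predicted mean sits on the observed point and the predicted spread is tight, which is exactly what gradient descent on this quantity encourages.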

Claims (4)

1. An unmanned vehicle trajectory prediction method based on a local attention mechanism, characterized by comprising the following steps:
Step 1: acquiring the historical motion trajectory information of the unmanned vehicle and the surrounding vehicles, and preprocessing the trajectory information;
Step 2: constructing an environment tensor for the potential circle of the unmanned vehicle, and extracting the spatial interaction between vehicles;
Step 3: extracting the temporal interaction between vehicles using a Local Attention model;
Step 4: predicting the future trajectory of the unmanned vehicle by training an LSTM model with an Encoder-Decoder structure;
In step 2, the potential circle of the unmanned vehicle is first constructed, and an empty tensor grid corresponding to the potential circle is built taking both the vehicle length and the road structure width into account; by judging whether each surrounding vehicle at the current time lies within the potential circle range defined by the unmanned vehicle, the hidden state vector of that vehicle at the last historical observation time is filled into the tensor grid position corresponding to the vehicle's last position; the environment tensor of the unmanned vehicle's potential circle is thereby constructed, and the spatial interaction information $S_t$ between vehicles at the current time is extracted through a convolutional layer;
In step 3, a Local Attention model is used to extract the temporal interaction vectors between the surrounding vehicles and the unmanned vehicle, computed as follows:
the hidden state vector of the unmanned vehicle at the current time is used to compute the center position $p_t$ of the small window; the range of the small window is determined from its center position, and within that range the relevance of the hidden state vectors of the surrounding vehicles at the current time to the hidden state vector of the unmanned vehicle is computed; the computed relevance scores and the corresponding hidden state vectors of the surrounding vehicles are weighted and averaged to obtain the temporal interaction information $c_t$ between vehicles at the current time.
2. The unmanned vehicle trajectory prediction method based on the local attention mechanism according to claim 1, wherein in step 1 the historical motion trajectory information of the vehicle and the surrounding vehicles is collected and preprocessed as follows:
the video from the recording camera is captured into frames and each frame is calibrated; a target detection algorithm detects the vehicles in each frame, the geometric center of each detected vehicle is recorded as its position coordinate at the current time, and ID numbers are assigned to the vehicle, the lane it occupies, and the current frame, yielding the vehicle's historical trajectory information; the frame number in the trajectory information is used as the timestamp index, the coordinates are filtered and smoothed, the processed data are arranged in ascending order by timestamp, and the data are divided into a training set, a validation set, and a test set in the ratio 7:1:2, giving the data set for model training and validation.
3. The unmanned vehicle trajectory prediction method based on the local attention mechanism according to claim 1, wherein in step 4 the future trajectory of the unmanned vehicle is predicted through an LSTM model with an Encoder-Decoder structure as follows:
(1) according to the historical trajectory of the vehicle obtained in step 1, the trajectory coordinates over the whole observation period are input into a fully connected layer to obtain the word-embedding vectors $e_t$ of the trajectory coordinates at all times;
(2) the word-embedding vector $e_t$ of the trajectory coordinate at the current time and the encoder hidden state vector $h_{t-1}$ of the previous time are input into the LSTM encoder to obtain the encoder hidden state vector $h_t$ at the current time; in the same way, the encoder hidden state vectors of the trajectory coordinates at all times are obtained;
(3) the encoder hidden state vectors $h_t$ of all vehicles at the current time are substituted into the corresponding positions of the empty tensor grid of the potential circle constructed in step 2, giving the environment tensor of the unmanned vehicle's potential circle;
(4) according to step 3, the hidden state vector of the unmanned vehicle at the current time is used to solve for the center position of the small window, determining the range of surrounding vehicles most relevant to the unmanned vehicle; relevance scores are computed one by one against the unmanned vehicle's hidden state vector, and the time interaction vector is obtained by weighted averaging;
(5) the spatial and temporal interaction vectors obtained in steps 2 and 3 are concatenated into the comprehensive interaction feature $C_t$; $C_t$ and the decoder hidden state vector $h^{dec}_{t-1}$ of the previous time are input into the LSTM decoder to obtain the current decoder hidden state vector $h^{dec}_t$ of the unmanned vehicle, i.e. $h^{dec}_t = \mathrm{LSTM}(C_t, h^{dec}_{t-1})$;
(6) the decoder hidden state vector $h^{dec}_t$ of the unmanned vehicle at the current time is passed through the fully connected layer to obtain the probability distribution parameters $\theta_{t+1}$ of the predicted trajectory at time t+1, from which the predicted trajectory coordinate of the unmanned vehicle at time t+1 is obtained.
4. The unmanned vehicle trajectory prediction method based on the local attention mechanism according to claim 3, wherein in step 4 the LSTM model with the Encoder-Decoder structure is trained with the objective of minimizing the negative log-likelihood loss function; back-propagation is performed according to the error of the loss function with respect to the process weight parameters, which are updated with a gradient descent algorithm; the model weight parameters at which the trajectory prediction model generalizes best are stored, completing the model training.
CN202011560297.9A 2020-12-25 2020-12-25 Unmanned vehicle track prediction method based on local attention mechanism Active CN112465273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011560297.9A CN112465273B (en) 2020-12-25 2020-12-25 Unmanned vehicle track prediction method based on local attention mechanism


Publications (2)

Publication Number Publication Date
CN112465273A CN112465273A (en) 2021-03-09
CN112465273B true CN112465273B (en) 2022-05-31

Family

ID=74803884


Country Status (1)

Country Link
CN (1) CN112465273B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949597B (en) * 2021-04-06 2022-11-04 吉林大学 Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
CN113313320B (en) * 2021-06-17 2022-05-31 湖北汽车工业学院 Vehicle track prediction method based on residual attention mechanism
CN113362367B (en) * 2021-07-26 2021-12-14 北京邮电大学 Crowd trajectory prediction method based on multi-precision interaction
CN113570595B (en) * 2021-08-12 2023-06-20 上汽大众汽车有限公司 Vehicle track prediction method and optimization method of vehicle track prediction model
CN114372116B (en) * 2021-12-30 2023-03-21 华南理工大学 Vehicle track prediction method based on LSTM and space-time attention mechanism

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109086869A (en) * 2018-07-16 2018-12-25 北京理工大学 A kind of human action prediction technique based on attention mechanism
CN110276439A (en) * 2019-05-08 2019-09-24 平安科技(深圳)有限公司 Time Series Forecasting Methods, device and storage medium based on attention mechanism

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20200324794A1 (en) * 2020-06-25 2020-10-15 Intel Corporation Technology to apply driving norms for automated vehicle behavior prediction


Non-Patent Citations (2)

Title
Kaouther Messaoud et al. Attention Based Vehicle Trajectory Prediction. IEEE Transactions on Intelligent Vehicles. 2020, 1-11. *
Liu Chuang et al. Vehicle motion trajectory prediction based on an attention mechanism. Journal of Zhejiang University (Engineering Science). 2020, (No. 06). *


Similar Documents

Publication Publication Date Title
CN112465273B (en) Unmanned vehicle track prediction method based on local attention mechanism
CN110705457B (en) Remote sensing image building change detection method
US11783594B2 (en) Method of segmenting pedestrians in roadside image by using convolutional network fusing features at different scales
EP4152204A1 (en) Lane line detection method, and related apparatus
EP3633615A1 (en) Deep learning network and average drift-based automatic vessel tracking method and system
CN110728658A (en) High-resolution remote sensing image weak target detection method based on deep learning
CN111382686B (en) Lane line detection method based on semi-supervised generation confrontation network
CN109145836B (en) Ship target video detection method based on deep learning network and Kalman filtering
CN112183635A (en) Method for realizing segmentation and identification of plant leaf lesions by multi-scale deconvolution network
CN112733800B (en) Remote sensing image road information extraction method and device based on convolutional neural network
CN112465199B (en) Airspace situation assessment system
CN111681259B (en) Vehicle tracking model building method based on Anchor mechanism-free detection network
CN113076599A (en) Multimode vehicle trajectory prediction method based on long-time and short-time memory network
CN110009648A (en) Trackside image Method of Vehicle Segmentation based on depth Fusion Features convolutional neural networks
CN115223063A (en) Unmanned aerial vehicle remote sensing wheat new variety lodging area extraction method and system based on deep learning
CN114152257A (en) Ship prediction navigation method based on attention mechanism and environment perception LSTM
CN115690153A (en) Intelligent agent track prediction method and system
CN113065431A (en) Human body violation prediction method based on hidden Markov model and recurrent neural network
CN113313320B (en) Vehicle track prediction method based on residual attention mechanism
CN116109986A (en) Vehicle track extraction method based on laser radar and video technology complementation
CN115205855A (en) Vehicle target identification method, device and equipment fusing multi-scale semantic information
CN112597996B (en) Method for detecting traffic sign significance in natural scene based on task driving
CN114241314A (en) Remote sensing image building change detection model and algorithm based on CenterNet
CN113989287A (en) Urban road remote sensing image segmentation method and device, electronic equipment and storage medium
CN115661786A (en) Small rail obstacle target detection method for area pre-search

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant