CN112257850B - Vehicle trajectory prediction method based on a generative adversarial network - Google Patents

Vehicle trajectory prediction method based on a generative adversarial network

Info

Publication number
CN112257850B
CN112257850B (application CN202011157093.0A)
Authority
CN
China
Prior art keywords
network, track, discrimination, predicted, inputting
Prior art date
Legal status: Active
Application number
CN202011157093.0A
Other languages
Chinese (zh)
Other versions
CN112257850A (en)
Inventor
周毅
周丹阳
胡姝婷
李伟
张延宇
杜晓玉
Current Assignee
Henan University
Original Assignee
Henan University
Priority date
Filing date
Publication date
Application filed by Henan University
Priority to CN202011157093.0A
Publication of CN112257850A
Application granted
Publication of CN112257850B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/044 - Recurrent networks, e.g. Hopfield networks
    • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 - Traffic data processing
    • G08G 1/0137 - Measuring and analyzing of parameters relative to traffic conditions for specific applications

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The method first extracts the historical trajectory data of a target vehicle and of the vehicles surrounding it, and constructs a generative adversarial network model. The extracted trajectory data are input into the generator network in temporal order to obtain predicted trajectory values; the predicted and real trajectory values are alternately input into the discriminator network, which discriminates the difference between them to obtain a discrimination probability. The discrimination probability is input into the generator and discriminator networks to obtain the loss values of the two networks, and the parameters of both networks are updated by backpropagation until the discrimination probability output by the discriminator approaches 1, at which point the generative adversarial network model is fully trained. An attention mechanism is added to the generator model: at each decoding step the decoder takes the encoder's hidden-state information into account and computes its correlation with the hidden state at the current prediction step, obtaining the input encoding most relevant to that hidden state and improving the accuracy of the predicted trajectory.

Description

Vehicle trajectory prediction method based on a generative adversarial network
Technical Field
The invention belongs to the technical field of predicting the trajectories of vehicles surrounding an unmanned vehicle, and in particular relates to a vehicle trajectory prediction method based on a generative adversarial network.
Background
The unmanned vehicle is a comprehensive intelligent system integrating functions such as environment perception, planning and decision-making, and multi-level driver assistance. A complete unmanned vehicle senses the surrounding environment through its sensors, while a decision system takes the place of the human brain in analyzing the current situation and making reasonable decisions, controlling the corresponding actuation units of the vehicle. With the development of computer technology, unmanned vehicles are being applied in more and more fields; they have the advantages of a wide sensing range and immunity to fatigue, can greatly reduce the occurrence of traffic accidents, and can improve urban traffic efficiency.
Although unmanned vehicles have achieved great success in many areas, safety remains a key issue under study. Because an unmanned vehicle inevitably interacts with surrounding participants while driving, a vehicle without prediction capability can only drive over-cautiously on a highway, and many of the traffic accidents caused by unmanned vehicles in recent years stem from a wrong understanding of the surrounding environment. Accurately predicting the trajectories of surrounding vehicles is an important prerequisite for the safer driving of unmanned vehicles and for high-quality decision-making and planning. By using previous observations of surrounding vehicles to predict their future travel trajectories, the predicted trajectories can be used to plan the motion of the unmanned vehicle so as to avoid collisions with surrounding vehicles.
Traditional trajectory prediction models include hidden Markov models, Bayesian models and Kalman filters, but these models impose many constraints and parameters, cannot make full use of historical trajectory information, and fit poorly. Neural networks have also been used for trajectory prediction, but owing to limitations of the network structure they cannot predict over longer sequences, and a predicted position at a single time step is of little use for the subsequent decision-making and motion planning of an unmanned vehicle. To overcome these defects of both the traditional methods and the existing neural-network methods in vehicle trajectory prediction, it is necessary to consider both the time series of the target vehicle and the interaction of the surrounding vehicles with the target vehicle.
Disclosure of Invention
The invention aims to provide a vehicle trajectory prediction method based on a generative adversarial network, which improves the accuracy of vehicle trajectory prediction.
The technical scheme of the invention for solving this technical problem is as follows: a vehicle trajectory prediction method based on a generative adversarial network, comprising the following steps:
S1: Preprocess the data in the NGSIM data set;
S2: Add an attention mechanism on top of an LSTM encoder-decoder and use the whole as the generator network;
S3: Construct a discriminator network based on an MLP neural network, and input predicted and real trajectories to obtain a discrimination probability;
S4: Construct a generative adversarial network model from the generator network and the discriminator network, and train the generative adversarial network model;
S5: Save the trained model, select a test data set from the preprocessed data set, input the test data into the trained generative adversarial network model, and predict the vehicle's future trajectory coordinates.
The step S1 specifically includes:
S1.1: Process the NGSIM data set with a smoothing filter to eliminate abnormal data;
S1.2: Select the trajectory data on lanes 2, 3 and 4, and select the lateral position, longitudinal position and speed in the vehicle data as trajectory features.
S1.3: extracting a target vehicle at t 1 ~t 1 The track sequence in + n time is
Figure BDA0002743110690000021
Wherein,
Figure BDA0002743110690000022
target vehicle and surrounding vehicles of the target vehicle at current t 1 Sets of trajectory characteristics of time of day, i.e.
Figure BDA0002743110690000023
Figure BDA0002743110690000024
Indicating that the target vehicle is at t 1 The lateral position of the moment of time,
Figure BDA0002743110690000025
indicates that the target vehicle is at t 1 The longitudinal position of the moment of time,
Figure BDA0002743110690000026
indicating that the target vehicle is at t 1 The speed of the moment in time is,
Figure BDA0002743110690000027
indicating that the target vehicle and the surrounding vehicle are at t 1 The difference in the lateral distance at the time of day,
Figure BDA0002743110690000028
for the target vehicle and the surrounding vehicles at t 1 The difference in the longitudinal distance at the time of day,
Figure BDA0002743110690000029
for surrounding vehicles at t relative to the target vehicle 1 The speed of the moment.
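For illustration, a minimal sketch of assembling such feature vectors $x_t$ from smoothed trajectory records follows; the dictionary keys, the moving-average filter standing in for the patent's unspecified smoothing filter, and the array shapes are assumptions, not details fixed by the patent.

```python
import numpy as np

def smooth(series: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average smoother standing in for the patent's smoothing filter."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

def build_feature_sequence(target: dict, neighbor: dict) -> np.ndarray:
    """Build x_t = (p_x, p_y, v, dp_x, dp_y, dv) for each time step.

    `target` and `neighbor` are hypothetical dicts of equal-length arrays with
    keys 'x' (lateral position), 'y' (longitudinal position) and 'v' (speed).
    """
    px, py, v = smooth(target["x"]), smooth(target["y"]), smooth(target["v"])
    dpx = smooth(neighbor["x"]) - px   # lateral distance difference
    dpy = smooth(neighbor["y"]) - py   # longitudinal distance difference
    dv = smooth(neighbor["v"]) - v     # speed relative to the target vehicle
    return np.stack([px, py, v, dpx, dpy, dv], axis=1)   # shape (n + 1, 6)
```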
The step S2 specifically comprises the following steps:
S2.1: Input the trajectory sequence extracted in S1.3 into the fully connected layer to obtain the feature-space sequence received by the network, $L = \{L_{t_1}, L_{t_1+1}, \dots, L_{t_1+n}\}$.
S2.2: Input the feature-space sequence $L$ into the LSTM encoder and encode it to obtain the historical hidden state corresponding to each time step, $h_{t_1}$; the historical hidden states obtained by the encoder are extracted and recorded as the historical hidden-state vector set $H = \{h_{t_1}, h_{t_1+1}, \dots, h_{t_1+n}\}$.
S2.3: Add an attention mechanism before the decoder decodes. Given the hidden state of the decoder at the previous step, $h'_{t_2}$ (it is important to point out that in the present invention the subscript $t_2$ of $h'_{t_2}$ and the subscript $t_1+1$ of $h_{t_1+1}$ denote different concepts: $t_1$ indexes the time steps in the encoder, while $t_2$ indexes the time steps in the decoder), the similarity between the decoder's previous hidden state and each historical-trajectory hidden-state vector can be obtained as $e_{t'} = \mathrm{score}\left(h'_{t_2}, h_{t'}\right)$.
S2.4: Normalize the obtained $e_{t'}$, i.e. $s'_{t'} = \exp\left(e_{t'}\right) / \sum_{k} \exp\left(e_{k}\right)$.
S2.5: Take the weighted sum of the normalized $s'_{t'}$ values and the historical-trajectory hidden states to obtain the decoder's input encoding at time $t_2+1$, $c_{t_2+1} = \sum_{t'} s'_{t'} h_{t'}$.
S2.6: Pass $c_{t_2+1}$ and the vector $h'_{t_2}$ through the decoder to output the value at the predicted time $t_2+1$, i.e. $h'_{t_2+1} = \mathrm{LSTM}\left(c_{t_2+1}, h'_{t_2}; w\right)$, where $w$ is the weight of the decoder. $h'_{t_2+1}$ is the hidden state of the generator network at time $t_2+1$, and the decoder's hidden-layer state $h'_{t_2+1}$ is mapped to obtain the trajectory at the current prediction step, $\hat{y}_{t_2+1}$.
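As a concrete reading of steps S2.3 to S2.5, the sketch below uses dot-product similarity for the score function; this choice is an assumption, since the patent does not fix the form of $e_{t'}$.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(dec_hidden: np.ndarray, enc_hiddens: np.ndarray) -> np.ndarray:
    """Compute the decoder input encoding c_{t2+1} (step S2.5).

    dec_hidden:  shape (d,)     -- the decoder's previous hidden state h'_{t2}
    enc_hiddens: shape (n+1, d) -- rows are the encoder hidden states h_{t'}
    """
    scores = enc_hiddens @ dec_hidden   # e_{t'}: dot-product similarity (assumed form)
    weights = softmax(scores)           # s'_{t'}: normalized attention weights (S2.4)
    return weights @ enc_hiddens        # c_{t2+1}: weighted sum of hidden states
```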
The step S3 specifically comprises the following steps:
Alternately input the J predicted trajectories and the real trajectories into the discriminator network, which consists of two MLP layers; the label of a real trajectory is recorded as 1 and the label of a predicted trajectory as 0, and the discrimination probability is obtained.
The step S4 specifically comprises the following steps:
S4.1: The loss function constructed for the generator network is
$$L_G = \frac{1}{J} \sum_{j=1}^{J} \left[ \log\left(1 - D\left(\hat{Y}_j\right)\right) + \lambda \frac{1}{m} \sum_{k=1}^{m} \left\lVert \hat{y}_{j,k} - y_{j,k} \right\rVert_2 \right],$$
where $J$ is the number of input predicted trajectories, $D(\hat{Y}_j)$ is the discrimination probability of the $j$-th predicted trajectory in the discriminator network, $\lVert \hat{y}_{j,k} - y_{j,k} \rVert_2$ is the Euclidean distance between the predicted and real trajectory values, $m$ is the number of trajectory points, and $\lambda$ is the weight of the loss term.
The loss function constructed for the discriminator network is
$$L_D = -\frac{1}{J} \sum_{j=1}^{J} \left[ \log D\left(Y_j\right) + \log\left(1 - D\left(\hat{Y}_j\right)\right) \right],$$
where $D(Y_j)$ is the discrimination probability of the $j$-th real trajectory in the discriminator network.
S4.2: Fix the parameters of the generator network and train the discriminator network: alternately input real and predicted trajectories into the discriminator to obtain discrimination probabilities, input the discrimination probabilities into the discriminator and generator networks to compute the loss values, and update the parameters of the discriminator network with the Adam algorithm.
S4.3: Fix the parameters of the discriminator network and train the generator network: alternately input real and predicted trajectories into the discriminator to obtain discrimination probabilities, input the discrimination probabilities into the discriminator and generator networks to compute the loss value, and adjust the parameters of the generator network with the Adam algorithm according to this loss value.
S4.4: When the discrimination probability computed by the discriminator for a predicted trajectory approaches 1, the discriminator can no longer distinguish the predicted trajectory from the real trajectory; that is, training of the generator and discriminator networks is complete.
The invention has the following beneficial effects. The invention first extracts the historical trajectory data of the target vehicle (lateral position, longitudinal position, speed) and the historical trajectory data of the vehicles surrounding the target vehicle, namely the vehicles in front of, to the left of, and to the right of it (lateral distance relative to the target vehicle, longitudinal distance relative to the target vehicle, speed relative to the target vehicle). It then constructs a generative adversarial network model: the extracted trajectory data are input into the generator network in temporal order to obtain predicted trajectory values; the predicted and real trajectory values are alternately input into the discriminator network, which discriminates the difference between them and outputs a discrimination probability; the discrimination probability is input into the generator and discriminator networks to obtain the loss values of the two networks, and the parameters of both networks are updated by backpropagation until the discrimination probability output by the discriminator approaches 1, indicating that the generative adversarial network model is fully trained. The invention adds an attention mechanism to the generator model, solving the problem that a traditional decoder predicts from a single fixed intermediate variable and therefore loses important information on long sequences. At each decoding step the decoder considers the encoder's hidden-state information, computes its correlation with the hidden state at the current prediction step, and obtains the input encoding most relevant to that hidden state, improving the accuracy of the predicted trajectory.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a model architecture diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. All other embodiments obtained by a person skilled in the art, based on the embodiments of the present invention and without creative effort, fall within the protection scope of the present invention.
As shown in FIG. 1, the present invention comprises the following steps:
S1: Preprocess the data in the NGSIM data set;
The step S1 specifically includes:
S1.1: Process the NGSIM data set with a smoothing filter to eliminate abnormal data;
S1.2: Select the trajectory data on lanes 2, 3 and 4, and select the lateral position, longitudinal position and speed in the vehicle data as trajectory features.
In step S1.2, it may be set that the unmanned vehicle travels in lane 3, the target vehicle is the vehicle in front of and closest to the unmanned vehicle, and the surrounding vehicles of the target vehicle are the vehicle ahead of it in the same lane and the left and right vehicles in the lanes to its left and right. The left vehicle is the vehicle in lane 2 closest to the target vehicle, and the right vehicle is the vehicle in lane 4 closest to the target vehicle.
S1.3: extracting the target vehicle at t 1 ~t 1 Track sequence in + n time is
Figure BDA0002743110690000051
Wherein,
Figure BDA0002743110690000052
at the current time t for the target vehicle and surrounding vehicles of the target vehicle 1 Sets of trajectory characteristics of time of day, i.e.
Figure BDA0002743110690000053
Figure BDA0002743110690000054
Indicates that the target vehicle is at t 1 The lateral position of the moment of time,
Figure BDA0002743110690000055
indicates that the target vehicle is at t 1 The longitudinal position of the moment of time,
Figure BDA0002743110690000056
indicates that the target vehicle is at t 1 The speed of the moment in time is,
Figure BDA0002743110690000057
indicates that the target vehicle and the surrounding vehicles are at t 1 The difference in the lateral distance at the time of day,
Figure BDA0002743110690000058
for the target vehicle and the surrounding vehicles at t 1 The difference in the longitudinal distance at the time of day,
Figure BDA0002743110690000059
for surrounding vehicles at t relative to the target vehicle 1 The speed of the moment.
S2: Add an attention mechanism on top of an LSTM encoder-decoder and use the whole as the generator network;
The step S2 specifically comprises the following steps:
S2.1: Input the trajectory sequence extracted in S1.3 into the fully connected layer to obtain the feature-space sequence received by the network, $L = \{L_{t_1}, L_{t_1+1}, \dots, L_{t_1+n}\}$.
In step S2.1, the generator network comprises a fully connected layer, an LSTM encoder and decoder, and an attention mechanism. The sequence $X$ is first input into the fully connected layer, and the output of the fully connected layer is the fixed-length feature-space sequence $L$ received by the network.
S2.2: Input the feature-space sequence $L$ into the LSTM encoder and encode it to obtain the historical hidden state corresponding to each time step, $h_{t_1}$; the historical hidden states obtained by the encoder are extracted and recorded as the historical hidden-state vector set $H = \{h_{t_1}, h_{t_1+1}, \dots, h_{t_1+n}\}$.
In step S2.2, the feature-space sequence $L$ is input into the encoder for encoding; the encoder's initial hidden state and context vector are initialized, and the trajectory sequence point $L_t$ output by the fully connected layer at the corresponding step is input into each LSTM unit. Each LSTM network module comprises a forget gate, an input gate and an output gate. The first part of the module corresponds to the forget gate, and for each trajectory point the update formula of the forget gate is
$$f_t = \sigma\left(w_{11} L_t + w_{12} h_{t-1} + b_f\right),$$
where $\sigma$ is the sigmoid function $\sigma(x) = \frac{1}{1 + e^{-x}}$, $f_t$ is the output of the forget gate, $w_{11}$ and $w_{12}$ are the weight vectors of the forget gate, $L_t$ is the input value at the current time, $h_{t-1}$ is the hidden state at the previous time, and $b_f$ is the bias of the forget gate.
The forget gate maps the input information to a value in $(0, 1)$. The middle part of the module is the input gate, whose update formula is $i_t = \sigma\left(w_{21} L_t + w_{22} h_{t-1} + b_i\right)$, where $w_{21}$ and $w_{22}$ are the weight vectors of the input gate and $b_i$ is the bias of the input gate. The cell-state update equation is
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh\left(w_{31} L_t + w_{32} h_{t-1} + b_c\right),$$
where $\tanh$ is the activation function of the input gate, $w_{31}$ and $w_{32}$ are the weight vectors of the tanh layer, and $b_c$ is a bias.
The right part of the module is the output gate, whose update formula is $o_t = \sigma\left(w_{41} L_t + w_{42} h_{t-1} + b_0\right)$, where $w_{41}$ and $w_{42}$ are the weight vectors of the output gate and $b_0$ is the bias of the output gate. The hidden-state update formula is
$$h_t = o_t \odot \tanh\left(c_t\right).$$
The hidden state $h_t$ and the cell state $c_t$ output by each LSTM unit are passed to the next LSTM unit, and all historical hidden states in the encoder are extracted as the set $H = \{h_{t_1}, h_{t_1+1}, \dots, h_{t_1+n}\}$.
S2.3: Add an attention mechanism before the decoder decodes. Given the hidden state of the decoder at the previous step, $h'_{t_2}$ (it is important to point out that in the present invention the subscript $t_2$ of $h'_{t_2}$ and the subscript $t_1+1$ of $h_{t_1+1}$ denote different concepts: $t_1$ indexes the time steps in the encoder, while $t_2$ indexes the time steps in the decoder), the similarity between the decoder's previous hidden state and each historical-trajectory hidden-state vector can be obtained as $e_{t'} = \mathrm{score}\left(h'_{t_2}, h_{t'}\right)$.
S2.4: Normalize the obtained $e_{t'}$, i.e. $s'_{t'} = \exp\left(e_{t'}\right) / \sum_{k} \exp\left(e_{k}\right)$.
S2.5: Take the weighted sum of the normalized $s'_{t'}$ values and the historical-trajectory hidden states to obtain the decoder's input encoding at time $t_2+1$, $c_{t_2+1} = \sum_{t'} s'_{t'} h_{t'}$.
S2.6: Pass $c_{t_2+1}$ and the vector $h'_{t_2}$ through the decoder to output the value at the predicted time $t_2+1$, i.e. $h'_{t_2+1} = \mathrm{LSTM}\left(c_{t_2+1}, h'_{t_2}; w\right)$, where $w$ is the weight of the decoder. $h'_{t_2+1}$ is the hidden state of the generator network at time $t_2+1$, and the decoder's hidden-layer state $h'_{t_2+1}$ is mapped to obtain the trajectory at the current prediction step, $\hat{y}_{t_2+1}$.
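A minimal NumPy sketch of one encoder LSTM step, following the gate equations of step S2.2 above, is given here; the weight and bias container names and shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(L_t, h_prev, c_prev, W, b):
    """One encoder LSTM step over a trajectory point L_t.

    W is a dict of weight matrices {'f1','f2','i1','i2','c1','c2','o1','o2'}
    and b a dict of bias vectors {'f','i','c','o'}; the names are hypothetical.
    """
    f_t = sigmoid(W["f1"] @ L_t + W["f2"] @ h_prev + b["f"])    # forget gate
    i_t = sigmoid(W["i1"] @ L_t + W["i2"] @ h_prev + b["i"])    # input gate
    c_hat = np.tanh(W["c1"] @ L_t + W["c2"] @ h_prev + b["c"])  # candidate cell state
    c_t = f_t * c_prev + i_t * c_hat                            # cell-state update
    o_t = sigmoid(W["o1"] @ L_t + W["o2"] @ h_prev + b["o"])    # output gate
    h_t = o_t * np.tanh(c_t)                                    # hidden-state update
    return h_t, c_t
```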
S3: Construct a discriminator network based on an MLP neural network, and input predicted and real trajectories to obtain a discrimination probability;
The step S3 specifically includes:
Alternately input the J predicted trajectories and the real trajectories into the discriminator network, which consists of two MLP layers; the label of a real trajectory is recorded as 1 and the label of a predicted trajectory as 0, and the discrimination probability is obtained.
The detailed process of step S3 is as follows:
The J predicted trajectories and the real trajectories are alternately input into the discriminator network, which consists of two MLP layers; the MLP reduces each predicted or real trajectory from multiple dimensions to one dimension. The label of a real trajectory is recorded as 1 and the label of a predicted trajectory as 0, and the real and predicted trajectories are alternately input into the discriminator to obtain the discrimination probability. The formula constructing the discrimination probability is
$$o^{(1)} = \sigma\left(w_{m1} Y + b_{m1}\right),$$
where $w_{m1}$ is the weight of the first MLP layer and $b_{m1}$ is the bias of that layer. The obtained $o^{(1)}$ is input into the second MLP layer to obtain the final discrimination probability of the trajectory, i.e.
$$D_i = \sigma\left(w_{m2} o^{(1)} + b_{m2}\right),$$
where $w_{m2}$ is the weight of that MLP layer, $b_{m2}$ is the bias of that layer, $D_i$ is the discrimination probability obtained for the trajectory, and $i$ is the label of the trajectory.
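A sketch of such a two-layer MLP discriminator in PyTorch is shown below; the hidden-layer width is an assumption, while the two-layer structure, the flattening of the trajectory to one dimension, and the sigmoid outputs follow the description above.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Two-layer MLP: flattens a trajectory and outputs a probability in (0, 1)."""
    def __init__(self, traj_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.layer1 = nn.Linear(traj_dim, hidden_dim)  # w_m1, b_m1
        self.layer2 = nn.Linear(hidden_dim, 1)         # w_m2, b_m2
        self.act = nn.Sigmoid()

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, traj_dim), a flattened predicted or real trajectory
        h = self.act(self.layer1(traj))
        return self.act(self.layer2(h))  # discrimination probability
```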
S4: Construct a generative adversarial network model from the generator network and the discriminator network, and train the generative adversarial network model;
The step S4 specifically comprises the following steps:
S4.1: The loss function constructed for the generator network is
$$L_G = \frac{1}{J} \sum_{j=1}^{J} \left[ \log\left(1 - D\left(\hat{Y}_j\right)\right) + \lambda \frac{1}{m} \sum_{k=1}^{m} \left\lVert \hat{y}_{j,k} - y_{j,k} \right\rVert_2 \right],$$
where $J$ is the number of input predicted trajectories, $D(\hat{Y}_j)$ is the discrimination probability of the $j$-th predicted trajectory in the discriminator network, $\lVert \hat{y}_{j,k} - y_{j,k} \rVert_2$ is the Euclidean distance between the predicted and real trajectory values, $m$ is the number of trajectory points, and $\lambda$ is the weight of the loss term.
The loss function constructed for the discriminator network is
$$L_D = -\frac{1}{J} \sum_{j=1}^{J} \left[ \log D\left(Y_j\right) + \log\left(1 - D\left(\hat{Y}_j\right)\right) \right],$$
where $D(Y_j)$ is the discrimination probability of the $j$-th real trajectory in the discriminator network.
S4.2: Fix the parameters of the generator network and train the discriminator network: alternately input real and predicted trajectories into the discriminator to obtain discrimination probabilities, input the discrimination probabilities into the discriminator and generator networks to compute the loss values, and update the parameters of the discriminator network with the Adam algorithm.
S4.3: Fix the parameters of the discriminator network and train the generator network: alternately input real and predicted trajectories into the discriminator to obtain discrimination probabilities, input the discrimination probabilities into the discriminator and generator networks to compute the loss value, and adjust the parameters of the generator network with the Adam algorithm according to this loss value.
S4.4: When the discrimination probability computed by the discriminator for a predicted trajectory approaches 1, the discriminator can no longer distinguish the predicted trajectory from the real trajectory; that is, training of the generator and discriminator networks is complete.
S5: Save the trained model, select a test data set from the preprocessed data set, input the test data into the trained generative adversarial network model, and predict the vehicle's future trajectory coordinates.
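At prediction time only the trained generator is needed; a hypothetical usage sketch follows (the file name, tensor shapes and the use of torch.load for the saved model are assumptions).

```python
import torch

# Hypothetical inference: load the generator saved after training (step S5).
gen = torch.load("generator.pt")   # path and serialization format are assumptions
gen.eval()
with torch.no_grad():
    hist = torch.randn(1, 16, 6)   # one history window: 16 steps x 6 features (assumed)
    future = gen(hist)             # predicted future trajectory coordinates
print(future.shape)
```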
The present invention first extracts the historical trajectory data of the target vehicle (lateral position, longitudinal position, speed) and the historical trajectory data of the vehicles surrounding the target vehicle, namely the vehicles in front of, to the left of, and to the right of it (lateral distance relative to the target vehicle, longitudinal distance relative to the target vehicle, speed relative to the target vehicle). It then constructs a generative adversarial network model: the extracted trajectory data are input into the generator network in temporal order to obtain predicted trajectory values; the predicted and real trajectory values are alternately input into the discriminator network, which discriminates the difference between them and outputs a discrimination probability; the discrimination probability is input into the generator and discriminator networks to obtain the loss values of the two networks, and the parameters of both networks are updated by backpropagation until the discrimination probability output by the discriminator approaches 1, indicating that the generative adversarial network model is fully trained. The invention adds an attention mechanism to the generator model, solving the problem that a traditional decoder predicts from a single fixed intermediate variable and therefore loses important information on long sequences. At each decoding step the decoder considers the encoder's hidden-state information, computes its correlation with the hidden state at the current prediction step, and obtains the input encoding most relevant to that hidden state, improving the prediction accuracy.

Claims (4)

1. A vehicle trajectory prediction method based on a generative adversarial network, characterized by comprising the following steps:
S1: preprocessing the data in the NGSIM data set; the step S1 specifically includes:
S1.1: processing the NGSIM data set with a smoothing filter to eliminate abnormal data;
S1.2: selecting the trajectory data on lanes 2, 3 and 4, and selecting the lateral position, longitudinal position and speed in the vehicle data as trajectory features;
S1.3: extracting the trajectory sequence of the target vehicle over the period $t_1 \sim t_1+n$ as $X = \{x_{t_1}, x_{t_1+1}, \dots, x_{t_1+n}\}$, where $x_{t_1}$ is the set of trajectory features of the target vehicle and of its surrounding vehicles at the current time $t_1$, i.e. $x_{t_1} = \left(p^{x}_{t_1}, p^{y}_{t_1}, v_{t_1}, \Delta p^{x}_{t_1}, \Delta p^{y}_{t_1}, \Delta v_{t_1}\right)$, in which $p^{x}_{t_1}$ denotes the lateral position of the target vehicle at time $t_1$, $p^{y}_{t_1}$ its longitudinal position at time $t_1$, $v_{t_1}$ its speed at time $t_1$, $\Delta p^{x}_{t_1}$ the lateral distance difference between the target vehicle and a surrounding vehicle at time $t_1$, $\Delta p^{y}_{t_1}$ the longitudinal distance difference between the target vehicle and a surrounding vehicle at time $t_1$, and $\Delta v_{t_1}$ the speed of the surrounding vehicle relative to the target vehicle at time $t_1$;
S2: adding an attention mechanism on top of an LSTM encoder-decoder and using the whole as the generator network;
S3: constructing a discriminator network based on an MLP neural network, and inputting predicted and real trajectories to obtain a discrimination probability;
S4: constructing a generative adversarial network model from the generator network and the discriminator network, and training the generative adversarial network model;
S5: saving the trained model, selecting a test data set from the preprocessed data set, inputting the test data into the trained generative adversarial network model, and predicting the future trajectory coordinates of the vehicle.
2. The vehicle trajectory prediction method based on a generative adversarial network according to claim 1, wherein the step S2 is specifically as follows:
S2.1: inputting the trajectory sequence extracted in S1.3 into the fully connected layer to obtain the feature-space sequence received by the network, $L = \{L_{t_1}, L_{t_1+1}, \dots, L_{t_1+n}\}$;
S2.2: inputting the feature-space sequence $L$ into the LSTM encoder and encoding it to obtain the historical hidden state corresponding to each time step, $h_{t_1}$, the historical hidden states obtained by the encoder being extracted and recorded as the historical hidden-state vector set $H = \{h_{t_1}, h_{t_1+1}, \dots, h_{t_1+n}\}$;
S2.3: adding an attention mechanism before the decoder decodes; given the hidden state of the decoder at the previous step, $h'_{t_2}$, the similarity between the previous hidden state and each historical-trajectory hidden-state vector can be obtained as $e_{t'} = \mathrm{score}\left(h'_{t_2}, h_{t'}\right)$;
S2.4: normalizing the obtained $e_{t'}$, i.e. $s'_{t'} = \exp\left(e_{t'}\right) / \sum_{k} \exp\left(e_{k}\right)$;
S2.5: taking the weighted sum of the normalized $s'_{t'}$ values and the historical-trajectory hidden states obtained by the encoder to obtain the input encoding at time $t_2+1$, $c_{t_2+1} = \sum_{t'} s'_{t'} h_{t'}$;
S2.6: passing $c_{t_2+1}$ and the vector $h'_{t_2}$ through the decoder to output the hidden-state value at the predicted time $t_2+1$, i.e. $h'_{t_2+1} = \mathrm{LSTM}\left(c_{t_2+1}, h'_{t_2}; w\right)$, wherein $w$ is the weight of the decoder; $h'_{t_2+1}$ is the hidden state of the generator network at time $t_2+1$, and the decoder's hidden-layer state $h'_{t_2+1}$ is mapped to obtain the trajectory at the current prediction step, $\hat{y}_{t_2+1}$.
3. The vehicle trajectory prediction method based on a generative adversarial network according to claim 2, wherein the step S3 is specifically as follows:
alternately inputting the J predicted trajectories and the real trajectories into the discriminator network, which consists of two MLP layers, the label of a real trajectory being recorded as 1 and the label of a predicted trajectory as 0, to obtain the discrimination probability.
4. The vehicle trajectory prediction method based on a generative adversarial network according to claim 3, wherein the step S4 is specifically as follows:
S4.1: the loss function constructed for the generator network is
$$L_G = \frac{1}{J} \sum_{j=1}^{J} \left[ \log\left(1 - D\left(\hat{Y}_j\right)\right) + \lambda \frac{1}{m} \sum_{k=1}^{m} \left\lVert \hat{y}_{j,k} - y_{j,k} \right\rVert_2 \right],$$
wherein $J$ is the number of input predicted trajectories, $D(\hat{Y}_j)$ is the discrimination probability of the $j$-th predicted trajectory in the discriminator network, $\lVert \hat{y}_{j,k} - y_{j,k} \rVert_2$ is the Euclidean distance between the predicted and real trajectory values, $m$ is the number of trajectory points, and $\lambda$ is the weight of the loss term;
the loss function constructed for the discriminator network is
$$L_D = -\frac{1}{J} \sum_{j=1}^{J} \left[ \log D\left(Y_j\right) + \log\left(1 - D\left(\hat{Y}_j\right)\right) \right],$$
wherein $D(Y_j)$ is the discrimination probability of the $j$-th real trajectory obtained in the discriminator network;
S4.2: fixing the parameters of the generator network and training the discriminator network: alternately inputting real and predicted trajectories into the discriminator to obtain discrimination probabilities, inputting the discrimination probabilities into the discriminator and generator networks to compute the loss values, and updating the parameters of the discriminator network with the Adam algorithm;
S4.3: fixing the parameters of the discriminator network and training the generator network: alternately inputting real and predicted trajectories into the discriminator to obtain discrimination probabilities, inputting the discrimination probabilities into the discriminator and generator networks to compute the loss value, and adjusting the parameters of the generator network with the Adam algorithm according to the loss value;
S4.4: when the discrimination probability computed by the discriminator for a predicted trajectory approaches 1, the discriminator can no longer distinguish the predicted trajectory from the real trajectory, that is, training of the generator and discriminator networks is complete.
CN202011157093.0A 2020-10-26 2020-10-26 Vehicle trajectory prediction method based on a generative adversarial network Active CN112257850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011157093.0A CN112257850B (en) 2020-10-26 2020-10-26 Vehicle trajectory prediction method based on a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011157093.0A CN112257850B (en) 2020-10-26 2020-10-26 Vehicle trajectory prediction method based on a generative adversarial network

Publications (2)

Publication Number Publication Date
CN112257850A CN112257850A (en) 2021-01-22
CN112257850B (en) 2022-10-28

Family

ID=74261556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011157093.0A Active CN112257850B (en) 2020-10-26 2020-10-26 Vehicle trajectory prediction method based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN112257850B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050640B (en) * 2021-03-18 2022-05-31 北京航空航天大学 Industrial robot path planning method and system based on generation of countermeasure network
CN112949597B (en) * 2021-04-06 2022-11-04 吉林大学 Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
CN113076599A (en) * 2021-04-15 2021-07-06 河南大学 Multimode vehicle trajectory prediction method based on long-time and short-time memory network
CN113313941B (en) * 2021-05-25 2022-06-24 北京航空航天大学 Vehicle track prediction method based on memory network and encoder-decoder model
CN113435356B (en) * 2021-06-30 2023-02-28 吉林大学 Track prediction method for overcoming observation noise and perception uncertainty
CN113779892B (en) * 2021-09-27 2022-09-02 中国人民解放军国防科技大学 Method for predicting wind speed and wind direction
CN113989326B (en) * 2021-10-25 2023-08-25 电子科技大学 Attention mechanism-based target track prediction method
CN114065870A (en) * 2021-11-24 2022-02-18 中国科学技术大学 Vehicle track generation method and device
CN114279061B (en) * 2021-11-26 2023-07-14 国网北京市电力公司 Method and device for controlling air conditioner and electronic equipment
CN114348019B (en) * 2021-12-20 2023-11-07 清华大学 Vehicle track prediction method, device, computer equipment and storage medium
CN114549930B (en) * 2022-02-21 2023-01-10 合肥工业大学 Rapid road short-time vehicle head interval prediction method based on trajectory data
CN115170607A (en) * 2022-06-17 2022-10-11 中国科学院自动化研究所 Travel track generation method and device, electronic equipment and storage medium
CN114815904B (en) * 2022-06-29 2022-09-27 中国科学院自动化研究所 Attention network-based unmanned cluster countermeasure method and device and unmanned equipment
CN115547040A (en) * 2022-09-19 2022-12-30 河南大学 Driving behavior prediction method based on inner neural network under safety potential field
CN115759383B (en) * 2022-11-11 2023-09-15 桂林电子科技大学 Destination prediction method and system with branch network and electronic equipment
CN118171781B (en) * 2024-05-13 2024-08-13 东南大学 Expressway motor vehicle accident intelligent detection method and system based on real-time track prediction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781838A (en) * 2019-10-28 2020-02-11 大连海事大学 Multi-modal trajectory prediction method for pedestrian in complex scene
EP3705367A1 (en) * 2019-03-05 2020-09-09 Bayerische Motoren Werke Aktiengesellschaft Training a generator unit and a discriminator unit for collision-aware trajectory prediction
WO2020205629A1 (en) * 2019-03-29 2020-10-08 Intel Corporation Autonomous vehicle system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3705367A1 (en) * 2019-03-05 2020-09-09 Bayerische Motoren Werke Aktiengesellschaft Training a generator unit and a discriminator unit for collision-aware trajectory prediction
WO2020205629A1 (en) * 2019-03-29 2020-10-08 Intel Corporation Autonomous vehicle system
CN110781838A (en) * 2019-10-28 2020-02-11 大连海事大学 Multi-modal trajectory prediction method for pedestrian in complex scene

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks; Agrim Gupta et al.; 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018-12-17; pp. 2255-2264 *
SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints; Amir Sadeghian et al.; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019; pp. 1349-1358 *
Research on pedestrian trajectory prediction based on GAN and an attention mechanism; Ouyang Jun et al.; Laser & Optoelectronics Progress; 2019-12-17; pp. 1-20 *
Vehicle motion trajectory prediction based on an attention mechanism; Liu Chuang et al.; Journal of Zhejiang University (Engineering Science); June 2020; vol. 54, no. 6, pp. 1156-1163 *

Also Published As

Publication number Publication date
CN112257850A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN112257850B (en) Vehicle trajectory prediction method based on a generative adversarial network
CN112347567B (en) Vehicle intention and track prediction method
Cai et al. Environment-attention network for vehicle trajectory prediction
CN103605362B (en) Based on motor pattern study and the method for detecting abnormality of track of vehicle multiple features
CN114372116B (en) Vehicle track prediction method based on LSTM and space-time attention mechanism
CN111930110A (en) Intent track prediction method for generating confrontation network by combining society
CN112435503B (en) Intelligent automobile active collision avoidance method for identifying intention of high-risk pedestrians
CN112949597B (en) Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism
CN114399743B (en) Method for generating future track of obstacle
CN114202120A (en) Urban traffic travel time prediction method aiming at multi-source heterogeneous data
CN113554060B (en) LSTM neural network track prediction method integrating DTW
CN114368387B (en) Attention mechanism-based driver intention recognition and vehicle track prediction method
CN117141518A (en) Vehicle track prediction method based on intention perception spatiotemporal attention network
CN113658423B (en) Vehicle track abnormality detection method based on circulation gating unit
Zhu et al. Transfollower: Long-sequence car-following trajectory prediction through transformer
CN115376103A (en) Pedestrian trajectory prediction method based on space-time diagram attention network
CN116595871A (en) Vehicle track prediction modeling method and device based on dynamic space-time interaction diagram
CN112927507B (en) Traffic flow prediction method based on LSTM-Attention
CN117273201A (en) Vehicle future track prediction method based on deep-LSTM coding
CN116923450A (en) Surrounding-vehicle trajectory prediction method and device based on an attention mechanism and target-point information
CN116740664A (en) Track prediction method and device
CN112651577B (en) Tunnel deformation prediction method based on fusion spatio-temporal data
CN110489671B (en) Road charging pile recommendation method based on encoder-decoder model
Xu et al. Vehicle trajectory prediction considering multi-feature independent encoding
CN114565132B (en) Pedestrian track prediction method based on end point prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant