CN112257850A - Vehicle track prediction method based on generation countermeasure network - Google Patents
- Publication number
- CN112257850A (publication) · CN202011157093.0A (application)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06N3/044 — Recurrent networks, e.g. Hopfield networks (G Physics; G06 Computing; G06N Computing arrangements based on specific computational models; G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
- G06N3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs (same hierarchy as above)
- G08G1/0125 — Traffic data processing (G Physics; G08 Signalling; G08G Traffic control systems; G08G1/01 Detecting movement of traffic to be counted or controlled; G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions)
- G08G1/0137 — Measuring and analyzing of parameters relative to traffic conditions for specific applications (same hierarchy as above)
Abstract
The invention first extracts the historical trajectory data of a target vehicle and of the vehicles around it, and constructs a generative adversarial network model. The extracted trajectory data are input into the generator network in time order to obtain predicted trajectory values. The predicted and real trajectory values are alternately input into a discrimination network, which discriminates the difference between them to obtain a discrimination probability. The discrimination probability is fed into the generator network and the discrimination network to obtain the loss values of the two networks, and the parameters of both networks are updated by backpropagation until the discrimination probability output by the discrimination network approaches 1, at which point the generative adversarial network model is fully trained. An attention mechanism is added to the generator network model: at each decoding step the decoder considers the hidden-state information of the encoder and computes its correlation with the hidden state at the current prediction step, obtaining the input code most relevant to that hidden state and improving the accuracy of the predicted trajectory.
Description
Technical Field
The invention belongs to the technical field of predicting the trajectories of vehicles surrounding an unmanned vehicle, and particularly relates to a vehicle trajectory prediction method based on a generative adversarial network (GAN).
Background
An unmanned vehicle is a comprehensive intelligent system integrating environmental perception, planning and decision-making, and multi-level driver assistance. A fully unmanned vehicle senses surrounding environment information through sensors, and a decision system replaces the human brain in analyzing the current situation and making reasonable decisions that drive the corresponding actuator units of the vehicle. With the development of computer technology, unmanned vehicles are being applied in more and more fields; they offer a wide sensing range, do not suffer from fatigue, can greatly reduce the occurrence of traffic accidents, and improve the traffic efficiency of cities.
Although unmanned vehicles have achieved great success in many areas, safety remains a key research issue. Since an unmanned vehicle inevitably interacts with surrounding traffic participants while driving, a vehicle without prediction capability must drive overly cautiously on a highway, and in recent years traffic accidents involving unmanned vehicles have been caused by their misunderstanding of the surrounding environment. Accurate prediction of the trajectories of surrounding vehicles is an important prerequisite for safer unmanned driving and for high-quality decision-making and planning. By predicting future travel trajectories from previous observations of surrounding vehicles, the predicted trajectories can be used to plan the motion of the unmanned vehicle so as to avoid collisions with surrounding vehicles.
Traditional trajectory prediction models include the hidden Markov model, the Bayesian model, and the Kalman filter, but these models carry many constraint conditions and parameters, cannot make full use of historical trajectory information, and fit poorly. Neural networks have also been adopted for trajectory prediction, but owing to the limitations of their network structure they cannot predict over longer sequences, and a predicted position at a single time step is of little use for the subsequent decision-making and motion planning of the unmanned vehicle. To overcome these defects of both the traditional methods and the existing neural-network methods for vehicle trajectory prediction, it is necessary to treat the target vehicle's trajectory as a time series and to account for the interaction behavior of surrounding vehicles with the target vehicle during prediction.
Disclosure of Invention
The invention aims to provide a vehicle trajectory prediction method based on a generative adversarial network, which improves the accuracy of vehicle trajectory prediction.
The technical scheme adopted by the invention to solve this problem is as follows: a vehicle trajectory prediction method based on a generative adversarial network, comprising the following steps:
s1: preprocessing data in the NGSIM data set;
s2: adding an attention mechanism on the basis of an LSTM encoder-decoder, and taking the whole as a generator network;
s3: constructing a discrimination network based on an MLP neural network, and inputting a predicted track and a real track to obtain a discrimination probability;
S4: constructing a generative adversarial network model from the generator network and the discrimination network, and training the model;
S5: storing the trained model, selecting a test data set from the preprocessed data set, inputting the test data into the trained generative adversarial network model, and predicting the future trajectory coordinates of the vehicle.
The step S1 specifically includes:
S1.1: processing the NGSIM data set with a smoothing filter and eliminating abnormal data;
S1.2: selecting the trajectory data on lanes 2, 3, and 4, and selecting the lateral position, longitudinal position, and speed from the vehicle data as trajectory features.
S1.3: extracting the trajectory sequence of the target vehicle over the interval t_1 ~ t_1+n as X = {x_{t_1}, x_{t_1+1}, ..., x_{t_1+n}}, where x_{t_1} is the set of trajectory features of the target vehicle and of the surrounding vehicles of the target vehicle at the current time t_1, i.e. x_{t_1} = (x^o_{t_1}, y^o_{t_1}, v^o_{t_1}, Δx_{t_1}, Δy_{t_1}, Δv_{t_1}), in which x^o_{t_1} denotes the lateral position of the target vehicle at time t_1, y^o_{t_1} its longitudinal position, v^o_{t_1} its speed, Δx_{t_1} the lateral distance difference between the target vehicle and a surrounding vehicle at time t_1, Δy_{t_1} the longitudinal distance difference, and Δv_{t_1} the speed of the surrounding vehicle relative to the target vehicle at time t_1.
The step S2 specifically includes:
S2.1: inputting the trajectory sequence extracted in S1.3 into the fully connected layer to obtain the feature space sequence L = {L_{t_1}, ..., L_{t_1+n}} received by the network.
S2.2: inputting the feature space sequence L into the LSTM encoder and encoding it to obtain the historical hidden state h_{t_1} corresponding to each time step; the historical hidden states obtained by the encoder are collected as the historical hidden-state vector set H = {h_{t_1}, ..., h_{t_1+n}}.
S2.3: adding the attention mechanism before decoder decoding. Given the hidden state h'_{t_2} of the decoder at the previous time step (it is important to point out that in the present invention the subscript t_2 of h'_{t_2} and the subscript t_1+1 denote different concepts: t_1 indexes the time steps in the encoder, while t_2 indexes the time steps in the decoder), the similarity s_t between the decoder's hidden state at the previous time step and each historical hidden-state vector in H can be obtained.
S2.5: the normalized scores s'_t are used to form a weighted sum of the historical hidden states, yielding the input code c_{t_2+1} of the decoder at time t_2+1.
S2.6: passing c_{t_2+1} and the hidden-state vector h'_{t_2} through the decoder yields the predicted value at time t_2+1, i.e. h'_{t_2+1} = LSTM(c_{t_2+1}, h'_{t_2}; w), where w denotes the weights of the decoder; h'_{t_2+1} is the hidden state of the generator network at time t_2+1, and the hidden-layer state of the decoder is mapped to the trajectory point ŷ_{t_2+1} of the current prediction time.
The step S3 specifically includes:
The J predicted trajectories and the real trajectories are alternately input into the discrimination network, which consists of two MLP layers; the label of a real trajectory is set to 1 and the label of a predicted trajectory to 0, and the discrimination probability is obtained.
The step S4 specifically includes:
S4.1: a loss function is constructed for the generator network,
where J denotes the number of input predicted trajectories, the discrimination probability of the j-th predicted trajectory is that assigned by the discrimination network, the Euclidean-distance term measures the gap between the predicted and the real trajectory values, m denotes the number of trajectory points, and λ is the weight of the loss term.
A loss function is likewise constructed for the discrimination network,
where the corresponding probability denotes the discrimination probability that the discrimination network assigns to the j-th real trajectory.
S4.2: fixing parameters of a generated network, training a discrimination network, alternately inputting a real track and a predicted track into the discrimination network to obtain a discrimination probability, inputting the discrimination probability into the discrimination network and the generated network to calculate a loss value, and updating the parameters of the discrimination network by using an Adam algorithm.
S4.3: fixing the parameters of the discrimination network, training the generation network, alternately inputting the real track and the predicted track into the discrimination network to obtain the discrimination probability, inputting the discrimination probability into the discrimination network and the generation network to calculate to obtain a loss value, and adjusting the parameters of the generation network by using an Adam algorithm according to the loss value.
S4.4: when the judgment probability calculation of the judgment network for the predicted track is close to 1, the judgment network does not distinguish the predicted track and the real track, namely the generation of the network and the training of the judgment network are finished.
The invention has the following beneficial effects. The invention first extracts the historical trajectory data of a target vehicle (lateral position, longitudinal position, speed) and the historical trajectory data of its surrounding vehicles (lateral distance relative to the target vehicle, longitudinal distance relative to the target vehicle, speed relative to the target vehicle), the surrounding vehicles being the front, left, and right vehicles of the target vehicle. A generative adversarial network model is then constructed: the extracted trajectory data are input into the generator network in time order to obtain predicted trajectory values; the predicted and real trajectory values are alternately input into the discrimination network, which discriminates the difference between them to obtain a discrimination probability; the discrimination probability is input into the generator and discrimination networks to obtain the loss values of the two networks, and the parameters of both networks are updated by backpropagation until the discrimination probability output by the discrimination network approaches 1, indicating that the training of the generative adversarial network model is complete. The invention adds an attention mechanism to the generator network model, which solves the loss of important information in long-sequence prediction caused by a traditional decoder predicting from a single fixed intermediate variable. At each decoding step the decoder considers the hidden-state information of the encoder, computes its correlation with the hidden state of the current prediction step, and obtains the input code most correlated with that hidden state, improving the accuracy of the predicted trajectory.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a model architecture diagram of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the present invention comprises the steps of:
s1: preprocessing data in the NGSIM data set;
the step S1 specifically includes:
S1.1: processing the NGSIM data set with a smoothing filter and eliminating abnormal data;
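As a concrete illustration of the preprocessing in S1.1, the sketch below applies a centered moving-average smoother and a simple jump-based abnormal-point filter to a one-dimensional position series. The window size and jump threshold are illustrative assumptions; the patent does not specify the smoothing filter's parameters.

```python
def smooth(series, window=5):
    """Centered moving-average smoothing (a stand-in for the smoothing
    filter applied to raw NGSIM positions; window is an assumption)."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def drop_outliers(series, max_jump=10.0):
    """Crude abnormal-data filter: drop points that jump more than
    max_jump from the previous kept point (threshold is illustrative)."""
    kept = [series[0]]
    for v in series[1:]:
        if abs(v - kept[-1]) <= max_jump:
            kept.append(v)
    return kept
```

In practice a Savitzky-Golay or Kalman smoother is often used on NGSIM positions; the moving average above is only the simplest choice.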
S1.2: selecting the trajectory data on lanes 2, 3, and 4, and selecting the lateral position, longitudinal position, and speed from the vehicle data as trajectory features.
In step S1.2, it may be assumed that the unmanned vehicle travels in lane 3, the target vehicle is the vehicle ahead of and closest to the unmanned vehicle, and the surrounding vehicles of the target vehicle are the front vehicle in the same lane and the left and right vehicles in the adjacent lanes. The left vehicle is the vehicle in lane 2 closest to the target vehicle, and the right vehicle is the vehicle in lane 4 closest to the target vehicle.
S1.3: extracting the trajectory sequence of the target vehicle over the interval t_1 ~ t_1+n as X = {x_{t_1}, x_{t_1+1}, ..., x_{t_1+n}}, where x_{t_1} is the set of trajectory features of the target vehicle and of the surrounding vehicles of the target vehicle at the current time t_1, i.e. x_{t_1} = (x^o_{t_1}, y^o_{t_1}, v^o_{t_1}, Δx_{t_1}, Δy_{t_1}, Δv_{t_1}), in which x^o_{t_1} denotes the lateral position of the target vehicle at time t_1, y^o_{t_1} its longitudinal position, v^o_{t_1} its speed, Δx_{t_1} the lateral distance difference between the target vehicle and a surrounding vehicle at time t_1, Δy_{t_1} the longitudinal distance difference, and Δv_{t_1} the speed of the surrounding vehicle relative to the target vehicle at time t_1.
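The six-dimensional feature of S1.3 can be sketched as below. The dictionary field names (`x`, `y`, `v`) and the sign conventions for the distance differences and relative speed are assumptions, since the patent does not fix them.

```python
def feature_vector(target, neighbor):
    """Build the six-dimensional trajectory feature for one time step,
    following S1.3: target lateral/longitudinal position and speed, plus
    the lateral/longitudinal distance difference and relative speed of
    one surrounding vehicle. Field names and signs are illustrative."""
    x, y, v = target["x"], target["y"], target["v"]
    dx = neighbor["x"] - x   # lateral distance difference
    dy = neighbor["y"] - y   # longitudinal distance difference
    dv = neighbor["v"] - v   # speed relative to the target vehicle
    return [x, y, v, dx, dy, dv]
```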
S2: adding an attention mechanism on the basis of an LSTM encoder-decoder, and taking the whole as a generator network;
the step S2 specifically includes:
S2.1: inputting the trajectory sequence extracted in S1.3 into the fully connected layer to obtain the feature space sequence L received by the network.
In step S2.1, the generator network comprises a fully connected layer, an LSTM encoder and decoder, and an attention mechanism. The trajectory sequence X is first input into the fully connected layer, and the output of the fully connected layer is the fixed-length feature space sequence L = {L_{t_1}, ..., L_{t_1+n}} received by the network.
S2.2: inputting the feature space sequence L into the LSTM encoder and encoding it to obtain the historical hidden state h_{t_1} corresponding to each time step; the historical hidden states obtained by the encoder are collected as the historical hidden-state vector set H = {h_{t_1}, ..., h_{t_1+n}}.
In step S2.2, specifically, the feature space sequence L is input into the encoder of the generator network for encoding; the initial hidden state and context vector of the encoder are initialized, and each LSTM unit receives the trajectory sequence point L_t of the corresponding step output by the fully connected layer. Each LSTM module comprises a forget gate, an input gate, and an output gate. The first part of the module is the forget gate, whose update formula for each trajectory point is f_t = σ(w_11 L_t + w_12 h_{t-1} + b_f), where σ is the sigmoid function, f_t the output of the forget gate, w_11 and w_12 the weight vectors of the forget gate, L_t the input value at the current time, h_{t-1} the hidden state of the previous time, and b_f the bias of the forget gate.
The input information passes through the forget gate to obtain a value in (0, 1). The middle part of the module is the input gate, whose update formula is i_t = σ(w_21 L_t + w_22 h_{t-1} + b_i), where w_21 and w_22 are the weight vectors of the input gate and b_i its bias. The candidate cell state is c̃_t = tanh(w_31 L_t + w_32 h_{t-1} + b_c), where tanh is the activation function of this branch, w_31 and w_32 the weight vectors of the tanh layer, and b_c its bias; the cell state is updated as c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t.
The right part of the module is the output gate, updated as o_t = σ(w_41 L_t + w_42 h_{t-1} + b_o), where w_41 and w_42 are the weight vectors of the output gate and b_o its bias; the hidden state is updated as h_t = o_t ⊙ tanh(c_t). The hidden state h_t and cell state c_t output by each LSTM unit are passed to the next LSTM unit, and all historical hidden states in the encoder are collected as the vector set H = {h_{t_1}, ..., h_{t_1+n}}.
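The gate equations above can be traced in a toy scalar (one-dimensional) LSTM step; the real cell is vector-valued, but the arithmetic per component is the same. Parameter names mirror the weights w_11..w_42 and biases b_f, b_i, b_c, b_o of the description.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(L_t, h_prev, c_prev, p):
    """One scalar LSTM step mirroring the patent's gate equations.
    p is a dict of scalar weights/biases (a 1-D toy, not the real cell)."""
    f = sigmoid(p["w11"] * L_t + p["w12"] * h_prev + p["bf"])          # forget gate
    i = sigmoid(p["w21"] * L_t + p["w22"] * h_prev + p["bi"])          # input gate
    c_tilde = math.tanh(p["w31"] * L_t + p["w32"] * h_prev + p["bc"])  # candidate cell state
    c = f * c_prev + i * c_tilde                                       # cell state update
    o = sigmoid(p["w41"] * L_t + p["w42"] * h_prev + p["bo"])          # output gate
    h = o * math.tanh(c)                                               # hidden state update
    return h, c
```

With all parameters zero, every gate opens halfway (sigmoid(0) = 0.5), so the cell state is simply halved each step.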
S2.3: adding the attention mechanism before decoder decoding. Given the hidden state h'_{t_2} of the decoder at the previous time step (it is important to point out that in the present invention the subscript t_2 of h'_{t_2} and the subscript t_1+1 denote different concepts: t_1 indexes the time steps in the encoder, while t_2 indexes the time steps in the decoder), the similarity s_t between the decoder's hidden state at the previous time step and each historical hidden-state vector in H can be obtained.
S2.5: the normalized scores s'_t are used to form a weighted sum of the historical hidden states, yielding the input code c_{t_2+1} of the decoder at time t_2+1.
S2.6: passing c_{t_2+1} and the hidden-state vector h'_{t_2} through the decoder yields the predicted value at time t_2+1, i.e. h'_{t_2+1} = LSTM(c_{t_2+1}, h'_{t_2}; w), where w denotes the weights of the decoder; h'_{t_2+1} is the hidden state of the generator network at time t_2+1, and the hidden-layer state of the decoder is mapped to the trajectory point ŷ_{t_2+1} of the current prediction time.
S3: constructing a discrimination network based on an MLP neural network, and inputting a predicted track and a real track to obtain a discrimination probability;
the step S3 specifically includes:
The J predicted trajectories and the real trajectories are alternately input into the discrimination network, which consists of two MLP layers; the label of a real trajectory is set to 1 and the label of a predicted trajectory to 0, and the discrimination probability is obtained.
The detailed process of step S3 is as follows:
The J predicted trajectories and the real trajectories are alternately input into the discrimination network, which consists of two MLP layers. The MLP reduces each predicted or real trajectory from multiple dimensions to one dimension; the label of a real trajectory is recorded as 1 and the label of a predicted trajectory as 0. The first MLP layer computes d = σ(w_m1 y + b_m1), where w_m1 is the weight of the first MLP layer, b_m1 its bias, and y the input trajectory. The obtained d is input into the second MLP layer to obtain the final discrimination probability of the trajectory, i.e. p_i = σ(w_m2 d + b_m2), where w_m2 is the weight of that layer, b_m2 its bias, p_i the discrimination probability obtained for the trajectory, and i the label of the trajectory.
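The two-layer MLP above can be sketched as follows. Here the "trajectory" is flattened to a plain list, and the first layer maps it to a single scalar, matching the multi-dimension-to-one-dimension reduction described; real shapes and weight initialization are left unspecified by the patent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def discriminate(traj, w1, b1, w2, b2):
    """Two-layer MLP discriminator sketch: layer 1 maps the flattened
    trajectory to one dimension, layer 2 maps that scalar to a
    discrimination probability in (0, 1). Shapes are illustrative."""
    hidden = sigmoid(sum(w * x for w, x in zip(w1, traj)) + b1)  # first MLP layer
    return sigmoid(w2 * hidden + b2)                             # final probability
```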
S4: constructing and generating a confrontation network model through a generator network and a discriminator network, and training to generate the confrontation network model;
the step S4 specifically includes:
S4.1: a loss function is constructed for the generator network,
where J denotes the number of input predicted trajectories, the discrimination probability of the j-th predicted trajectory is that assigned by the discrimination network, the Euclidean-distance term measures the gap between the predicted and the real trajectory values, m denotes the number of trajectory points, and λ is the weight of the loss term.
A loss function is likewise constructed for the discrimination network,
where the corresponding probability denotes the discrimination probability that the discrimination network assigns to the j-th real trajectory.
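The display equations for the two loss functions did not survive extraction. As a hedged reconstruction only, a standard GAN objective augmented with an L2 regression term, consistent with the variables described in S4.1 (J predicted trajectories, discrimination probabilities p̂_j for predicted and p_j for real trajectories, m trajectory points, weight λ), would take a form such as:

```latex
% Plausible forms only; the patent's exact equations were lost in extraction.
L_{G} = \frac{1}{J}\sum_{j=1}^{J}\Big[\log\big(1-\hat{p}_{j}\big)
        + \lambda\,\frac{1}{m}\sum_{k=1}^{m}\big\lVert \hat{y}_{j,k}-y_{j,k}\big\rVert_{2}\Big]

L_{D} = -\frac{1}{J}\sum_{j=1}^{J}\Big[\log p_{j} + \log\big(1-\hat{p}_{j}\big)\Big]
```

The generator term pushes p̂_j toward 1 while the λ-weighted Euclidean term keeps predicted points near the real ones; the discriminator term pushes p_j toward 1 and p̂_j toward 0.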
S4.2: fixing parameters of a generated network, training a discrimination network, alternately inputting a real track and a predicted track into the discrimination network to obtain a discrimination probability, inputting the discrimination probability into the discrimination network and the generated network to calculate a loss value, and updating the parameters of the discrimination network by using an Adam algorithm.
S4.3: fixing the parameters of the discrimination network, training the generation network, alternately inputting the real track and the predicted track into the discrimination network to obtain the discrimination probability, inputting the discrimination probability into the discrimination network and the generation network to calculate to obtain a loss value, and adjusting the parameters of the generation network by using an Adam algorithm according to the loss value.
S4.4: when the judgment probability calculation of the judgment network on the predicted track is close to 1, the judgment network cannot distinguish the predicted track from the real track, namely the generation of the network and the training of the judgment network are finished.
S5: and storing the trained model, selecting a test data set from the preprocessed data set, inputting the test data into the trained confrontation network generation model, and predicting to obtain the future track coordinates of the vehicle.
The present invention first extracts the historical trajectory data of the target vehicle (lateral position, longitudinal position, speed) and the historical trajectory data of its surrounding vehicles (lateral distance relative to the target vehicle, longitudinal distance relative to the target vehicle, speed relative to the target vehicle), the surrounding vehicles being the front, left, and right vehicles of the target vehicle. A generative adversarial network model is then constructed: the extracted trajectory data are input into the generator network in time order to obtain predicted trajectory values; the predicted and real trajectory values are alternately input into the discrimination network, which discriminates the difference between them to obtain a discrimination probability; the discrimination probability is input into the generator and discrimination networks to obtain the loss values of the two networks, and the parameters of both networks are updated by backpropagation until the discrimination probability output by the discrimination network approaches 1, indicating that the training of the generative adversarial network model is complete. The invention adds an attention mechanism to the generator network model, which solves the loss of important information in long-sequence prediction caused by a traditional decoder predicting from a single fixed intermediate variable. At each decoding step the decoder considers the hidden-state information of the encoder, computes its correlation with the hidden state of the current prediction step, and obtains the input code most correlated with that hidden state, improving the prediction accuracy.
Claims (5)
1. A vehicle track prediction method based on a generation countermeasure network is characterized by comprising the following steps:
s1: preprocessing data in the NGSIM data set;
s2: adding an attention mechanism on the basis of an LSTM encoder-decoder, and taking the whole as a generator network;
s3: constructing a discrimination network based on an MLP neural network, and inputting a predicted track and a real track to obtain a discrimination probability;
S4: constructing a generation countermeasure network model through the generator network and the discriminator network, and training the generation countermeasure network model;
S5: storing the trained model, selecting a test data set from the preprocessed data set, inputting the test data into the trained generation countermeasure network model, and predicting the future trajectory coordinates of the vehicle.
2. The method for predicting vehicle trajectories based on generation of countermeasure network as claimed in claim 1, wherein the step S1 is specifically as follows:
s1.1, processing the NGSIM data set through a smoothing filter, and eliminating abnormal data;
s1.2: and selecting track data on 2, 3 and 4 lanes, and selecting the transverse position, the longitudinal position and the speed in the vehicle data as track characteristics.
S1.3: extracting the trajectory sequence of the target vehicle over the interval t_1 ~ t_1+n as X = {x_{t_1}, x_{t_1+1}, ..., x_{t_1+n}}, where x_{t_1} is the set of trajectory features of the target vehicle and of the surrounding vehicles of the target vehicle at the current time t_1, i.e. x_{t_1} = (x^o_{t_1}, y^o_{t_1}, v^o_{t_1}, Δx_{t_1}, Δy_{t_1}, Δv_{t_1}), in which x^o_{t_1} denotes the lateral position of the target vehicle at time t_1, y^o_{t_1} its longitudinal position, v^o_{t_1} its speed, Δx_{t_1} the lateral distance difference between the target vehicle and a surrounding vehicle at time t_1, Δy_{t_1} the longitudinal distance difference, and Δv_{t_1} the speed of the surrounding vehicle relative to the target vehicle at time t_1.
3. The method for predicting vehicle trajectories based on generation of countermeasure network as claimed in claim 2, wherein the step S2 is specifically as follows:
S2.1: inputting the trajectory sequence extracted in S1.3 into the fully connected layer to obtain the feature space sequence L = {L_{t_1}, ..., L_{t_1+n}} received by the network;
S2.2: inputting the feature space sequence L into the LSTM encoder and encoding it to obtain the historical hidden state h_{t_1} corresponding to each time step, the historical hidden states obtained by the encoder being collected as the historical hidden-state vector set H = {h_{t_1}, ..., h_{t_1+n}};
S2.3: adding the attention mechanism before decoder decoding, such that, given the hidden state h'_{t_2} of the decoder at the previous time step, the similarity s_t between that hidden state and each historical hidden-state vector in H can be obtained;
S2.5: the normalized scores s'_t are used to form a weighted sum of the historical hidden states obtained by the encoder, yielding the input code c_{t_2+1} of the decoder at time t_2+1;
S2.6: passing c_{t_2+1} and the hidden-state vector h'_{t_2} through the decoder yields the predicted hidden-state value at time t_2+1, i.e. h'_{t_2+1} = LSTM(c_{t_2+1}, h'_{t_2}; w), where w denotes the weights of the decoder; h'_{t_2+1} is the hidden state of the generator network at time t_2+1, and the hidden-layer state of the decoder is mapped to the trajectory point ŷ_{t_2+1} of the current prediction time.
4. The vehicle trajectory prediction method based on a generative adversarial network as claimed in claim 3, wherein the step S3 is specifically as follows:
alternately inputting the J predicted trajectories and the real trajectories into the discrimination network, which consists of a two-layer MLP; the label of a real trajectory is recorded as 1 and the label of a predicted trajectory as 0, so as to obtain the discrimination probability.
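A two-layer MLP discriminator of the kind named in S3 can be sketched as follows. The hidden width, activation, and initialization are assumptions for illustration, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MLPDiscriminator:
    """Two-layer MLP mapping a flattened trajectory to a probability in (0, 1)."""
    def __init__(self, in_dim, hidden=16):
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, traj):
        h = np.tanh(traj.ravel() @ self.W1 + self.b1)   # first layer
        return float(sigmoid(h @ self.W2 + self.b2))    # second layer -> probability

# A trajectory of 5 points with 2 coordinates flattens to a 10-dim input.
D = MLPDiscriminator(in_dim=10)
p = D(rng.normal(size=(5, 2)))
```

During training, real trajectories are labeled 1 and predicted trajectories 0, and `p` is the discrimination probability fed into the losses of S4.1.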
5. The vehicle trajectory prediction method based on a generative adversarial network as claimed in claim 4, wherein the step S4 is specifically as follows:
S4.1: the loss function of the generation network is constructed as
L_G = (1/J) Σ_{j=1..J} log(1 − D(Ŷ_j)) + λ · (1/m) Σ_{i=1..m} ‖ŷ_i − y_i‖₂,
wherein J denotes the number of input predicted trajectories, D(Ŷ_j) denotes the discrimination probability of the j-th predicted trajectory in the discrimination network, ‖ŷ_i − y_i‖₂ denotes the Euclidean distance between the predicted trajectory values and the real trajectory values, m denotes the number of trajectory points, and λ is the weight of the loss term.
The loss function of the discrimination network is constructed as
L_D = −(1/J) Σ_{j=1..J} [log D(Y_j) + log(1 − D(Ŷ_j))],
wherein D(Y_j) denotes the discrimination probability of the j-th real trajectory in the discrimination network.
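The two losses of S4.1 can be sketched numerically as follows. This is an illustrative sketch of the standard adversarial loss plus a λ-weighted Euclidean term as described; it is not the patent's exact formulation, whose equations are not reproduced in this record.

```python
import numpy as np

def generator_loss(d_fake, pred, true, lam=1.0):
    """Adversarial term over J discrimination probabilities of predicted
    trajectories, plus lambda-weighted mean Euclidean distance over m points."""
    adv = np.mean(np.log(1.0 - d_fake))
    l2 = np.mean(np.linalg.norm(pred - true, axis=-1))
    return adv + lam * l2

def discriminator_loss(d_real, d_fake):
    """Real trajectories labeled 1, predicted trajectories labeled 0."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

# With identical predicted and real trajectories, only the adversarial term remains:
z = np.zeros((3, 2))
g_fooled = generator_loss(np.array([0.9]), z, z)   # discriminator fooled
g_caught = generator_loss(np.array([0.1]), z, z)   # discriminator not fooled
```

The generator's loss is lower when the discriminator assigns its predictions a probability near 1, which is exactly the convergence condition stated in S4.4.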
S4.2: fixing the parameters of the generation network and training the discrimination network: alternately inputting real trajectories and predicted trajectories into the discrimination network to obtain the discrimination probabilities, substituting the discrimination probabilities into the loss functions of the discrimination network and the generation network to compute the loss values, and updating the parameters of the discrimination network with the Adam algorithm.
S4.3: fixing the parameters of the discrimination network and training the generation network: alternately inputting real trajectories and predicted trajectories into the discrimination network to obtain the discrimination probabilities, substituting the discrimination probabilities into the loss functions of the discrimination network and the generation network to compute the loss values, and adjusting the parameters of the generation network with the Adam algorithm according to the loss value.
S4.4: when the discrimination probability computed by the discrimination network for a predicted trajectory is close to 1, the discrimination network can no longer distinguish the predicted trajectory from the real trajectory, i.e., the training of the generation network and the discrimination network is complete.
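Both S4.2 and S4.3 update parameters with the Adam algorithm. A minimal numpy sketch of a single Adam parameter update is shown below; the hyperparameters are the algorithm's usual defaults, not values taken from the patent.

```python
import numpy as np

def adam_step(param, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; state holds (first moment m, second moment v, step t)."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad               # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2          # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)                  # bias correction
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, (m, v, t)

# Minimize f(w) = w^2 for a few steps; w should shrink toward 0.
w = np.array([1.0])
state = (np.zeros(1), np.zeros(1), 0)
for _ in range(100):
    grad = 2 * w
    w, state = adam_step(w, grad, state)
```

In the patent's alternating scheme, this update would be applied to the discrimination-network parameters in S4.2 and to the generation-network parameters in S4.3, with the gradient taken from the corresponding loss.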
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011157093.0A CN112257850B (en) | 2020-10-26 | 2020-10-26 | Vehicle track prediction method based on generation countermeasure network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112257850A true CN112257850A (en) | 2021-01-22 |
CN112257850B CN112257850B (en) | 2022-10-28 |
Family
ID=74261556
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011157093.0A Active CN112257850B (en) | 2020-10-26 | 2020-10-26 | Vehicle track prediction method based on generation countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112257850B (en) |
Worldwide Applications (2020)

- 2020-10-26: CN CN202011157093.0A — granted as CN112257850B, active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3705367A1 (en) * | 2019-03-05 | 2020-09-09 | Bayerische Motoren Werke Aktiengesellschaft | Training a generator unit and a discriminator unit for collision-aware trajectory prediction |
WO2020205629A1 (en) * | 2019-03-29 | 2020-10-08 | Intel Corporation | Autonomous vehicle system |
CN110781838A (en) * | 2019-10-28 | 2020-02-11 | 大连海事大学 | Multi-modal trajectory prediction method for pedestrian in complex scene |
Non-Patent Citations (4)
Title |
---|
AGRIM GUPTA ET AL.: "Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
AMIR SADEGHIAN ET AL.: "SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints", 《PROCEEDINGS OF THE IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION(CVPR)》 * |
刘创 等: "基于注意力机制的车辆运动轨迹预测", 《浙江大学学报(工学版)》 * |
欧阳俊 等: "基于GAN和注意力机制的行人轨迹预测研究", 《激光与光电子学进展》 * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113050640A (en) * | 2021-03-18 | 2021-06-29 | 北京航空航天大学 | Industrial robot path planning method and system based on generation of countermeasure network |
CN112949597A (en) * | 2021-04-06 | 2021-06-11 | 吉林大学 | Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism |
CN112949597B (en) * | 2021-04-06 | 2022-11-04 | 吉林大学 | Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism |
CN113076599A (en) * | 2021-04-15 | 2021-07-06 | 河南大学 | Multimode vehicle trajectory prediction method based on long-time and short-time memory network |
CN113313941B (en) * | 2021-05-25 | 2022-06-24 | 北京航空航天大学 | Vehicle track prediction method based on memory network and encoder-decoder model |
CN113313941A (en) * | 2021-05-25 | 2021-08-27 | 北京航空航天大学 | Vehicle track prediction method based on memory network and encoder-decoder model |
CN113435356A (en) * | 2021-06-30 | 2021-09-24 | 吉林大学 | Track prediction method for overcoming observation noise and perception uncertainty |
CN113435356B (en) * | 2021-06-30 | 2023-02-28 | 吉林大学 | Track prediction method for overcoming observation noise and perception uncertainty |
CN113779892A (en) * | 2021-09-27 | 2021-12-10 | 中国人民解放军国防科技大学 | Wind speed and wind direction prediction method |
CN113989326A (en) * | 2021-10-25 | 2022-01-28 | 电子科技大学 | Target track prediction method based on attention mechanism |
CN113989326B (en) * | 2021-10-25 | 2023-08-25 | 电子科技大学 | Attention mechanism-based target track prediction method |
CN114279061A (en) * | 2021-11-26 | 2022-04-05 | 国网北京市电力公司 | Method and device for controlling air conditioner and electronic equipment |
CN114348019A (en) * | 2021-12-20 | 2022-04-15 | 清华大学 | Vehicle trajectory prediction method, vehicle trajectory prediction device, computer equipment and storage medium |
CN114348019B (en) * | 2021-12-20 | 2023-11-07 | 清华大学 | Vehicle track prediction method, device, computer equipment and storage medium |
CN114549930A (en) * | 2022-02-21 | 2022-05-27 | 合肥工业大学 | Rapid road short-time vehicle head interval prediction method based on trajectory data |
CN115170607A (en) * | 2022-06-17 | 2022-10-11 | 中国科学院自动化研究所 | Travel track generation method and device, electronic equipment and storage medium |
CN114815904A (en) * | 2022-06-29 | 2022-07-29 | 中国科学院自动化研究所 | Attention network-based unmanned cluster countermeasure method and device and unmanned equipment |
CN115759383A (en) * | 2022-11-11 | 2023-03-07 | 桂林电子科技大学 | Destination prediction method and system with branch network and electronic equipment |
CN115759383B (en) * | 2022-11-11 | 2023-09-15 | 桂林电子科技大学 | Destination prediction method and system with branch network and electronic equipment |
CN118171781A (en) * | 2024-05-13 | 2024-06-11 | 东南大学 | Expressway motor vehicle accident intelligent detection method and system based on real-time track prediction |
Also Published As
Publication number | Publication date |
---|---|
CN112257850B (en) | 2022-10-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112257850B (en) | Vehicle track prediction method based on generation countermeasure network | |
CN112347567B (en) | Vehicle intention and track prediction method | |
CN112215337B (en) | Vehicle track prediction method based on environment attention neural network model | |
CN112965499B (en) | Unmanned vehicle driving decision-making method based on attention model and deep reinforcement learning | |
CN103605362B (en) | Based on motor pattern study and the method for detecting abnormality of track of vehicle multiple features | |
CN111079590A (en) | Peripheral vehicle behavior pre-judging method of unmanned vehicle | |
CN112949597B (en) | Vehicle track prediction and driving manipulation identification method based on time mode attention mechanism | |
CN114399743B (en) | Method for generating future track of obstacle | |
CN114202120A (en) | Urban traffic travel time prediction method aiming at multi-source heterogeneous data | |
CN113554060B (en) | LSTM neural network track prediction method integrating DTW | |
CN117141518A (en) | Vehicle track prediction method based on intention perception spatiotemporal attention network | |
CN115158364A (en) | Method for joint prediction of driving intention and track of surrounding vehicle by automatic driving vehicle | |
Zhu et al. | Transfollower: Long-sequence car-following trajectory prediction through transformer | |
CN115376103A (en) | Pedestrian trajectory prediction method based on space-time diagram attention network | |
Sharma et al. | Kernelized convolutional transformer network based driver behavior estimation for conflict resolution at unsignalized roundabout | |
CN114368387B (en) | Attention mechanism-based driver intention recognition and vehicle track prediction method | |
CN117709602B (en) | Urban intelligent vehicle personification decision-making method based on social value orientation | |
CN112927507B (en) | Traffic flow prediction method based on LSTM-Attention | |
CN116740664A (en) | Track prediction method and device | |
CN111443701A (en) | Unmanned vehicle/robot behavior planning method based on heterogeneous deep learning | |
CN110489671B (en) | Road charging pile recommendation method based on encoder-decoder model | |
Xu et al. | Vehicle trajectory prediction considering multi-feature independent encoding | |
CN114565132B (en) | Pedestrian track prediction method based on end point prediction | |
Zhang et al. | Overtaking Behavior Prediction of Rear Vehicle via LSTM Model | |
CN116959260B (en) | Multi-vehicle driving behavior prediction method based on graph neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||