CN116401549A - Vehicle track prediction model training method, device, equipment and storage medium - Google Patents


Publication number
CN116401549A
CN116401549A (publication) · CN202310368938.8A (application)
Authority
CN
China
Prior art keywords: vehicle, target, sequence, preset, running
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310368938.8A
Other languages
Chinese (zh)
Inventor
崔茂源
王超
韩佳琪
吕颖
孙连明
冷德龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202310368938.8A priority Critical patent/CN116401549A/en
Publication of CN116401549A publication Critical patent/CN116401549A/en
Pending legal-status Critical Current

Classifications

    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0027 Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06N3/0442 Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/08 Learning methods
    • B60W2050/0019 Control system elements or transfer functions
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a vehicle track prediction model training method, device, equipment and storage medium. The method comprises the following steps: acquiring vehicle state data; selecting a target running vehicle and reference running vehicles from all running vehicles; determining candidate running vehicles adjacent to the target running vehicle according to the position distance between each reference running vehicle and the target running vehicle; determining, according to the vehicle state data corresponding to the target running vehicle and the candidate running vehicles, a real track sequence of the target running vehicle over a preset future observation period, and a first historical state sequence and a second historical state sequence of the target running vehicle over a preset historical observation period; and inputting the first historical state sequence, the second historical state sequence and the real track sequence corresponding to the target running vehicle into a pre-constructed track prediction network model and training it, to obtain a target vehicle track prediction model for predicting the running track of a vehicle.

Description

Vehicle track prediction model training method, device, equipment and storage medium
Technical Field
The present invention relates to the field of vehicle track prediction technologies, and in particular, to a vehicle track prediction model training method, apparatus, device, and storage medium.
Background
In recent years, intelligent driving has become an emerging research area for applied artificial intelligence. For an intelligent vehicle, correct motion planning and behavior decisions depend not only on accurately locating the vehicles around it, but also on accurately predicting their future trajectories. In complex traffic scenarios, however, the interaction between vehicles makes driving intention uncertain, so the future track of a vehicle has multi-modal characteristics, which greatly increases the difficulty of predicting the future tracks of surrounding vehicles.
Existing vehicle track prediction methods mainly include physical-model-based, driving-intention-based and interaction-based methods. However, these methods do not fully consider the interaction between vehicles, so the predicted track result is single, the uncertainty of the predicted track is difficult to describe, and the reliability and accuracy of vehicle track prediction are low.
Disclosure of Invention
The invention provides a vehicle track prediction model training method, device, equipment and storage medium, which are used for improving the reliability and accuracy of vehicle track prediction.
According to an aspect of the present invention, there is provided a vehicle trajectory prediction model training method, the method comprising:
acquiring vehicle state data of each traveling vehicle in a preset traveling area in a preset time period;
selecting any one of the running vehicles as a target running vehicle; wherein the other running vehicles except the target running vehicle are reference running vehicles;
determining a candidate running vehicle adjacent to the target running vehicle from among the reference running vehicles according to a position distance between each of the reference running vehicles and the target running vehicle;
according to the vehicle state data respectively corresponding to the target running vehicle and the candidate running vehicle, determining a real track sequence of the target running vehicle in a preset future observation period, and determining a first historical state sequence and a second historical state sequence of the target running vehicle in a preset historical observation period;
inputting the first historical state sequence, the second historical state sequence and the real track sequence corresponding to the target running vehicle into a track prediction network model constructed in advance, and performing model training on the track prediction network model to obtain a target vehicle track prediction model for predicting the running track of the vehicle.
According to another aspect of the present invention, there is provided a vehicle trajectory prediction model training device including:
the state data acquisition module is used for acquiring vehicle state data of each running vehicle in a preset running area in a preset time period;
the target vehicle determining module is used for selecting any running vehicle from the running vehicles as a target running vehicle; wherein the other running vehicles except the target running vehicle are reference running vehicles;
a candidate vehicle determination module configured to determine a candidate vehicle that is adjacent to the target vehicle from among the reference vehicles according to a position distance between each of the reference vehicles and the target vehicle;
the state sequence determining module is used for determining a real track sequence of the target running vehicle in a preset future observation period according to vehicle state data corresponding to the target running vehicle and the candidate running vehicle respectively, and determining a first historical state sequence and a second historical state sequence of the target running vehicle in a preset historical observation period;
the prediction model training module is used for inputting the first historical state sequence, the second historical state sequence and the real track sequence corresponding to the target running vehicle into a track prediction network model constructed in advance, and carrying out model training on the track prediction network model to obtain a target vehicle track prediction model which is used for predicting the running track of the vehicle.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the vehicle trajectory prediction model training method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the vehicle trajectory prediction model training method according to any one of the embodiments of the present invention when executed.
According to the technical scheme, any running vehicle is selected from all running vehicles as the target running vehicle, the remaining vehicles serve as reference running vehicles, and candidate running vehicles adjacent to the target running vehicle are determined according to the position distance between each reference running vehicle and the target running vehicle. From the vehicle state data corresponding to the target running vehicle and the candidate running vehicles, the real track sequence and the first and second historical state sequences of the target running vehicle are determined. This realizes accurate construction of a sample training set for training the vehicle track prediction model, one in which the interaction between vehicles is comprehensively reflected, and avoids the low training accuracy and overly single track results that a too-simple training set would cause. By inputting the first historical state sequence, the second historical state sequence and the real track sequence into the pre-constructed track prediction network model and training it, the target vehicle track prediction model is obtained, which improves the training accuracy of the track prediction network model and the reliability and accuracy of subsequent vehicle track prediction.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a vehicle trajectory prediction model training method according to a first embodiment of the present invention;
FIG. 2A is a flowchart of a method for training a vehicle trajectory prediction model according to a second embodiment of the present invention;
fig. 2B is a schematic diagram of a model structure of a trajectory prediction network model according to a second embodiment of the present invention;
fig. 2C is a schematic diagram of a model structure of a track prediction test network model according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a training device for a vehicle track prediction model according to a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device implementing a vehicle trajectory prediction model training method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a vehicle track prediction model training method according to an embodiment of the present invention, where the method may be performed by a vehicle track prediction model training device, the vehicle track prediction model training device may be implemented in hardware and/or software, and the vehicle track prediction model training device may be configured in an electronic device. The method specifically comprises the following steps:
s110, acquiring vehicle state data of each traveling vehicle in a preset traveling area in a preset time period.
The preset time period and the preset travel area may be preset by a related technician. Wherein the traveling vehicle may be a vehicle that appears within a preset time period and a preset traveling area; the vehicle state data may include a lateral position, a longitudinal position, a travel speed, and a yaw angle of the vehicle.
For example, vehicle state data of all traveling vehicles in the same preset traveling area may be collected over the preset time period.
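As a rough illustration (not part of the patent), the per-vehicle state record described above, with its four fields of lateral position, longitudinal position, running speed and yaw angle, and the collection step S110 could be sketched as follows; the `VehicleState` record and `collect_states` helper are hypothetical names:

```python
from dataclasses import dataclass

# Hypothetical record for the four state fields named in the text:
# lateral position, longitudinal position, running speed, yaw angle.
@dataclass(frozen=True)
class VehicleState:
    vehicle_id: str
    t: int      # observation time step
    x: float    # lateral position (m)
    y: float    # longitudinal position (m)
    v: float    # running speed (m/s)
    yaw: float  # yaw angle (rad)

def collect_states(states, t_start, t_end):
    """Keep only the states that fall inside the preset time period."""
    return [s for s in states if t_start <= s.t <= t_end]

states = [
    VehicleState("a", 1, 0.0, 0.0, 10.0, 0.0),
    VehicleState("a", 2, 0.1, 8.0, 10.0, 0.0),
    VehicleState("b", 5, 3.5, 2.0, 9.0, 0.02),
]
window = collect_states(states, 1, 2)  # states of all vehicles in the window
```

All vehicles observed inside the window are kept; the per-vehicle grouping happens later, when the sequences of step S140 are built.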
S120, selecting any running vehicle from all running vehicles as a target running vehicle; wherein, other running vehicles than the target running vehicle are reference running vehicles.
For example, any one of the running vehicles may be selected as the target running vehicle, and the remaining running vehicles are used as reference running vehicles. In this way, each running vehicle can in turn serve as the target running vehicle.
S130, determining a candidate running vehicle adjacent to the target running vehicle from the reference running vehicles according to the position distance between the reference running vehicles and the target running vehicle.
For example, all reference traveling vehicles that appear within a circular range of a preset radius distance with the centroid of the target traveling vehicle as an origin may be determined as the candidate traveling vehicle that is adjacent to the target traveling vehicle. The preset radius distance may be preset by a related technician according to actual requirements, for example, the preset radius distance may be 30 meters.
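A minimal sketch of this neighbour-selection rule (the helper name is assumed, not from the patent): keep every reference vehicle whose centroid falls inside a circle of the preset radius, here 30 m, centred on the target vehicle's centroid.

```python
import math

def candidate_vehicles(target_xy, others, radius=30.0):
    """Reference vehicles whose centroid lies within `radius` metres of
    the target vehicle's centroid (a circle centred on the target)."""
    tx, ty = target_xy
    return {vid for vid, (x, y) in others.items()
            if math.hypot(x - tx, y - ty) <= radius}

# "b" is 11.2 m away (inside the circle); "c" is 64 m away (outside).
neighbours = candidate_vehicles((0.0, 0.0),
                                {"b": (10.0, 5.0), "c": (50.0, 40.0)})
```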
S140, determining a real track sequence of the target running vehicle in a preset future observation period and determining a first historical state sequence and a second historical state sequence of the target running vehicle in a preset historical observation period according to vehicle state data corresponding to the target running vehicle and the candidate running vehicle respectively.
The preset future observation period and the preset history observation period can be preset by related technicians according to actual requirements.
For example, data falling within the time span of the preset future observation period may be extracted from the vehicle state data of the target running vehicle as the real track sequence; data falling within the time span of the preset historical observation period may be extracted from the vehicle state data of the target running vehicle as the first historical state sequence; and data falling within the time span of the preset historical observation period may be extracted from the vehicle state data of the candidate running vehicles as the second historical state sequence.
In an alternative embodiment, determining a real track sequence of the target running vehicle under a preset future observation period and determining a first historical state sequence and a second historical state sequence of the target running vehicle under a preset historical observation period according to vehicle state data corresponding to the target running vehicle and the candidate running vehicle respectively includes: according to the vehicle state data of the target running vehicle, constructing a first historical state sequence of the target running vehicle in a preset historical observation time period; selecting a target candidate vehicle matched with a preset historical observation time period from candidate running vehicles adjacent to the target running vehicle; according to the vehicle state data of the target candidate vehicle, constructing a history state sequence of the target candidate vehicle in a preset history observation time period, and taking the history state sequence of the target candidate vehicle as a second history state sequence of the target running vehicle; and constructing a real track sequence of the target running vehicle in a preset future observation time period according to the vehicle state data of the target running vehicle.
For example, for the extraction of the first historical state sequence, the time length of the preset historical observation period may be defined as $T_{obs}$ and the first historical state sequence denoted $S_{tar}$. It should be noted that, since any running vehicle can serve as the target running vehicle, each choice of target running vehicle has its own corresponding first historical state sequence, second historical state sequence and real track sequence; the subscript $tar$ in $S_{tar}$ denotes the target running vehicle. The first historical state sequence $S_{tar}$ is expressed as follows:

$$S_{tar}=\{s_{tar}^{1},s_{tar}^{2},\dots,s_{tar}^{T_{obs}}\}$$

wherein $s_{tar}^{t}=(x_{tar}^{t},y_{tar}^{t},v_{tar}^{t},\theta_{tar}^{t})$, $t\in\{1,2,\dots,T_{obs}\}$, is the vehicle state data of the target running vehicle at time $t$; $x_{tar}^{t}$ denotes the lateral position of the target running vehicle $tar$ at time $t$; $y_{tar}^{t}$ denotes the longitudinal position; $v_{tar}^{t}$ denotes the running speed; and $\theta_{tar}^{t}$ denotes the yaw angle.
For example, a candidate traveling vehicle matching the preset historical observation period is selected from among the candidate traveling vehicles adjacent to the target traveling vehicle as a target candidate vehicle. For example, if $N$ candidate running vehicles exist within the $T_{obs}$ time range, the $N$ candidate running vehicles are taken as target candidate vehicles. According to the vehicle state data of the target candidate vehicles, the historical state sequence $Nbrs$ of the target candidate vehicles within the preset historical observation period is constructed as follows:

$$Nbrs=\{S_{1},S_{2},\dots,S_{N}\},\qquad S_{i}=\{s_{i}^{1},s_{i}^{2},\dots,s_{i}^{T_{obs}}\}$$

wherein $S_{i}$, $i\in\{1,2,\dots,N\}$, is the historical state sequence of the $i$-th target candidate vehicle; $s_{i}^{j}=(x_{i}^{j},y_{i}^{j},v_{i}^{j},\theta_{i}^{j})$, $j\in\{1,2,\dots,T_{obs}\}$, is the vehicle state data of target candidate vehicle $i$ at time $j$; $x_{i}^{j}$ denotes the lateral position of target candidate vehicle $i$ at time $j$; $y_{i}^{j}$ denotes the longitudinal position; $v_{i}^{j}$ denotes the running speed; and $\theta_{i}^{j}$ denotes the yaw angle.
Illustratively, the historical state sequence Nbrs of the target candidate vehicle is taken as the second historical state sequence of the target traveling vehicle.
For example, for the extraction of the real track sequence, the time length of the preset future observation period may be defined as $T_{pred}$, and the real track sequence $Truth$ of the target running vehicle $tar$ is obtained:

$$Truth=\{p_{tar}^{1},p_{tar}^{2},\dots,p_{tar}^{T_{pred}}\}$$

wherein $p_{tar}^{k}$, $k\in\{1,2,\dots,T_{pred}\}$, is the real position information in the vehicle state data of the target running vehicle at time $T_{obs}+k$.
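The three sequences above can be assembled with ordinary list slicing. This sketch (the `build_sample` helper is hypothetical, and states are simplified to tuples) follows the splits defined in the text: the first $T_{obs}$ steps give the historical state sequences, and the target's positions over the next $T_{pred}$ steps give the real track sequence.

```python
def build_sample(target_traj, neighbour_trajs, t_obs, t_pred):
    """target_traj: list of (x, y, v, yaw) tuples, one per time step."""
    s_tar = target_traj[:t_obs]                        # first historical state sequence
    nbrs = [traj[:t_obs] for traj in neighbour_trajs]  # second historical state sequence
    # Real track sequence: true (x, y) positions at times T_obs+1 .. T_obs+T_pred.
    truth = [(x, y) for (x, y, _, _) in target_traj[t_obs:t_obs + t_pred]]
    return s_tar, nbrs, truth

# A straight-line toy trajectory: x = t, y = 2t, constant speed and yaw.
traj = [(float(t), 2.0 * t, 10.0, 0.0) for t in range(10)]
s_tar, nbrs, truth = build_sample(traj, [traj], t_obs=6, t_pred=4)
```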
S150, inputting a first historical state sequence, a second historical state sequence and a real track sequence corresponding to the target running vehicle into a track prediction network model constructed in advance, and performing model training on the track prediction network model to obtain a target vehicle track prediction model for predicting the running track of the vehicle.
The trajectory prediction network model may be preset by a related technician, for example, the trajectory prediction network model may be an existing neural network model. Alternatively, the trajectory prediction network model may be pre-constructed by the relevant technician.
The first historical state sequence, the second historical state sequence and the real track sequence corresponding to the target running vehicle are input into the pre-constructed track prediction network model, model training is performed, and the model parameters are updated iteratively; when the model meets the preset convergence requirement, training of the track prediction network model is complete and the target vehicle track prediction model is obtained.
The target vehicle track prediction model is used to accurately predict a vehicle running track. Specifically, the vehicle state data of a target vehicle to be predicted may be input into the target vehicle track prediction model, which outputs the position information of the track along which that vehicle is most likely to travel in the future.
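The train-until-convergence loop of S150 can be illustrated, under heavy simplification, with a linear one-step position predictor fitted by gradient descent on the mean squared error between predicted and real positions. The patent's model is a neural network, so this only shows the shape of the loop (iterate, measure loss against the real track, update parameters, stop on a convergence criterion); all names and numbers here are illustrative.

```python
def train(samples, lr=0.01, tol=1e-6, max_iter=5000):
    """Fit y ≈ w*x + b by gradient descent on the MSE loss."""
    w, b = 0.0, 0.0                  # model parameters
    prev = float("inf")
    loss = prev
    for _ in range(max_iter):
        gw = gb = loss = 0.0
        for x, y_true in samples:    # x: last observed position, y_true: next
            err = w * x + b - y_true
            loss += err * err
            gw += 2 * err * x
            gb += 2 * err
        loss /= len(samples)
        if abs(prev - loss) < tol:   # the "preset convergence requirement"
            break
        prev = loss
        w -= lr * gw / len(samples)  # iterative parameter update
        b -= lr * gb / len(samples)
    return w, b, loss

# Toy "real track": the next position is 2*x + 1.
samples = [(x, 2.0 * x + 1.0) for x in [0.0, 1.0, 2.0, 3.0]]
w, b, final_loss = train(samples)
```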
According to the technical scheme, any running vehicle is selected from all running vehicles as the target running vehicle, the remaining vehicles serve as reference running vehicles, and candidate running vehicles adjacent to the target running vehicle are determined according to the position distance between each reference running vehicle and the target running vehicle. From the vehicle state data corresponding to the target running vehicle and the candidate running vehicles, the real track sequence and the first and second historical state sequences of the target running vehicle are determined. This realizes accurate construction of a sample training set for training the vehicle track prediction model, one in which the interaction between vehicles is comprehensively reflected, and avoids the low training accuracy and overly single track results that a too-simple training set would cause. By inputting the first historical state sequence, the second historical state sequence and the real track sequence into the pre-constructed track prediction network model and training it, the target vehicle track prediction model is obtained, which improves the training accuracy of the track prediction network model and the reliability and accuracy of subsequent vehicle track prediction.
Example two
Fig. 2A is a flowchart of a vehicle track prediction model training method according to a second embodiment of the present invention; this embodiment is optimized and improved on the basis of the above technical solutions.
Further, the track prediction network model comprises an input coding module, a spatial attention module connected with the output end of the input coding module, a driving intention learning module connected with the output end of the spatial attention module, and an output decoding module connected with the output end of the driving intention learning module.
Further, inputting the first historical state sequence, the second historical state sequence and the real track sequence corresponding to the target running vehicle into a pre-constructed track prediction network model and performing model training to obtain the target vehicle track prediction model includes: inputting the first historical state sequence, the second historical state sequence and the real track sequence into the input coding module for sequence coding to obtain a first state coding sequence, a second state coding sequence and a real track coding sequence; determining feature fusion coding parameters of the target driving vehicle based on the spatial attention module and the driving intention learning module according to the first state coding sequence and the second state coding sequence; determining time-domain fusion coding parameters of the target driving vehicle based on the driving intention learning module according to the feature fusion coding parameters and the real track coding sequence; determining driving intention vector parameters based on the driving intention learning module according to the time-domain fusion coding parameters; determining driving intention output parameters based on the output decoding module according to the driving intention vector parameters and the feature fusion coding parameters; and adjusting the weight parameters in the input coding module, the spatial attention module, the driving intention learning module and the output decoding module according to the driving intention output parameters until a preset training end condition is met, so as to obtain the target vehicle track prediction model. This refinement completes both the model structure of the track prediction network model and its training procedure.
For portions of this embodiment that are not described in detail, reference may be made to the descriptions of the other embodiments of the present invention.
As shown in fig. 2A, the method comprises the following specific steps:
s210, acquiring vehicle state data of each traveling vehicle in a preset traveling area in a preset time period.
S220, selecting any running vehicle from all running vehicles as a target running vehicle; wherein, other running vehicles than the target running vehicle are reference running vehicles.
S230, determining a candidate running vehicle adjacent to the target running vehicle from the reference running vehicles according to the position distance between the reference running vehicles and the target running vehicle.
S240, determining a real track sequence of the target running vehicle under a preset future observation period and determining a first historical state sequence and a second historical state sequence of the target running vehicle under a preset historical observation period according to vehicle state data corresponding to the target running vehicle and the candidate running vehicle respectively.
It should be noted that the trajectory prediction network model for performing the trajectory prediction may be constructed in advance by the relevant technician. The pre-constructed trajectory prediction network model may include an input encoding module, a spatial attention module connected to an output of the input encoding module, a driving intention learning module connected to an output of the spatial attention module, and an output decoding module connected to an output of the driving intention learning module. The different modules differ in module structure and function.
S250, inputting the first historical state sequence, the second historical state sequence and the real track sequence into an input coding module in a track prediction network model constructed in advance to perform sequence coding processing, so as to obtain a first state coding sequence, a second state coding sequence and the real track coding sequence.
The input encoding module may be a module for encoding input sequence data, and may include one encoder per input sample sequence, each encoder consisting of a GRU (gated recurrent unit).
For example, the first historical state sequence, the second historical state sequence, and the real track sequence may be respectively input into the corresponding encoders in the input encoding module for GRU encoding. The first historical state sequence, the second historical state sequence, and the real track sequence of the target running vehicle tar are denoted S_tar, Nbrs, and Truth_tar, respectively. The encoders in the input encoding module encode S_tar, Nbrs, and Truth_tar respectively:

h_tar = Encoder1(S_tar; W_e1, b_e1);

h_nbrs = Encoder2(Nbrs; W_e2, b_e2);

F_tar = Encoder3(Truth_tar; W_e3, b_e3);

wherein h_tar ∈ R^(1×32) is the first state coding sequence obtained after encoding the target running vehicle; Encoder1 is the encoder for encoding the first historical state sequence S_tar; the GRU in Encoder1 has four input features, namely a lateral position feature, a longitudinal position feature, a running speed feature, and a yaw angle feature, and 32 hidden features; the sequence length is the time length T_obs of the preset historical observation period, the number of layers is 1, the weight parameter is W_e1, and the bias parameter is b_e1.

Wherein h_nbrs = [h_1, h_2, …, h_N] ∈ R^(N×32) is the second state coding sequence obtained after encoding the candidate running vehicles; Encoder2 is the encoder for encoding the second historical state sequence Nbrs; the GRU in Encoder2 likewise has four input features, namely a lateral position feature, a longitudinal position feature, a running speed feature, and a yaw angle feature, and 32 hidden features; the sequence length is the time length T_obs of the preset historical observation period, the number of layers is 1, the weight parameter is W_e2, and the bias parameter is b_e2.

Wherein F_tar ∈ R^(1×128) is the real track coding sequence obtained after encoding the target running vehicle; Encoder3 is the encoder for encoding the real track sequence Truth_tar; Encoder3 consists of a gated recurrent unit GRU with two input features, namely the real lateral position feature and the real longitudinal position feature of the target running vehicle, and 128 hidden features; the sequence length is the time length T_pred of the preset future observation period, the number of layers is 1, the weight parameter is W_e3, and the bias parameter is b_e3.
S260, determining characteristic fusion coding parameters of the target running vehicle based on a spatial attention module and a driving intention learning module in a pre-constructed track prediction network model according to the first state coding sequence and the second state coding sequence.
The spatial attention module is used for determining the allocated attention of each target candidate vehicle according to the first state coding sequence and the second state coding sequence so as to perform feature interaction fusion on the target running vehicle and the target candidate vehicles.
In an alternative embodiment, determining the feature fusion encoding parameters of the target driving vehicle based on the spatial attention module and the driving intention learning module according to the first state encoding sequence and the second state encoding sequence includes: according to the first state coding sequence and the second state coding sequence, determining an interaction characteristic sequence of the target running vehicle based on a preset normalization function in the spatial attention module; and determining characteristic fusion coding parameters of the target driving vehicle based on a preset splicing function in the driving intention learning module according to the first state coding sequence and the interaction characteristic sequence.
The spatial attention module may, for example, use the first state coding sequence h_tar and the second state coding sequence h_nbrs to determine the attention weight of the target running vehicle for each target candidate vehicle. The attention weight of each target candidate vehicle is determined as follows:

α_nbrs = softmax(h_tar · h_nbrs);

wherein α_nbrs = [α_1, α_2, …, α_N] ∈ R^(N×1) stores the attention weights of the target running vehicle for all target candidate vehicles; softmax() is the normalization function preset in the spatial attention module; · denotes the dot product operation between vectors.

Illustratively, the attention weights are assigned to the corresponding target candidate vehicles, finally obtaining the interaction feature sequence att_nbrs ∈ R^(1×32) of the target running vehicle:

att_nbrs = α_nbrs × h_nbrs;

Illustratively, the driving intention learning module uses a preset splicing function to splice the first state coding sequence h_tar and the interaction feature sequence att_nbrs, obtaining the feature fusion coding parameter H between the first state coding sequence and the interaction feature sequence of the target running vehicle. H may be determined as follows:

H = concat(h_tar, att_nbrs; 2);

wherein H ∈ R^(1×64), and concat(h_tar, att_nbrs; 2) denotes splicing h_tar and att_nbrs in the second dimension.
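The attention and splicing steps above can be sketched in a few lines of numpy, assuming dot-product scores and the stated dimensions (N candidate vehicles, 32 hidden features); the random states merely stand in for real encoder outputs.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
N, d = 5, 32
h_tar = rng.normal(size=(1, d))   # first state coding sequence (target vehicle)
h_nbrs = rng.normal(size=(N, d))  # second state coding sequence, one row per candidate vehicle

# alpha_nbrs = softmax(h_tar · h_nbrs): one attention weight per candidate vehicle
alpha = softmax(h_nbrs @ h_tar.T, axis=0)      # shape (N, 1), sums to 1
att_nbrs = alpha.T @ h_nbrs                    # interaction feature sequence, shape (1, 32)
H = np.concatenate([h_tar, att_nbrs], axis=1)  # feature fusion coding parameter, shape (1, 64)
```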
S270, determining time domain fusion coding parameters of the target driving vehicle based on the driving intention learning module according to the feature fusion coding parameters and the real track coding sequence.
The driving intention learning module splices the feature fusion coding parameter H and the real track coding sequence F_tar to obtain the time domain fusion coding parameter T of the target running vehicle. T may be determined as follows:

T = concat(H, F_tar; 2);

wherein T ∈ R^(1×192), and concat(H, F_tar; 2) denotes splicing H and F_tar in the second dimension.
S280, determining driving intention vector parameters based on the driving intention learning module according to the time domain fusion coding parameters.
The driving intention learning module can be provided with at least two full-connection layers, the time domain fusion coding parameters and the characteristic fusion coding parameters are respectively used as the input of the corresponding full-connection layers, the weight parameters of the corresponding full-connection layers are trained, the output result of the full-connection layers is obtained, and the output result is determined to be the driving intention vector parameters.
In an alternative embodiment, determining driving intent vector parameters based on the driving intent learning module according to the temporal fusion encoding parameters includes: determining a first preset driving intention probability based on a first preset network structure in the driving intention learning module according to the time domain fusion coding parameters; determining a first target driving intention distribution based on a classification distribution function preset in the driving intention learning module according to a first preset driving intention probability; vector sampling processing is carried out on the first target driving intention distribution under the preset dimension, and driving intention vector parameters of the preset dimension are obtained.
The first preset network structure may be a fully connected layer network structure. For example, the time domain fusion coding parameter may be input into the fully connected layer network structure to obtain the first preset driving intention probability. The first preset driving intention probability q is determined as follows:

q = φ(T; W_q, b_q);

wherein q = [π_1, π_2, …, π_K] ∈ R^(1×K) is a probability vector, each element π_k (k ∈ {1, 2, …, K}) of which is a probability value; K is a set hyperparameter (here K = 15) corresponding to K driving intentions; φ is the embedding function of the fully connected layer network structure; W_q is the weight parameter of the fully connected layer network structure, and b_q is the bias parameter of the fully connected layer.
Illustratively, the first target driving intent distribution is determined based on a classification distribution function preset in the driving intent learning module according to a first preset driving intent probability q. The classification distribution function may be preset by a skilled person, for example, the classification distribution function may be OneHot (one-hot encoding).
Specifically, a OneHot classification distribution Q(Z_q | T) with q as the probability can be created, i.e., the first target driving intention distribution, denoted Z_q ~ Cat(π_1, π_2, …, π_K), wherein Cat() is a preset classification distribution function:

Q(Z_q = e_d | T) = π_d, d ∈ {1, 2, …, K};

wherein e_d is the d-th row of the K-dimensional identity matrix I_(K×K).
Illustratively, vector sampling processing in a preset dimension is performed on the first target driving intention distribution Q(Z_q | T) to obtain the driving intention vector parameter of the preset dimension. Specifically, the preset dimension is set as the K-dimensional identity matrix: a vector ẑ_q is sampled from Q(Z_q | T), and the sampled vector ẑ_q is taken as the driving intention vector parameter; wherein I_(K×K) is the identity matrix of the preset dimension K.
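The categorical one-hot sampling described above can be sketched as follows; the softmax over random logits is only a stand-in for the fully connected layer output q.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 15  # number of driving intentions (set hyperparameter)
logits = rng.normal(size=K)
q = np.exp(logits - logits.max())
q /= q.sum()  # first preset driving intention probability, sums to 1

# Sample from the OneHot classification distribution Q(Z_q | T): pick intention
# index k with probability q[k], then take the k-th row of I_{K×K}.
k = rng.choice(K, p=q)
z_q = np.eye(K)[k]  # driving intention vector parameter, one-hot of shape (K,)
```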
S290, determining driving intention output parameters based on an output decoding module in a pre-constructed track prediction network model according to the driving intention vector parameters and the feature fusion coding parameters.
Illustratively, the driving intention vector parameter ẑ_q and the feature fusion coding parameter H may be spliced to obtain the target fusion feature among the historical state features, the interaction features, and the driving intention vectors of the target running vehicle. The target fusion feature D is determined as follows:

D = concat(H, Ẑ_q; 2);

wherein D ∈ R^(K×(K+64)), and concat(H, Ẑ_q; 2) denotes splicing H and Ẑ_q in the second dimension.
Wherein the output decoding module may comprise at least one decoder for performing a decoding operation on input parameters such as the target fusion feature, the decoder consisting of a gated recurrent unit GRU. For example, the output decoding module may decode the target fusion feature D as follows:

h_D = Decoder(D; W_dec, b_dec);

wherein h_D is the decoded feature; Decoder is the decoder for decoding the target fusion feature D; the decoder consists of a gated recurrent unit GRU, the GRU in the decoder has K+64 input features and 32 hidden features, the sequence length is the time length T_pred of the preset future observation period, and the number of layers is 1; W_dec is the weight parameter of the decoder, and b_dec is the bias parameter of the decoder.
Illustratively, suppose the predicted trajectory gives the lateral and longitudinal position coordinates (x̂_l, ŷ_l), l ∈ {1, 2, …, T_pred}, of the target running vehicle at each prediction time, obeying a binary Gaussian mixture distribution consisting of K independent binary Gaussian distributions. The decoded feature h_D is passed through three fully connected layers to obtain three output parameters, namely the driving intention output parameters, denoted μ_D, σ_D, and cor_D respectively. The driving intention output parameters μ_D, σ_D, and cor_D are determined as follows:

μ_D = φ_μ(h_D; W_μ, b_μ);

σ_D = φ_σ(h_D; W_σ, b_σ);

cor_D = φ_cor(h_D; W_cor, b_cor);

wherein μ_D represents the mean value sequences of the X and Y coordinates of the K independent binary Gaussian distributions of the predicted trajectory; φ_μ represents the embedding function of the corresponding fully connected layer, W_μ its weight parameter, and b_μ its bias parameter.

Wherein σ_D represents the variance sequences of the abscissa and ordinate of the K independent binary Gaussian distributions of the predicted trajectory; φ_σ represents the embedding function of the corresponding fully connected layer, W_σ its weight parameter, and b_σ its bias parameter.

Wherein cor_D represents the correlation coefficient sequences between the abscissa and ordinate of the K independent binary Gaussian distributions of the predicted trajectory; φ_cor represents the embedding function of the corresponding fully connected layer, W_cor its weight parameter, and b_cor its bias parameter.
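The three parallel fully connected heads can be sketched as below. The softplus and tanh activations are illustrative assumptions (the patent only names three fully connected layers) chosen to keep the variances positive and the correlation coefficients inside (−1, 1); the weight shapes follow the stated dimensions.

```python
import numpy as np

rng = np.random.default_rng(3)
K, d_h, T_pred = 15, 32, 25
h_D = rng.normal(size=(K, d_h))  # decoded feature, one row per candidate intention

def linear(x, W, b):
    """A single fully connected (affine) layer."""
    return x @ W + b

# Illustrative parameter shapes: mu and sigma cover X and Y per prediction step.
W_mu,  b_mu  = rng.normal(0, 0.1, (d_h, 2 * T_pred)), np.zeros(2 * T_pred)
W_sig, b_sig = rng.normal(0, 0.1, (d_h, 2 * T_pred)), np.zeros(2 * T_pred)
W_cor, b_cor = rng.normal(0, 0.1, (d_h, T_pred)),     np.zeros(T_pred)

mu_D  = linear(h_D, W_mu, b_mu)                      # mean sequences of X, Y coordinates
sig_D = np.log1p(np.exp(linear(h_D, W_sig, b_sig)))  # softplus keeps variances positive
cor_D = np.tanh(linear(h_D, W_cor, b_cor))           # correlation coefficients in (-1, 1)
```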
And S2100, respectively adjusting weight parameters in the input encoding module, the spatial attention module, the driving intention learning module and the output decoding module according to the driving intention output parameters until a preset training ending condition is met, so as to obtain a target vehicle track prediction model.
As shown in fig. 2B, which is a schematic diagram of the model structure of the track prediction network model, corresponding encoders are respectively arranged in the input encoding module, each encoder consisting of a gated recurrent unit GRU. The first historical state sequence S_tar, the second historical state sequence Nbrs, and the real track sequence Truth_tar of the target running vehicle are respectively input into the GRU of the corresponding encoder, obtaining the first state coding sequence h_tar after encoding S_tar, the second state coding sequence h_nbrs after encoding Nbrs, and the real track coding sequence F_tar after encoding Truth_tar. The spatial attention module performs feature fusion on the first state coding sequence h_tar and the second state coding sequence h_nbrs to obtain the interaction feature sequence att_nbrs. The driving intention learning module splices the interaction feature sequence att_nbrs and the first state coding sequence h_tar to obtain the feature fusion coding parameter H, and splices H and the real track coding sequence F_tar to obtain the time domain fusion coding parameter T. Through the preset fully connected layer network structures in the driving intention learning module, the feature fusion coding parameter H and the time domain fusion coding parameter T are respectively input into the corresponding fully connected layers to obtain the corresponding preset driving intention probabilities p and q; the preset driving intention probability q is sampled, the sampled driving intention vector parameter ẑ_q is spliced with the feature fusion coding parameter H and input into the output decoding module, and after the decoding operation, the output decoding module inputs the decoded feature h_D into the three fully connected layers respectively, obtaining the driving intention output parameters μ_D, σ_D, and cor_D respectively output by the three fully connected layers.
In the model training process of the track prediction network model, the weight parameters and the bias parameters in the input encoding module, the spatial attention module, the driving intention learning module and the output decoding module are continuously updated and iterated during training so as to continuously adjust and optimize the model, until the set training ending condition is met and the training of the track prediction network model is ended. The training ending condition may be reaching a set iteration number threshold. In order to further improve the training accuracy of the track prediction model, the training ending condition can be further optimized.
In an alternative embodiment, respectively adjusting the weight parameters in the input encoding module, the spatial attention module, the driving intention learning module and the output decoding module according to the driving intention output parameters until a preset training ending condition is met, so as to obtain the target vehicle track prediction model, includes: determining a second preset driving intention probability based on a second preset network structure in the driving intention learning module according to the feature fusion coding parameters; determining a second target driving intention distribution based on a classification distribution function preset in the driving intention learning module according to the second preset driving intention probability; determining a divergence between the first target driving intention distribution and the second target driving intention distribution; determining a target loss value according to the divergence, the driving intention output parameters and the first preset driving intention probability; and respectively adjusting the weight parameters in the input encoding module, the spatial attention module, the driving intention learning module and the output decoding module according to the driving intention output parameters until the target loss value meets a preset loss value judgment condition, so as to obtain the target vehicle track prediction model.
The second preset network structure may be a full connection layer network structure. The feature fusion coding parameters are input to a full-connection layer network structure to obtain a second preset driving intention probability.
The second preset driving intention probability p may be determined as follows:

p = φ(H; W_p, b_p);

wherein H is the feature fusion coding parameter; p = [π'_1, π'_2, …, π'_K] ∈ R^(1×K) is a probability vector, each element π'_k (k ∈ {1, 2, …, K}) of which is a probability value; K is the set hyperparameter (here K = 15) corresponding to the K driving intentions; φ is the embedding function of the fully connected layer network structure; W_p is the weight parameter of the fully connected layer network structure, and b_p is the bias parameter of the fully connected layer.
Illustratively, the second target driving intention distribution is determined based on a classification distribution function preset in the driving intention learning module according to the second preset driving intention probability p. The classification distribution function may be preset by a skilled person, for example, the classification distribution function may be OneHot (one-hot encoding).
Specifically, a OneHot classification distribution P(Z_p | H) with p as the probability is created, i.e., the second target driving intention distribution, denoted Z_p ~ Cat(π'_1, π'_2, …, π'_K), wherein Cat() is a preset classification distribution function:

P(Z_p = e_c | H) = π'_c, c ∈ {1, 2, …, K};

wherein e_c is the c-th row of the K-dimensional identity matrix I_(K×K).
For example, the divergence between the first target driving intention distribution and the second target driving intention distribution is determined, wherein the divergence may be determined as follows:

M = KL(Q(Z_q | T) || P(Z_p | H));

wherein KL is the KL divergence (Kullback-Leibler divergence) function used to calculate the divergence M measuring the difference between Q(Z_q | T) and P(Z_p | H).
Illustratively, the target loss value is determined according to the divergence, the driving intention output parameters, and the first preset driving intention probability. The specific target loss value Loss is determined as follows:

Loss = M − logP(Truth_tar | μ_D, σ_D, cor_D, q);

wherein M is the KL divergence, and −logP(Truth_tar | μ_D, σ_D, cor_D, q) represents the negative log likelihood function between the future real trajectory and the predicted trajectory of the target running vehicle. Wherein Truth_tar represents the real track sequence of the target running vehicle; μ_D, σ_D, cor_D are the driving intention output parameters, and q is the first preset driving intention probability.
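The KL term M in the loss above can be computed directly for two categorical intention distributions. A minimal sketch, where the three-intention probabilities are arbitrary illustrative values rather than network outputs:

```python
import numpy as np

def kl_categorical(q, p, eps=1e-12):
    """KL(Q || P) for two categorical distributions over the same K intentions."""
    q = np.clip(q, eps, 1.0)
    p = np.clip(p, eps, 1.0)
    return float(np.sum(q * (np.log(q) - np.log(p))))

q = np.array([0.7, 0.2, 0.1])  # intention distribution from the time-domain branch (Q)
p = np.array([0.5, 0.3, 0.2])  # intention distribution from the feature-fusion branch (P)
M = kl_categorical(q, p)       # non-negative; zero only when the distributions match
```

Minimizing M pushes the distribution inferred without future information (P) toward the one inferred with it (Q), which is what lets the test-time network judge intention from history alone.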
Illustratively, whether the preset loss value judgment condition is satisfied is judged according to the target loss value, so as to obtain the target vehicle track prediction model. For example, if the target loss value reaches the set loss value threshold, the target loss value may be considered to satisfy the preset loss value judgment condition, and the iterative model training is terminated to obtain the target vehicle track prediction model. Optionally, in the process of training the track prediction model, an Adam optimizer may be used to train the weight parameters and the bias parameters in the model, the learning rate may be set to 0.0001, and when the target loss value reaches the minimum value (or the set loss value threshold), the corresponding weight parameters and bias parameters are saved, so as to obtain the target vehicle track prediction model.
According to the technical scheme, the first historical state sequence, the second historical state sequence and the real track sequence are input into the input encoding module for sequence coding processing, so as to obtain the first state coding sequence, the second state coding sequence and the real track coding sequence; the feature fusion coding parameters of the target running vehicle are determined based on the spatial attention module and the driving intention learning module according to the first state coding sequence and the second state coding sequence; the time domain fusion coding parameters of the target running vehicle are determined based on the driving intention learning module according to the feature fusion coding parameters and the real track coding sequence; the driving intention vector parameters are determined based on the driving intention learning module according to the time domain fusion coding parameters; the driving intention output parameters are determined based on the output decoding module according to the driving intention vector parameters and the feature fusion coding parameters; and the weight parameters in the input encoding module, the spatial attention module, the driving intention learning module and the output decoding module are respectively adjusted according to the driving intention output parameters until the preset training ending condition is met, so as to obtain the target vehicle track prediction model. The technical scheme improves the model training accuracy of the track prediction model, so that a more accurate target vehicle track prediction model can be obtained, thereby improving the reliability and accuracy of subsequent vehicle track prediction using the target vehicle track prediction model.
It should be noted that, to further refine the target vehicle track prediction model, a network model for performing the test may also be constructed in advance. A schematic diagram of the model structure of a trace-prediction test network model is shown in fig. 2C. The track prediction test network model comprises four module structures, namely an input coding module, a spatial attention module, a driving intention judging module and an output decoding module.
During testing, the input encoding module uses the encoders to encode the first historical state sequence S_tar and the second historical state sequence Nbrs respectively, obtaining the first state coding sequence h_tar ∈ R^(1×32) and the second state coding sequence h_nbrs ∈ R^(N×32).
During testing, the spatial attention module takes the first state coding sequence h_tar and the second state coding sequence h_nbrs as module input parameters; the specific operation steps are the same as those of the spatial attention module in the training network model and are not repeated in this embodiment, finally obtaining the interaction feature sequence att_nbrs ∈ R^(1×32).
During testing, the driving intention judging module uses the first state coding sequence h_tar and the interaction feature sequence att_nbrs to judge the driving intention of the target running vehicle. The first state coding sequence h_tar and the interaction feature sequence att_nbrs are spliced to obtain the fusion coding parameter H ∈ R^(1×64); H is passed through a fully connected layer to obtain the driving intention probability p, and a OneHot classification distribution P(Z_p | H) is created, denoted Z_p ~ Cat(π'_1, π'_2, …, π'_K), wherein Cat() is a preset classification distribution function:

P(Z_p = e_c | H) = π'_c, c ∈ {1, 2, …, K};

According to the maximum probability value π'_max in p, the corresponding vector ẑ_index is sampled at P(Z_p | H); recording the subscript of the maximum probability value π'_max as index, the vector ẑ_index corresponds to the most likely driving intention of the target running vehicle. H and ẑ_index are spliced to obtain the fusion characteristic parameter G:

G = concat(H, ẑ_index; 2);

wherein G ∈ R^(1×(K+64)), and concat(H, ẑ_index; 2) denotes splicing H and ẑ_index in the second dimension.
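The test-time intention selection and splicing can be sketched as follows; the Dirichlet-sampled probabilities and random H are illustrative stand-ins for the fully connected layer output p and the fusion coding parameter.

```python
import numpy as np

rng = np.random.default_rng(4)
K = 15
H = rng.normal(size=(1, 64))   # fusion coding parameter from h_tar and att_nbrs
p = rng.dirichlet(np.ones(K))  # intention probabilities from the fully connected layer

index = int(np.argmax(p))               # subscript of the maximum probability value
z_hat = np.eye(K)[index:index + 1]      # (1, K) one-hot: most likely driving intention
G = np.concatenate([H, z_hat], axis=1)  # fusion characteristic parameter, (1, K + 64)
```

At test time the argmax replaces the random sampling used during training, so the decoder is fed only the single most likely intention.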
During testing, the output decoding module decodes the fusion characteristic parameter G with the decoder to obtain the decoded characteristic parameter h_G:

h_G = Decoder(G; W_dec, b_dec);

wherein Decoder is the decoder, which consists of a gated recurrent unit GRU; the GRU in the decoder has K+64 input features and 32 hidden features, the sequence length is T_pred, and the number of layers is 1; W_dec is the weight parameter of the decoder, and b_dec is the bias parameter of the decoder.
The characteristic parameter h_G is passed through a fully connected layer to obtain μ_G:

μ_G = φ(h_G; W_G, b_G);

wherein μ_G is the sequence of mean coordinate values in the lateral and longitudinal directions of the predicted trajectory obeying the binary Gaussian mixture distribution; φ is the embedding function of the fully connected layer, W_G is the weight parameter of the fully connected layer, and b_G is the bias parameter of the fully connected layer.
When the vehicle running track is predicted by using the target vehicle track prediction model, the weight parameters and the bias parameters obtained at the end of model training are imported into the test network, the first historical state sequence S_tar and the second historical state sequence Nbrs are input, and the network model can output the mean value sequence μ_G of the horizontal and vertical coordinates of the track along which the target running vehicle is most likely to run in the future.
Example III
Fig. 3 is a schematic structural diagram of a training device for a vehicle track prediction model according to a third embodiment of the present invention. The device for training the vehicle track prediction model provided by the embodiment of the invention can be suitable for the situations of training the vehicle track prediction model and predicting the vehicle track by adopting the model obtained by training, and the device for training the vehicle track prediction model can be realized in a hardware and/or software mode, as shown in fig. 3, and specifically comprises the following steps: a state data acquisition module 301, a target vehicle determination module 302, a candidate vehicle determination module 303, a state sequence determination module 304, and a predictive model training module 305. Wherein,,
A state data acquisition module 301, configured to acquire vehicle state data of each traveling vehicle in a preset traveling area in a preset time period;
a target vehicle determining module 302, configured to select any one of the running vehicles as a target running vehicle; wherein the other running vehicles except the target running vehicle are reference running vehicles;
a candidate vehicle determination module 303 for determining a candidate vehicle that is adjacent to the target vehicle from among the reference vehicles according to a position distance between each of the reference vehicles and the target vehicle;
a state sequence determining module 304, configured to determine, according to vehicle state data corresponding to the target running vehicle and the candidate running vehicle, a real track sequence of the target running vehicle in a preset future observation period, and determine a first historical state sequence and a second historical state sequence of the target running vehicle in a preset historical observation period;
the prediction model training module 305 is configured to input the first historical state sequence, the second historical state sequence, and the real track sequence corresponding to the target driving vehicle into a track prediction network model that is built in advance, and perform model training on the track prediction network model to obtain a target vehicle track prediction model, which is used for predicting a vehicle driving track.
According to the technical scheme of this embodiment, any running vehicle is selected from the running vehicles as the target running vehicle, the remaining running vehicles are taken as reference running vehicles, and the candidate running vehicles adjacent to the target running vehicle are determined according to the position distance between each reference running vehicle and the target running vehicle. Then, according to the vehicle state data corresponding to the target running vehicle and the candidate running vehicles respectively, the real track sequence of the target running vehicle is determined, together with its first historical state sequence and second historical state sequence. In this way, a sample training set for training the vehicle track prediction model is constructed accurately, the interaction among vehicles is comprehensively reflected in the sample training set, and the low training accuracy and overly uniform track results that arise when the sample training set is too homogeneous are avoided. Finally, by inputting the first historical state sequence, the second historical state sequence and the real track sequence into a track prediction network model constructed in advance and performing model training on the model, the target vehicle track prediction model is obtained, which improves the training accuracy of the track prediction network model and the reliability and accuracy of subsequent vehicle track prediction.
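The candidate-vehicle selection described above can be sketched as a plain distance threshold over vehicle positions. The 30 m radius and the function name `select_candidates` are illustrative assumptions; an embodiment could equally use a fixed grid of adjacent lane positions.

```python
import numpy as np

def select_candidates(positions, target_idx, radius=30.0):
    """Return indices of reference vehicles within `radius` metres of the
    target vehicle; all vehicles other than the target are reference vehicles."""
    target = positions[target_idx]
    dists = np.linalg.norm(positions - target, axis=1)  # Euclidean position distance
    mask = dists <= radius
    mask[target_idx] = False          # the target itself is not a candidate
    return np.flatnonzero(mask)

positions = np.array([[0.0, 0.0],     # target running vehicle
                      [12.0, 3.5],    # nearby neighbour
                      [80.0, 0.0],    # too far away
                      [-25.0, -3.5]]) # nearby neighbour behind
print(select_candidates(positions, target_idx=0))  # [1 3]
```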
Optionally, the trajectory prediction network model includes an input encoding module, a spatial attention module connected to an output end of the input encoding module, a driving intention learning module connected to an output end of the spatial attention module, and an output decoding module connected to an output end of the driving intention learning module.
Optionally, the prediction model training module 305 includes:
the coding sequence determining unit is used for inputting the first historical state sequence, the second historical state sequence and the real track sequence into the input coding module to perform sequence coding processing to obtain a first state coding sequence, a second state coding sequence and a real track coding sequence;
a feature coding parameter determining unit, configured to determine a feature fusion coding parameter of the target driving vehicle based on the spatial attention module and the driving intention learning module according to the first state coding sequence and the second state coding sequence;
the time domain coding parameter determining unit is used for determining the time domain fusion coding parameter of the target driving vehicle based on the driving intention learning module according to the characteristic fusion coding parameter and the real track coding sequence;
The intention vector parameter determining unit is used for determining driving intention vector parameters based on the driving intention learning module according to the time domain fusion coding parameters;
an intention output parameter determining unit for determining a driving intention output parameter based on the output decoding module according to the driving intention vector parameter and the feature fusion encoding parameter;
and the weight parameter adjusting unit is used for respectively adjusting the weight parameters in the input coding module, the spatial attention module, the driving intention learning module and the output decoding module according to the driving intention output parameters until a preset training ending condition is met, so as to obtain a target vehicle track prediction model.
Optionally, the feature coding parameter determining unit includes:
the interactive feature sequence determining subunit is used for determining the interactive feature sequence of the target running vehicle based on a preset normalization function in the spatial attention module according to the first state coding sequence and the second state coding sequence;
and the characteristic coding parameter determining subunit is used for determining characteristic fusion coding parameters of the target driving vehicle based on a preset splicing function in the driving intention learning module according to the first state coding sequence and the interaction characteristic sequence.
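The two subunits above, softmax normalisation over the neighbour encodings followed by splicing the resulting interaction feature onto the target encoding, can be sketched as one single-query attention step. The scaled dot-product score and all names here are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention_fuse(target_enc, nbr_encs):
    """Weight neighbour encodings with a softmax (the preset normalisation
    function), then concatenate the interaction feature onto the target
    encoding (the preset splicing function)."""
    scores = nbr_encs @ target_enc / np.sqrt(target_enc.size)  # attention scores
    weights = softmax(scores)                                  # normalised weights
    interaction = weights @ nbr_encs                           # interaction feature
    return np.concatenate([target_enc, interaction])           # fused encoding

rng = np.random.default_rng(1)
d = 32
target_enc = rng.standard_normal(d)       # first state coding (one target vehicle)
nbr_encs = rng.standard_normal((5, d))    # second state codings (five neighbours)
fused = spatial_attention_fuse(target_enc, nbr_encs)
print(fused.shape)  # (64,)
```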
Optionally, the intention vector parameter determining unit includes:
the first intention probability determination subunit is used for determining a first preset driving intention probability based on a first preset network structure in the driving intention learning module according to the time domain fusion coding parameter;
a first intention distribution determining subunit, configured to determine a first target driving intention distribution based on a classification distribution function preset in the driving intention learning module according to the first preset driving intention probability;
and the intention vector parameter determination subunit is used for carrying out vector sampling processing on the first target driving intention distribution in a preset dimension to obtain driving intention vector parameters in the preset dimension.
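A minimal sketch of the intention-sampling step, assuming the first preset network structure emits one logit per intention class; the three-class layout (keep lane, change left, change right) and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_intention(logits):
    """Turn network logits into a categorical (classification) distribution
    and draw a one-hot driving-intention vector from it."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                 # preset driving-intention probability
    k = rng.choice(len(probs), p=probs)  # sample one intention class
    one_hot = np.zeros(len(probs))
    one_hot[k] = 1.0                     # intention vector in the preset dimension
    return probs, one_hot

logits = np.array([2.0, 0.5, -1.0])     # e.g. keep lane / change left / change right
probs, z = sample_intention(logits)
print(z.sum())  # 1.0
```

In training, a differentiable relaxation such as Gumbel-softmax is commonly substituted for the hard sample so that gradients can flow through the sampling step; the patent text does not specify which variant is used.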
Optionally, the weight parameter adjusting unit includes:
a second intention probability determining subunit, configured to determine a second preset driving intention probability based on a second preset network structure in the driving intention learning module according to the feature fusion encoding parameter;
a second intention distribution determining subunit, configured to determine a second target driving intention distribution based on a classification distribution function preset in the driving intention learning module according to the second preset driving intention probability;
A dispersion determining subunit configured to determine a dispersion between the first target driving intention distribution and the second target driving intention distribution;
a target loss value determining subunit, configured to determine a target loss value according to the dispersion, the driving intention output parameter, and the first preset driving intention probability;
and the weight parameter adjustment subunit is used for respectively adjusting the weight parameters in the input coding module, the spatial attention module, the driving intention learning module and the output decoding module according to the driving intention output parameters until the target loss value meets the preset loss value judgment condition, so as to obtain a target vehicle track prediction model.
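A sketch of the dispersion and target-loss computation, assuming the dispersion is the KL divergence between the two categorical intention distributions and the driving-intention output-parameter term is a mean-squared trajectory error; the weighting `beta` and all names are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Dispersion (KL divergence) between two categorical intention
    distributions p and q."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def target_loss(pred_traj, true_traj, p_posterior, p_prior, beta=1.0):
    """Illustrative target loss: trajectory reconstruction error plus the
    weighted dispersion between the two intention distributions."""
    recon = np.mean((pred_traj - true_traj) ** 2)  # output-parameter term
    return recon + beta * kl_divergence(p_posterior, p_prior)

p_post = [0.7, 0.2, 0.1]    # first target driving intention distribution
p_prior = [0.5, 0.3, 0.2]   # second target driving intention distribution
pred = np.zeros((25, 2))
true = np.ones((25, 2)) * 0.1
print(round(target_loss(pred, true, p_post, p_prior), 4))
```

Training then iterates: compute the loss, back-propagate, and adjust the weights of all four modules until the target loss value satisfies the preset judgment condition.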
Optionally, the state sequence determining module 304 includes:
a first state sequence determining unit, configured to construct a first historical state sequence of the target running vehicle in a preset historical observation time period according to the vehicle state data of the target running vehicle;
a target candidate vehicle selecting unit configured to select a target candidate vehicle that matches the preset historical observation period from among candidate running vehicles that are adjacent to the target running vehicle;
A second state sequence determining unit, configured to construct a historical state sequence of the target candidate vehicle in a preset historical observation time period according to vehicle state data of the target candidate vehicle, and take the historical state sequence of the target candidate vehicle as a second historical state sequence of the target running vehicle;
the real track sequence construction unit is used for constructing a real track sequence of the target running vehicle in a preset future observation time period according to the vehicle state data of the target running vehicle.
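The three sequence-construction units above amount to slicing each vehicle's state track at a reference time into a history window (the model input) and a future window (the real track label). The window lengths (15 past steps, 25 future steps) and the function name are illustrative assumptions.

```python
def build_sequences(track, t_now, hist_len=15, fut_len=25):
    """Slice one vehicle's state track into a history window (model input)
    and a future window (the real trajectory sequence used as the label)."""
    history = track[t_now - hist_len:t_now]
    future = track[t_now:t_now + fut_len]
    return history, future

# Toy (x, y) samples for a vehicle moving at constant speed over 60 steps.
track = [(0.5 * t, 0.0) for t in range(60)]
hist, fut = build_sequences(track, t_now=30)
print(len(hist), len(fut))  # 15 25
```

The same slicing is applied to the target running vehicle (first historical state sequence and real track sequence) and to each target candidate vehicle whose data matches the historical observation period (second historical state sequence).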
The vehicle track prediction model training device provided by the embodiment of the invention can execute the vehicle track prediction model training method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 4 shows a schematic diagram of an electronic device 40 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 40 includes at least one processor 41, and a memory communicatively connected to the at least one processor 41, such as a Read Only Memory (ROM) 42, a Random Access Memory (RAM) 43, etc., in which the memory stores a computer program executable by the at least one processor, and the processor 41 may perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 42 or the computer program loaded from the storage unit 48 into the Random Access Memory (RAM) 43. In the RAM 43, various programs and data required for the operation of the electronic device 40 may also be stored. The processor 41, the ROM 42 and the RAM 43 are connected to each other via a bus 44. An input/output (I/O) interface 45 is also connected to bus 44.
Various components in electronic device 40 are connected to I/O interface 45, including: an input unit 46 such as a keyboard, a mouse, etc.; an output unit 47 such as various types of displays, speakers, and the like; a storage unit 48 such as a magnetic disk, an optical disk, or the like; and a communication unit 49 such as a network card, modem, wireless communication transceiver, etc. The communication unit 49 allows the electronic device 40 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 41 may be various general and/or special purpose processing components with processing and computing capabilities. Some examples of processor 41 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 41 performs the various methods and processes described above, such as the vehicle trajectory prediction model training method.
In some embodiments, the vehicle trajectory prediction model training method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 48. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 40 via the ROM 42 and/or the communication unit 49. When the computer program is loaded into RAM 43 and executed by processor 41, one or more steps of the vehicle trajectory prediction model training method described above may be performed. Alternatively, in other embodiments, processor 41 may be configured to perform the vehicle trajectory prediction model training method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service expansibility found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A vehicle trajectory prediction model training method, comprising:
acquiring vehicle state data of each traveling vehicle in a preset traveling area in a preset time period;
selecting any one of the running vehicles as a target running vehicle; wherein the other running vehicles except the target running vehicle are reference running vehicles;
determining a candidate running vehicle adjacent to the target running vehicle from among the reference running vehicles according to a position distance between each of the reference running vehicles and the target running vehicle;
According to the vehicle state data respectively corresponding to the target running vehicle and the candidate running vehicle, determining a real track sequence of the target running vehicle in a preset future observation period, and determining a first historical state sequence and a second historical state sequence of the target running vehicle in a preset historical observation period;
inputting the first historical state sequence, the second historical state sequence and the real track sequence corresponding to the target running vehicle into a track prediction network model constructed in advance, and performing model training on the track prediction network model to obtain a target vehicle track prediction model for predicting the running track of the vehicle.
2. The method of claim 1, wherein the trajectory prediction network model comprises an input encoding module, a spatial attention module coupled to an output of the input encoding module, a driving intent learning module coupled to an output of the spatial attention module, and an output decoding module coupled to an output of the driving intent learning module.
3. The method according to claim 2, wherein the inputting the first historical state sequence, the second historical state sequence and the real track sequence corresponding to the target traveling vehicle into a track prediction network model constructed in advance, and performing model training on the track prediction network model to obtain a target vehicle track prediction model includes:
Inputting the first historical state sequence, the second historical state sequence and the real track sequence into the input coding module for sequence coding processing to obtain a first state coding sequence, a second state coding sequence and a real track coding sequence;
determining feature fusion coding parameters of the target driving vehicle based on the spatial attention module and the driving intention learning module according to the first state coding sequence and the second state coding sequence;
determining time domain fusion coding parameters of the target driving vehicle based on the driving intention learning module according to the characteristic fusion coding parameters and the real track coding sequence;
determining driving intention vector parameters based on the driving intention learning module according to the time domain fusion coding parameters;
determining driving intention output parameters based on the output decoding module according to the driving intention vector parameters and the characteristic fusion coding parameters;
and respectively adjusting weight parameters in the input coding module, the spatial attention module, the driving intention learning module and the output decoding module according to the driving intention output parameters until a preset training ending condition is met, so as to obtain a target vehicle track prediction model.
4. A method according to claim 3, wherein said determining feature fusion encoding parameters of the target traveling vehicle based on the spatial attention module and the driving intention learning module according to the first state encoding sequence and the second state encoding sequence comprises:
according to the first state coding sequence and the second state coding sequence, determining an interaction characteristic sequence of the target running vehicle based on a preset normalization function in the spatial attention module;
and determining feature fusion coding parameters of the target driving vehicle based on a preset splicing function in the driving intention learning module according to the first state coding sequence and the interaction feature sequence.
5. A method according to claim 3, wherein said determining driving intent vector parameters based on said driving intent learning module according to said time-domain fusion coding parameters comprises:
determining a first preset driving intention probability based on a first preset network structure in the driving intention learning module according to the time domain fusion coding parameters;
determining a first target driving intention distribution based on a classification distribution function preset in the driving intention learning module according to the first preset driving intention probability;
Vector sampling processing is conducted on the first target driving intention distribution under a preset dimension, and driving intention vector parameters of the preset dimension are obtained.
6. The method according to claim 5, wherein the adjusting the weight parameters in the input encoding module, the spatial attention module, the driving intention learning module, and the output encoding module according to the driving intention output parameters until the preset training end condition is satisfied, respectively, to obtain a target vehicle track prediction model includes:
determining a second preset driving intention probability based on a second preset network structure in the driving intention learning module according to the characteristic fusion coding parameters;
determining a second target driving intention distribution based on a classification distribution function preset in the driving intention learning module according to the second preset driving intention probability;
determining a dispersion between the first target driving intent distribution and the second target driving intent distribution;
determining a target loss value according to the dispersion, the driving intention output parameter and the first preset driving intention probability;
and respectively adjusting weight parameters in the input coding module, the spatial attention module, the driving intention learning module and the output decoding module according to the driving intention output parameters until the target loss value meets the preset loss value judgment condition, so as to obtain a target vehicle track prediction model.
7. The method according to any one of claims 1 to 6, wherein the determining a real track sequence of the target traveling vehicle under a preset future observation period and determining a first history state sequence and a second history state sequence of the target traveling vehicle under a preset history observation period according to vehicle state data corresponding to the target traveling vehicle and the candidate traveling vehicle, respectively, includes:
according to the vehicle state data of the target running vehicle, constructing a first historical state sequence of the target running vehicle in a preset historical observation time period;
selecting a target candidate vehicle matched with the preset historical observation time period from candidate running vehicles adjacent to the target running vehicle;
according to the vehicle state data of the target candidate vehicle, constructing a history state sequence of the target candidate vehicle in a preset history observation time period, and taking the history state sequence of the target candidate vehicle as a second history state sequence of the target running vehicle;
and constructing a real track sequence of the target running vehicle in a preset future observation time period according to the vehicle state data of the target running vehicle.
8. A vehicle trajectory prediction model training device, characterized by comprising:
the state data acquisition module is used for acquiring vehicle state data of each running vehicle in a preset running area in a preset time period;
the target vehicle determining module is used for selecting any running vehicle from the running vehicles as a target running vehicle; wherein the other running vehicles except the target running vehicle are reference running vehicles;
a candidate vehicle determination module configured to determine a candidate vehicle that is adjacent to the target vehicle from among the reference vehicles according to a position distance between each of the reference vehicles and the target vehicle;
the state sequence determining module is used for determining a real track sequence of the target running vehicle in a preset future observation period according to vehicle state data corresponding to the target running vehicle and the candidate running vehicle respectively, and determining a first historical state sequence and a second historical state sequence of the target running vehicle in a preset historical observation period;
the prediction model training module is used for inputting the first historical state sequence, the second historical state sequence and the real track sequence corresponding to the target running vehicle into a track prediction network model constructed in advance, and carrying out model training on the track prediction network model to obtain a target vehicle track prediction model which is used for predicting the running track of the vehicle.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the vehicle trajectory prediction model training method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to implement the vehicle trajectory prediction model training method of any one of claims 1-7 when executed.
CN202310368938.8A 2023-04-07 2023-04-07 Vehicle track prediction model training method, device, equipment and storage medium Pending CN116401549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310368938.8A CN116401549A (en) 2023-04-07 2023-04-07 Vehicle track prediction model training method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310368938.8A CN116401549A (en) 2023-04-07 2023-04-07 Vehicle track prediction model training method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116401549A true CN116401549A (en) 2023-07-07

Family

ID=87015564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310368938.8A Pending CN116401549A (en) 2023-04-07 2023-04-07 Vehicle track prediction model training method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116401549A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116662788A (en) * 2023-07-27 2023-08-29 太平金融科技服务(上海)有限公司深圳分公司 Vehicle track processing method, device, equipment and storage medium
CN116662788B (en) * 2023-07-27 2024-04-02 太平金融科技服务(上海)有限公司深圳分公司 Vehicle track processing method, device, equipment and storage medium
CN117114352A (en) * 2023-09-15 2023-11-24 北京阿帕科蓝科技有限公司 Vehicle maintenance method, device, computer equipment and storage medium
CN117114352B (en) * 2023-09-15 2024-04-09 北京阿帕科蓝科技有限公司 Vehicle maintenance method, device, computer equipment and storage medium
CN117094452A (en) * 2023-10-20 2023-11-21 浙江天演维真网络科技股份有限公司 Drought state prediction method, and training method and device of drought state prediction model
CN117094452B (en) * 2023-10-20 2024-02-06 浙江天演维真网络科技股份有限公司 Drought state prediction method, and training method and device of drought state prediction model
CN117492447A (en) * 2023-12-28 2024-02-02 苏州元脑智能科技有限公司 Method, device, equipment and storage medium for planning driving track of automatic driving vehicle
CN117492447B (en) * 2023-12-28 2024-03-26 苏州元脑智能科技有限公司 Method, device, equipment and storage medium for planning driving track of automatic driving vehicle

Similar Documents

Publication Publication Date Title
CN116401549A (en) Vehicle track prediction model training method, device, equipment and storage medium
CN115879535A (en) Training method, device, equipment and medium for automatic driving perception model
CN113129870B (en) Training method, device, equipment and storage medium of speech recognition model
CN110462638B (en) Training neural networks using posterior sharpening
US20200151545A1 (en) Update of attenuation coefficient for a model corresponding to time-series input data
CN112070781A (en) Processing method and device of craniocerebral tomography image, storage medium and electronic equipment
CN113870334B (en) Depth detection method, device, equipment and storage medium
CN112966744A (en) Model training method, image processing method, device and electronic equipment
CN110414005A (en) Intention recognition method, electronic device, and storage medium
CN113379059A (en) Model training method for quantum data classification and quantum data classification method
CN117521512A (en) Bearing residual service life prediction method based on multi-scale Bayesian convolution transducer model
CN115759748A (en) Risk detection model generation method and device and risk individual identification method and device
CN116244647A (en) Unmanned aerial vehicle cluster running state estimation method
CN111652444B (en) K-means and LSTM-based daily guest volume prediction method
CN117611536A (en) Small sample metal defect detection method based on self-supervision learning
CN109933926B (en) Method and apparatus for predicting flight reliability
CN113095386B (en) Gesture recognition method and system based on triaxial acceleration space-time feature fusion
CN113989899A (en) Method, device and storage medium for determining feature extraction layer in face recognition model
CN114581751B (en) Training method of image recognition model, image recognition method and device
CN116611477B (en) Training method, device, equipment and medium for data pruning method and sequence model
CN114036948B (en) Named entity identification method based on uncertainty quantification
CN116316890A (en) Renewable energy source output scene generation method, device, equipment and medium
CN114973279B (en) Training method and device for handwritten text image generation model and storage medium
US11475296B2 (en) Linear modeling of quality assurance variables
CN116946159A (en) Motion trail prediction method, device, equipment and medium based on dynamic local map

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination