CN114997307A - Trajectory prediction method, apparatus, device and storage medium - Google Patents

Trajectory prediction method, apparatus, device and storage medium Download PDF

Info

Publication number
CN114997307A
CN114997307A (application number CN202210610273.2A)
Authority
CN
China
Prior art keywords
track
trajectory
selectable
network
predicted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210610273.2A
Other languages
Chinese (zh)
Inventor
陈红丽 (Chen Hongli)
王宁 (Wang Ning)
卢丽婧 (Lu Lijing)
Current Assignee
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202210610273.2A
Publication of CN114997307A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/0956: Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W50/0097: Predicting future conditions
    • B60W50/0098: Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • B60W60/0016: Planning or execution of driving tasks specially adapted for safety of the vehicle or its occupants
    • B60W2530/00: Input parameters relating to vehicle conditions or values
    • B60W2552/50: Input parameters relating to infrastructure: barriers
    • B60W2554/402: Dynamic objects: type
    • B60W2554/4029: Dynamic objects: pedestrians
    • B60W2554/404: Dynamic objects: characteristics

Abstract

The invention discloses a trajectory prediction method, apparatus, device and storage medium. The method comprises the following steps: extracting, through a feature extraction network, interactive fusion features between the traveled trajectory of a vehicle obstacle and the map data information of the current position of the vehicle; determining, through a prediction network and according to the interactive fusion features, selectable predicted trajectories of the vehicle obstacle, a trajectory confidence for each selectable predicted trajectory, and an intent confidence for the selectable intent category corresponding to each selectable predicted trajectory; and determining a target predicted trajectory from the selectable predicted trajectories according to the trajectory confidences and the intent confidences of the selectable intent categories. By combining trajectory confidence with intent confidence to determine the target predicted trajectory of the vehicle obstacle, the trajectory prediction scheme is optimized, prediction accuracy is improved, and driving safety is better ensured.

Description

Trajectory prediction method, apparatus, device and storage medium
Technical Field
The embodiments of the invention relate to the technical field of intelligent transportation, and in particular to a trajectory prediction method, apparatus, device and storage medium.
Background
With the development of artificial intelligence, autonomous driving technology is gradually maturing. Predicting the driving trajectories of the dynamic obstacles around the vehicle is a core link in the autonomous driving pipeline. How to accurately predict those trajectories, and thereby improve the safety of autonomous driving, is therefore a problem that urgently needs to be solved.
Disclosure of Invention
The embodiments of the invention provide a trajectory prediction method, apparatus, device and storage medium, in which the trajectory confidence is combined with the intent confidence of the corresponding intent category to determine a target predicted trajectory for a vehicle obstacle, thereby improving trajectory prediction accuracy and further ensuring the driving safety of the autonomous vehicle.
In a first aspect, an embodiment of the present invention provides a trajectory prediction method, including:
extracting, through a feature extraction network, interactive fusion features between the traveled trajectory of a vehicle obstacle and the map data information of the current position of the vehicle;
determining, by a prediction network, a selectable predicted trajectory of the vehicle obstacle, a trajectory confidence of the selectable predicted trajectory, and an intention confidence of a selectable intention category corresponding to the selectable predicted trajectory according to the interactive fusion features;
determining a target predicted trajectory from the selectable predicted trajectories according to the trajectory confidence and the intent confidence of the selectable intent categories.
In a second aspect, an embodiment of the present invention further provides a trajectory prediction apparatus, including:
the fusion feature extraction module is used for extracting, through a feature extraction network, interactive fusion features between the traveled trajectory of a vehicle obstacle and the map data information of the current position of the vehicle;
the information prediction module is used for determining an optional predicted track of the vehicle obstacle, a track confidence coefficient of the optional predicted track and an intention confidence coefficient of an optional intention category corresponding to the optional predicted track according to the interactive fusion characteristics through a prediction network;
and the target track determining module is used for determining a target predicted track from the selectable predicted tracks according to the track confidence and the intention confidence of the selectable intention categories.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the trajectory prediction method according to any one of the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the trajectory prediction method according to any one of the first aspect.
In the embodiments of the invention, interactive fusion features between the traveled trajectory of a vehicle obstacle and the map data information of the current position of the vehicle are extracted through a feature extraction network; the selectable predicted trajectories of the vehicle obstacle, the trajectory confidence of each selectable predicted trajectory, and the intent confidence of the corresponding selectable intent category are determined through a prediction network according to the interactive fusion features; and a target predicted trajectory is determined from the selectable predicted trajectories according to the trajectory confidences and the intent confidences. Combining the trajectory confidence with the intent confidence of the corresponding intent category to determine the target predicted trajectory of the vehicle obstacle optimizes the trajectory prediction scheme, improves prediction accuracy, addresses the low driving safety caused by inaccurate trajectory prediction of the obstacles around the vehicle, and provides a new scheme for predicting vehicle obstacle trajectories.
Drawings
Fig. 1 is a flowchart of a trajectory prediction method according to an embodiment of the present invention;
fig. 2 is a flowchart of a trajectory prediction method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a trajectory prediction method according to a third embodiment of the present invention;
fig. 4 is a flowchart of constructing a training sample set according to a third embodiment of the present invention;
fig. 5 is a flowchart of a training trajectory prediction model based on a constructed training sample set according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of a trajectory prediction apparatus according to a fourth embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a trajectory prediction method according to the first embodiment of the present invention. The embodiment is applicable to trajectory prediction in general, and in particular to predicting the trajectories of vehicle obstacles around an autonomous vehicle. The method may be executed by the trajectory prediction apparatus of the embodiments of the invention, which may be implemented in hardware and/or software. As shown in fig. 1, the method specifically includes the following steps:
and S110, extracting interactive fusion characteristics between the running track of the vehicle obstacle and the map data information of the current position of the vehicle through a characteristic extraction network.
The feature extraction network may be a network that performs feature extraction and fusion on the traveled trajectory of a vehicle obstacle and the map data information of the current position of the vehicle. A vehicle obstacle may be any dynamic traffic participant around the autonomous vehicle, such as a pedestrian and/or another vehicle in its vicinity. The traveled trajectory is the path the vehicle obstacle has already covered, and may include both the historical driving trajectory and the current driving trajectory. The map data information of the current position of the vehicle may be electronic map data for the vehicle's location, obtained from the vehicle's positioning information. Optionally, the map data information may include, but is not limited to, lane information, intersection information, traffic signal information, and traffic congestion information for the road on which the vehicle is traveling. The interactive fusion feature is the result of fusing and interacting the features of the traveled trajectory with the features of the map data information.
Optionally, one way to extract the interactive fusion features between the traveled trajectory of the vehicle obstacle and the map data information of the current position of the vehicle through the feature extraction network is to input the traveled trajectory and the map data information into the feature extraction network, which directly analyzes the inputs to determine the interactive fusion features between them. Another way is to input the traveled trajectory and the map data information into the feature extraction network, which first encodes them according to certain rules to obtain a trajectory encoding result and a map-data encoding result, and then analyzes those encoding results to obtain the interactive fusion features.
S120, determining, through a prediction network and according to the interactive fusion features, the selectable predicted trajectories of the vehicle obstacle, the trajectory confidence of each selectable predicted trajectory, and the intent confidence of the selectable intent category corresponding to each selectable predicted trajectory.
The prediction network may be a network that predicts the future motion trajectories and motion intents of the dynamic obstacles around the vehicle. A selectable predicted trajectory is a future driving trajectory of the vehicle obstacle predicted by the prediction network from the interactive fusion features; there may be several, i.e., one vehicle obstacle can have multiple selectable predicted trajectories. The trajectory confidence indicates how reliable the prediction of a selectable predicted trajectory is, for example a value in the interval 0 to 1, where a larger value indicates higher confidence in, and higher expected accuracy of, that trajectory. A selectable intent category is the intent corresponding to a selectable predicted trajectory, i.e., the intent with which the vehicle obstacle would be traveling if it followed that trajectory. For example, when the vehicle obstacle is a vehicle, the selectable intent categories may be lane keeping, left lane change, right lane change, cut-in from the left, cut-in from the right, stopping, left turn, right turn, U-turn, and so on; when the vehicle obstacle is a pedestrian, they may be moving, stopping, crossing the road, waiting at a traffic light, and so on. The intent confidence indicates how reliable the prediction of a selectable intent category is.
Specifically, the present embodiment may input the interactive fusion features into the prediction network, which predicts at least one selectable predicted trajectory for each vehicle obstacle and at least one selectable intent category for each selectable predicted trajectory. Each predicted trajectory has a corresponding confidence value, the trajectory confidence; each selectable intent category of each trajectory likewise has a corresponding confidence value, the intent confidence.
S130, determining a target predicted trajectory from the selectable predicted trajectories according to the trajectory confidences and the intent confidences of the selectable intent categories.
Optionally, for each vehicle obstacle, the embodiment may determine a target predicted trajectory from its selectable predicted trajectories while jointly considering the trajectory confidence of each selectable predicted trajectory and the intent confidence of its corresponding selectable intent category.
Optionally, there are many ways to determine the target predicted trajectory from the selectable predicted trajectories according to the trajectory confidences and the intent confidences of the selectable intent categories; this embodiment does not limit them.
One possible implementation: determine a target confidence for each selectable predicted trajectory under its corresponding selectable intent category from the trajectory confidence and the intent confidence of that category, and then determine the target predicted trajectory from the selectable predicted trajectories according to the target confidences. Specifically, for each selectable predicted trajectory of each vehicle obstacle, the trajectory confidence and the intent confidence of the corresponding selectable intent category are weighted together to obtain the target confidence of that trajectory under each selectable intent category. Once the target confidence of every selectable predicted trajectory of a vehicle obstacle under every selectable intent category has been determined, the selectable predicted trajectory with the highest target confidence is taken as the target predicted trajectory of that vehicle obstacle.
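As a rough illustration, this first selection strategy can be sketched in a few lines of Python. The weighting factor `alpha`, the candidate data layout, and the example confidence values are all hypothetical; the text only states that the two confidences are weighted together and the highest combined score wins.

```python
def select_target_trajectory(candidates, alpha=0.5):
    """Pick the candidate with the highest weighted combination of
    trajectory confidence and intent confidence.

    candidates: list of dicts with keys
    'trajectory', 'traj_conf', 'intent', 'intent_conf'.
    """
    best, best_score = None, float("-inf")
    for c in candidates:
        # weighted combination of the two confidences (weights assumed)
        score = alpha * c["traj_conf"] + (1.0 - alpha) * c["intent_conf"]
        if score > best_score:
            best, best_score = c, score
    return best["trajectory"], best["intent"], best_score

# Hypothetical candidates for one vehicle obstacle
candidates = [
    {"trajectory": "keep-lane", "traj_conf": 0.70,
     "intent": "lane keeping", "intent_conf": 0.80},
    {"trajectory": "cut-left", "traj_conf": 0.75,
     "intent": "left lane change", "intent_conf": 0.40},
]
traj, intent, score = select_target_trajectory(candidates)
```

Here the lane-keeping candidate wins despite its lower raw trajectory confidence, because the intent confidence pulls its combined score higher.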
Another possible implementation: for each vehicle obstacle, select from its selectable predicted trajectories those whose trajectory confidence exceeds a preset threshold as preliminary candidates, and take the candidate with the highest intent confidence as the target predicted trajectory of the vehicle obstacle.
Yet another possible implementation: for each vehicle obstacle, select the selectable intent categories whose intent confidence exceeds a preset threshold as preliminary intents, and among the trajectories corresponding to these preliminary intents take the one with the highest trajectory confidence as the target predicted trajectory of the vehicle obstacle. The embodiments of the present invention are not limited in this regard.
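The second, threshold-based strategy can be sketched similarly. The threshold value and the fallback behaviour when no candidate passes are assumptions, not fixed by the text.

```python
def select_by_prefilter(candidates, traj_threshold=0.5):
    """Keep only candidates whose trajectory confidence exceeds a preset
    threshold, then pick the survivor with the highest intent confidence."""
    survivors = [c for c in candidates if c["traj_conf"] > traj_threshold]
    pool = survivors or candidates  # assumed fallback if nothing passes
    return max(pool, key=lambda c: c["intent_conf"])["trajectory"]

# Hypothetical candidates: the second one has the higher intent confidence
# but fails the trajectory-confidence pre-filter.
cands = [
    {"trajectory": "lane-keep", "traj_conf": 0.6, "intent_conf": 0.3},
    {"trajectory": "cut-in", "traj_conf": 0.4, "intent_conf": 0.9},
]
picked = select_by_prefilter(cands)
```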
According to the technical scheme of this embodiment, interactive fusion features between the traveled trajectory of a vehicle obstacle and the map data information of the current position of the vehicle are extracted through a feature extraction network; the selectable predicted trajectories of the vehicle obstacle, the trajectory confidence of each selectable predicted trajectory, and the intent confidence of the corresponding selectable intent category are determined through a prediction network according to the interactive fusion features; and a target predicted trajectory is determined from the selectable predicted trajectories according to the trajectory confidences and the intent confidences. By combining the two confidences and weighting the trajectory confidence with the intent confidence, the target predicted trajectory determined for the vehicle obstacle better matches expectations. This achieves accurate prediction of vehicle obstacle trajectories, optimizes the trajectory prediction scheme, and further improves driving safety.
Optionally, after determining the target predicted trajectory from the selectable predicted trajectories, the method further includes: determining the target intent category corresponding to the target predicted trajectory.
Specifically, the target intent category may be determined from the correspondence between target predicted trajectories and intent categories. Based on the target predicted trajectory and the target intent category, the autonomous vehicle can be controlled more precisely according to the predicted trajectory and intent of the vehicle obstacle, ensuring driving safety.
Example two
Fig. 2 is a flowchart of a trajectory prediction method according to the second embodiment of the present invention. Building on the previous embodiment, this embodiment further details how to acquire the traveled trajectory of a vehicle obstacle and the map data information of the current position of the vehicle. The method specifically includes the following steps:
S210, determining the traveled trajectory of the vehicle obstacle from the vehicle-surroundings data acquired by the sensing device mounted on the vehicle.
The sensing device may be a perception module mounted on the autonomous vehicle, such as a camera, a lidar, or a dash camera.
Specifically, the autonomous vehicle may acquire surrounding-environment data at fixed time intervals through its onboard perception module as sensing data; the position of each vehicle obstacle at each moment is then derived from the sensing data acquired at that moment, and the motion trajectory of each dynamic obstacle around the vehicle is plotted from the acquisition times of the sensing data and the corresponding obstacle positions, yielding the traveled trajectory of each vehicle obstacle.
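The grouping-and-ordering step above can be sketched as follows. This is an illustrative toy, not the disclosed implementation; the detection format (timestamp, obstacle id, position) is an assumption.

```python
from collections import defaultdict

def build_trajectories(detections):
    """Group timestamped perception detections by obstacle id and order
    each group by acquisition time to recover per-obstacle traveled
    trajectories.

    detections: iterable of (timestamp, obstacle_id, (x, y)).
    """
    tracks = defaultdict(list)
    for t, oid, pos in detections:
        tracks[oid].append((t, pos))
    # sort each obstacle's detections by time, keep only the positions
    return {oid: [p for _, p in sorted(pts)] for oid, pts in tracks.items()}

# Hypothetical detections arriving out of order
detections = [(2.0, "ped_1", (2.0, 0.0)),
              (1.0, "ped_1", (1.0, 0.0)),
              (1.0, "car_7", (0.0, 5.0))]
tracks = build_trajectories(detections)
```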
S220, determining the map data information of the current position of the vehicle according to the high-precision map configured on the vehicle.
The map data information may be information related to road elements marked on the map.
Optionally, the current positioning information of the vehicle, i.e., its current position, may be obtained through a positioning module on the autonomous vehicle; the electronic map of the vehicle's location is then retrieved from the high-precision map according to that position, and the map data information of the current position is obtained by semantically parsing the road elements in the electronic map.
S230, extracting, through a feature extraction network, interactive fusion features between the traveled trajectory of the vehicle obstacle and the map data information of the current position of the vehicle.
S240, determining, through a prediction network and according to the interactive fusion features, the selectable predicted trajectories of the vehicle obstacle, the trajectory confidence of each selectable predicted trajectory, and the intent confidence of the corresponding selectable intent category.
S250, determining a target predicted trajectory from the selectable predicted trajectories according to the trajectory confidences and the intent confidences of the selectable intent categories.
According to the technical scheme of this embodiment, the traveled trajectory of the vehicle obstacle is determined from the vehicle-surroundings data acquired by the sensing device mounted on the vehicle; the map data information of the current position of the vehicle is determined from the high-precision map configured on the vehicle; interactive fusion features between the traveled trajectory and the map data information are extracted through a feature extraction network; the selectable predicted trajectories, trajectory confidences, and intent confidences are determined through a prediction network according to the interactive fusion features; and a target predicted trajectory is determined from the selectable predicted trajectories according to the trajectory confidences and the intent confidences. Because the surrounding-environment data and nearby map data are collected in real time by equipment on the vehicle, with the traveled trajectory derived from the former and the map data information parsed from the latter, the validity and accuracy of the data can be ensured, which in turn supports accurate subsequent prediction of vehicle obstacle trajectories.
EXAMPLE III
Fig. 3 is a flowchart of a trajectory prediction method according to the third embodiment of the present invention. Building on the previous embodiments, this embodiment further details how the feature extraction network extracts the interactive fusion features and how the prediction network predicts the trajectory of a vehicle obstacle.
Optionally, the feature extraction network in this embodiment includes a trajectory encoding sub-network, a road encoding sub-network, and a fusion interaction sub-network connected in series;
the prediction network includes a trajectory prediction sub-network and an intent prediction sub-network.
The trajectory encoding sub-network may be a network that extracts features from the traveled trajectory of the vehicle obstacle according to certain rules and algorithms and outputs a trajectory feature encoding result. The road encoding sub-network may be a network that encodes the map data information of the high-precision map at the vehicle's location in a specific way. The fusion interaction sub-network may be a network that interactively fuses the trajectory encoding result with the road encoding result and determines the interactive fusion features between them. The trained trajectory prediction sub-network may be used to predict the future trajectories of the vehicle obstacle together with their trajectory confidences, and the trained intent prediction sub-network to predict the obstacle's intents and intent confidences. The method specifically includes the following steps:
S310, determining a trajectory encoding result from the traveled trajectory of the vehicle obstacle through the trajectory encoding sub-network.
The trajectory encoding sub-network may be a Convolutional Neural Network (CNN) trajectory feature extraction network. Specifically, it may be built from several parallel one-dimensional CNNs; following the Feature Pyramid Network (FPN) idea of fusing features extracted at multiple scales, the traveled trajectory of the vehicle obstacle is fed into each CNN, and the trajectory features output by the CNNs are fused to obtain the trajectory encoding result output by the trajectory feature extraction network. Optionally, the one-dimensional CNNs of this embodiment may be composed of residual blocks. Illustratively, each CNN layer has a convolution kernel size of 3 and 128 output channels.
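The one-dimensional convolution at the heart of such a trajectory encoder can be illustrated in plain Python. This is a toy "valid" cross-correlation over a scalar sequence; a real implementation would use framework operators with 128 output channels, learned kernels, and residual connections as described above.

```python
def conv1d(seq, kernel):
    """'Valid' 1-D cross-correlation (what deep-learning frameworks call
    convolution) of a scalar sequence with a small kernel, e.g. size 3."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

# A simple difference-style kernel applied to a toy sequence
out = conv1d([1, 2, 3, 4], [1, 0, -1])
```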
Alternatively, the traveled trajectory of the vehicle obstacle may be represented as a time-continuous series of displacements {Δp-(T-1), ..., Δp-1, Δp0}, where Δpt is the two-dimensional displacement from time step t-1 to t, and T is the trajectory duration. For tracks with a duration of less than T, the representation may be padded with zeros on the left.
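The displacement representation above can be sketched in a few lines. This is an illustrative NumPy snippet, not code from the patent; the function name `encode_trajectory` is an assumption made for the example.

```python
import numpy as np

def encode_trajectory(positions, T):
    """Convert an absolute (x, y) position sequence into the displacement
    representation {dp_-(T-1), ..., dp_-1, dp_0}, zero-padded on the left
    when the observed track is shorter than T steps.

    `positions` has shape (n, 2), time-ordered, with n <= T + 1 samples.
    """
    positions = np.asarray(positions, dtype=float)
    # dp_t is the two-dimensional displacement from time step t-1 to t.
    displacements = np.diff(positions, axis=0)          # shape (n - 1, 2)
    pad = T - len(displacements)
    if pad > 0:                                         # short track: pad on the left
        displacements = np.vstack([np.zeros((pad, 2)), displacements])
    return displacements

track = [(0.0, 0.0), (1.0, 0.5), (2.5, 1.0)]            # only 3 observed positions
encoded = encode_trajectory(track, T=5)                  # (5, 2), first 3 rows are zeros
```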
S320, determining a road coding result of the current position according to the map data information of the current position of the vehicle through the road coding sub-network.
The road coding sub-network may be a vector network (VectorNet), which extracts map road features using a polyline algorithm. Through the VectorNet network, the obstacles around the vehicle are combined with the traffic scene to obtain the global features of the vehicle, and these global features are encoded to obtain the road coding result. Specifically, the geographic entities extracted for road coding may be points, polygons, or curves; all geographic entities may be approximated as polylines defined by a plurality of control points. Starting from a selected starting point and orientation, the polyline algorithm uniformly samples key points at equal intervals, connects adjacent key points into vectors, and obtains the map polyline through this vectorization process.
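The equal-interval sampling step can be illustrated as follows. This is a minimal NumPy sketch of the vectorization idea, not the patent's implementation; the function name and the choice of arc-length resampling via `np.interp` are assumptions.

```python
import numpy as np

def polyline_vectors(control_points, num_keypoints):
    """Resample a geographic entity (given by control points) into key
    points spaced at equal intervals along its arc length, then connect
    adjacent key points into vectors, as in the polyline step described.
    """
    pts = np.asarray(control_points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # arc length at each control point
    target = np.linspace(0.0, s[-1], num_keypoints)      # equal-interval sampling
    keypoints = np.column_stack([np.interp(target, s, pts[:, 0]),
                                 np.interp(target, s, pts[:, 1])])
    return np.diff(keypoints, axis=0)                    # vectors between adjacent key points

lane = [(0, 0), (4, 0), (4, 4)]                          # an L-shaped lane boundary
vectors = polyline_vectors(lane, num_keypoints=5)        # 4 vectors, each of length 2
```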
Specifically, the road coding sub-network may be formed by a plurality of VectorNet networks, each of which comprises a Multi-Layer Perceptron (MLP) network layer and a pooling layer. The MLP network layer extracts features between adjacent nodes, and the pooling layer integrates and concatenates features; the features output by the VectorNet networks are then integrated to obtain the road coding result finally output by the road coding sub-network. The MLP network layer may be a stack of fully connected layers (FC), normalization layers (Norm), and activation layers (ReLU).
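One FC → Norm → ReLU node layer followed by max pooling can be sketched as below. This is an illustrative NumPy example under assumptions not stated in the text: the normalization is taken as a per-node layer norm over channels, and the pooling as a per-channel max over nodes.

```python
import numpy as np

def mlp_node_layer(nodes, W, b):
    """One VectorNet-style node layer: fully connected -> norm -> ReLU,
    then max pooling over nodes to aggregate a polyline-level feature.
    `nodes`: (n, d_in) node features; W: (d_in, d_out); b: (d_out,).
    """
    h = nodes @ W + b                                    # fully connected (FC)
    mu = h.mean(axis=1, keepdims=True)
    sigma = h.std(axis=1, keepdims=True)
    h = (h - mu) / (sigma + 1e-5)                        # normalization (Norm)
    h = np.maximum(h, 0.0)                               # activation (ReLU)
    pooled = h.max(axis=0)                               # pooling layer: per-channel max
    return h, pooled

rng = np.random.default_rng(0)
nodes = rng.normal(size=(6, 4))                          # 6 map nodes, 4-dim features
h, polyline_feature = mlp_node_layer(nodes, rng.normal(size=(4, 8)), np.zeros(8))
```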
S330, analyzing the interactive fusion characteristics between the track coding result and the road coding result through the fusion interactive sub-network.
Optionally, the fusion interaction sub-network analyzes and fuses the trajectory coding result and the road coding result along four dimensions to obtain the interactive fusion features. Specifically, interaction and analysis are performed in each of the four dimensions, and the analysis results of the four dimensions are then fused to obtain the interactive fusion features. The four dimensions may be vehicle obstacle to road (Actor2Lane), road to road (Lane2Lane), road to vehicle obstacle (Lane2Actor), and vehicle obstacle to vehicle obstacle (Actor2Actor). Each dimension is formed by connecting a spatial attention layer, a linear layer, and a residual connection. Real-time traffic information, lane information, the traffic scene, traffic signals, and the like are introduced as neural network functions; the map features are fused with the real-time traffic information, and the fused information is passed to the vehicle obstacle (i.e., the Actor), thereby realizing the interaction between road information and trajectory information. Optionally, for the Lane2Lane dimension, features may be extracted using a Graph Convolutional Network (GCN).
Optionally, in this embodiment, the trajectory coding result and the road coding result may be input into the fusion interaction sub-network, which performs interactive feature analysis on them along the four dimensions described above to obtain the interactive fusion features. Because roads and other vehicle obstacles near the currently predicted vehicle obstacle have a greater influence on it than distant ones, the fusion interaction sub-network of this embodiment masks out roads and other vehicle obstacles that are relatively far from the vehicle obstacle using a mask. An attention mechanism (Attention) is then applied to the nearby vehicle obstacles and roads to capture the mutual interaction between the lanes and the vehicle obstacles, thereby ensuring the accuracy of interactive fusion feature extraction.
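Distance masking combined with attention can be sketched as follows. This is a simplified single-query NumPy example; the fixed radius, the scaled dot-product form, and all names are assumptions made for illustration, not details from the patent.

```python
import numpy as np

def masked_attention(query, keys, values, positions, center, radius):
    """Scaled dot-product attention in which lanes/obstacles farther than
    `radius` from the currently predicted obstacle are masked out, so only
    nearby entities (which influence it most) contribute to the result.
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)                   # (n,) attention logits
    far = np.linalg.norm(positions - center, axis=1) > radius
    scores[far] = -np.inf                                # mask distant entities
    weights = np.exp(scores - scores[~far].max())        # stable softmax
    weights[far] = 0.0
    weights /= weights.sum()
    return weights @ values                              # attended feature

rng = np.random.default_rng(1)
query = rng.normal(size=8)
keys = rng.normal(size=(4, 8))
values = rng.normal(size=(4, 8))
positions = np.array([[1.0, 0.0], [2.0, 1.0], [50.0, 50.0], [0.5, 0.5]])
out = masked_attention(query, keys, values, positions,
                       center=np.zeros(2), radius=10.0)  # entity 2 is masked out
```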
S340, determining the selectable predicted trajectories of the vehicle obstacle and the trajectory confidence of each selectable predicted trajectory through the trajectory prediction sub-network according to the interactive fusion features.
Optionally, the trajectory prediction sub-network of this embodiment may include a classification branch and a regression branch, each composed of a residual block and a linear layer. Specifically, the interactive fusion features are input into a layer normalization (LayerNorm) structure to obtain a regularized result; the regularized result is then input into the trajectory prediction sub-network, which generates the coordinate sequence of each predicted trajectory (i.e., the predicted trajectory) through the regression branch and gives the trajectory confidence corresponding to each predicted trajectory through the classification branch.
S350, determining the intent confidence of the selectable intent category corresponding to each selectable predicted trajectory through the intent prediction sub-network according to the interactive fusion features.
The intent prediction sub-network may be composed of a multilayer perceptron (MLP) layer, a batch normalization (BatchNorm) layer, a fully connected layer, and the like. Optionally, the interactive fusion features are input into a LayerNorm structure to obtain a regularized result; the regularized result is input into the intent prediction sub-network to obtain the selectable intent categories predicted for each predicted trajectory and the intent confidence corresponding to each selectable intent category.
S360, determining a target predicted trajectory from the selectable predicted trajectories according to the trajectory confidences and the intent confidences of the selectable intent categories.
Optionally, the trajectory confidence of each predicted trajectory and the intent confidence of its corresponding selectable intent category are combined by a weighted calculation, such as a product or a sum, to obtain a weighted score for the predicted trajectory. All the weighted scores are then compared, and the predicted trajectory with the highest weighted score is selected as the target predicted trajectory of the vehicle obstacle.
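The selection step can be sketched directly. This is an illustrative Python example using the product as the weighted score (the text also allows a sum); the function name, the candidate tuples, and the placeholder trajectory/intent names are assumptions.

```python
def select_target_trajectory(candidates):
    """Pick the target predicted trajectory: weight each candidate's
    trajectory confidence by the intent confidence of its corresponding
    selectable intent category (here by product) and keep the best score.
    `candidates`: list of (trajectory, traj_conf, intent_label, intent_conf).
    """
    best = max(candidates, key=lambda c: c[1] * c[3])    # weighted score = product
    trajectory, traj_conf, intent, intent_conf = best
    return trajectory, intent, traj_conf * intent_conf

candidates = [
    ("go_straight_path", 0.70, "go_straight", 0.90),     # score 0.63
    ("turn_left_path",   0.80, "turn_left",   0.60),     # score 0.48
    ("turn_right_path",  0.50, "turn_right",  0.95),     # score 0.475
]
target, intent, score = select_target_trajectory(candidates)
# target == "go_straight_path": highest weighted score wins
```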
Further, after the target predicted trajectory of the vehicle obstacle is determined, the trajectory coordinates may also be converted from the vehicle body coordinate system to a geographic or geodetic coordinate system.
According to the technical scheme of this embodiment, a trajectory coding result is determined from the traveled trajectory of the vehicle obstacle through the trajectory coding sub-network; a road coding result of the current position is determined from the map data information of the current position of the vehicle through the road coding sub-network; the interactive fusion features between the trajectory coding result and the road coding result are analyzed through the fusion interaction sub-network; the selectable predicted trajectories of the vehicle obstacle and their trajectory confidences are determined from the interactive fusion features through the trajectory prediction sub-network; the intent confidences of the selectable intent categories corresponding to the selectable predicted trajectories are determined from the interactive fusion features through the intent prediction sub-network; and the target predicted trajectory is determined from the selectable predicted trajectories according to the trajectory confidences and the intent confidences of the selectable intent categories. In this method, the interactive fusion features are extracted through a feature extraction network comprising the trajectory coding, road coding, and fusion interaction sub-networks, and the trajectory and intent are predicted through a prediction network comprising the trajectory prediction and intent prediction sub-networks. These finer-grained sub-networks improve the accuracy of the outputs of the feature extraction network and the prediction network, thereby ensuring the accuracy of the determined predicted trajectory of the vehicle obstacle.
Optionally, in this embodiment, a trajectory prediction model comprising a feature extraction network and a prediction network may be constructed in advance, where the feature extraction network further comprises a trajectory coding sub-network, a road coding sub-network, and a fusion interaction sub-network, and the prediction network further comprises a trajectory prediction sub-network and an intent prediction sub-network. The trajectory prediction model is trained in advance on a large set of training samples. Fig. 4 shows a flow chart for constructing the training sample set.
Optionally, in this embodiment, sample traveled trajectories of vehicle obstacles may be extracted from the vehicle surrounding environment data historically collected by the sensing device, map data information may be extracted from the positioning map data of the roads the vehicle travels, and an obstacle intent label may be attached to each sample traveled trajectory of the obstacles around the vehicle according to the surrounding environment data and the positioning map data. The intent label may be a semantic label indicating the intent of the vehicle obstacle while traveling in the map scene.
Because the vehicle surrounding environment data collected by the sensing device usually spans a long period of time, the extracted sample traveled trajectory can be segmented at a preset time interval, and each segment further divided into two parts: a historical trajectory used as input to the trajectory prediction model, and the real trajectory of the future time period (i.e., the trajectory label corresponding to the historical trajectory). For example, if this embodiment predicts the trajectory 2 seconds after the current time based on the current time and the trajectory before it, the trajectory may be divided into 5-second segments, where the first 3 seconds form the historical trajectory and the last 2 seconds form the corresponding trajectory label.
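The segmentation scheme above (5-second windows, 3 seconds of history, 2 seconds of label) can be sketched as below. This is an illustrative Python snippet; the function name and the `hz` sampling-rate parameter are assumptions not given in the text.

```python
def split_samples(track, hz, segment_s=5, history_s=3):
    """Cut a long recorded track into fixed-length training segments: each
    `segment_s`-second window becomes one sample whose first `history_s`
    seconds are the model input (historical trajectory) and whose remainder
    is the ground-truth future (trajectory label). `hz` is the sample rate.
    """
    seg_len, hist_len = segment_s * hz, history_s * hz
    samples = []
    for start in range(0, len(track) - seg_len + 1, seg_len):
        segment = track[start:start + seg_len]
        samples.append((segment[:hist_len], segment[hist_len:]))
    return samples

track = list(range(100))                                 # 10 s of data at 10 Hz
samples = split_samples(track, hz=10)                    # two 5 s segments
history, label = samples[0]                              # 3 s history, 2 s label
```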
At this point, a training sample set containing the segmented sample traveled trajectories, the map data information, and the labeled vehicle obstacle intents can be obtained.
Optionally, training sample sets are collected for different traffic scenarios. The traffic scenarios may include crossroads, T-junctions, vehicle lane changes, traffic merges, and the like.
Furthermore, because the surrounding environment data collected by the sensing device is affected by environmental factors and the like, the sample traveled trajectory extracted from the sensing data may contain missing points; the missing parts can be supplemented using a nearest-neighbor filling method and interpolation. Completing the missing parts of the sample traveled trajectory ensures its integrity and thereby improves the model training effect.
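Gap filling can be sketched with `np.interp`, which linearly interpolates interior gaps and clamps boundary gaps to the nearest valid value (a nearest-neighbor fill at the ends). This is an illustrative NumPy example, not the patent's implementation; representing missing points as NaN is an assumption.

```python
import numpy as np

def fill_missing(track):
    """Complete a perceived track with missing points (NaN): interior gaps
    are linearly interpolated; gaps at either end are copied from the
    nearest valid neighbor, so the sample track is continuous for training.
    """
    track = np.asarray(track, dtype=float).copy()
    for col in range(track.shape[1]):
        y = track[:, col]
        valid = ~np.isnan(y)
        # np.interp interpolates interior gaps and clamps at the boundaries
        # to the nearest observed value (nearest-neighbor fill at the ends).
        track[:, col] = np.interp(np.arange(len(y)), np.flatnonzero(valid), y[valid])
    return track

raw = [(0.0, 0.0), (np.nan, np.nan), (2.0, 4.0), (np.nan, np.nan)]
filled = fill_missing(raw)
# interior gap -> (1.0, 2.0); trailing gap -> copy of (2.0, 4.0)
```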
Further, in this embodiment, after the training sample set containing the segmented sample traveled trajectories, the map data information, and the vehicle obstacle intents is obtained, the data in the training sample set may be filtered, for example, to remove data whose intent is ambiguous and data whose perceived trajectory is problematic. Filtering and cleaning the data in the training sample set, and processing dirty data, yields standard, clean, and continuous data that facilitates model training.
FIG. 5 shows a flow chart for training a trajectory prediction model based on the training sample set constructed in FIG. 4.
Optionally, the data in the training sample set constructed in fig. 4 is preprocessed, for example, by converting the data format into a uniform or adapted format.
Optionally, the sample track coding result may be obtained by inputting the sample travel track in the preprocessed sample data into the track coding sub-network, and the sample road coding result may be obtained by inputting the map data information in the preprocessed sample data into the road coding sub-network.
Optionally, the sample track coding result and the sample road coding result are input into a fusion interaction sub-network, and the interaction fusion characteristics between the two are obtained through analysis. Inputting the interactive fusion characteristics of the sample data into a track prediction sub-network to obtain a plurality of predicted tracks and track confidence coefficients of the predicted tracks; and inputting the interactive fusion features of the sample data into the intention prediction sub-network to obtain the optional intention category corresponding to the predicted track and the intention confidence thereof. Calculating a first loss according to the predicted track output by the track prediction sub-network, the track confidence of the predicted track and the track label; calculating a second loss according to the selectable intention category corresponding to the predicted track output by the intention prediction sub-network, the intention confidence coefficient of the selectable intention category and the semantic label; and optimizing the network parameters of the feature extraction network and the trajectory prediction sub-network according to the first loss, and optimizing the network parameters of the feature extraction network and the intention prediction sub-network according to the second loss, thereby completing the training of the model.
Optionally, the trajectory prediction sub-network and the intent prediction sub-network are trained with different loss functions; that is, the first loss and the second loss differ. Specifically, a loss function may be set for each branch. For the regression branch of the trajectory prediction sub-network, a regression loss function (Smooth L1 Loss) is used to measure the difference between the predicted trajectory and the real trajectory, where a smaller difference indicates a more accurate predicted trajectory. For the classification branch of the trajectory prediction sub-network, a hinge loss function (max-margin loss) is used as the loss function for the trajectory confidence. For the intent prediction sub-network, the intent-category loss is obtained by applying an activation function (Softmax) followed by a cross-entropy loss function (CrossEntropyLoss). The parameters of each network are adjusted according to the corresponding loss, so that the updated network's predictions better match the real behavior and meet practical expectations, ensuring the accuracy of model training.
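The three losses can be written down compactly. The sketches below are minimal NumPy versions for illustration; the Smooth L1 threshold of 1.0, the margin value, and the specific max-margin formulation (best candidate must beat the others by a margin) are assumptions, as the patent does not give these details.

```python
import numpy as np

def smooth_l1(pred, target):
    """Regression loss (Smooth L1) between predicted and real trajectory."""
    d = np.abs(pred - target)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).mean()

def max_margin(confidences, best_idx, margin=0.2):
    """Hinge-style loss: the confidence of the trajectory closest to the
    ground truth should beat every other candidate by at least `margin`."""
    gaps = margin + confidences - confidences[best_idx]
    gaps[best_idx] = 0.0
    return np.maximum(gaps, 0.0).sum()

def cross_entropy(logits, label):
    """Softmax activation + cross-entropy loss for the intent branch."""
    z = logits - logits.max()                            # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

pred = np.array([[1.0, 1.0], [2.0, 2.1]])
target = np.array([[1.0, 1.0], [2.0, 2.0]])
reg_loss = smooth_l1(pred, target)                       # small -> accurate trajectory
cls_loss = max_margin(np.array([0.9, 0.5, 0.3]), best_idx=0, margin=0.5)
intent_loss = cross_entropy(np.array([2.0, 0.1, -1.0]), label=0)
```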
Optionally, after the trajectory prediction model is trained as described above, it may be evaluated. Specifically, the Average Displacement Error (ADE) and the Final Displacement Error (FDE) may be used to evaluate the trajectory prediction networks, and the mean Average Precision (mAP) may be used to evaluate the obstacle intent prediction networks. Whether the trajectory prediction model reaches the preset accuracy, and whether it can be used for actual prediction, is judged from the evaluation results.
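ADE and FDE for a single trajectory can be computed directly (mAP is omitted here since it depends on the intent-classification setup). This is a minimal NumPy sketch with an assumed function name.

```python
import numpy as np

def ade_fde(pred, target):
    """Evaluate a predicted trajectory against the ground truth:
    ADE = mean Euclidean displacement over all time steps,
    FDE = Euclidean displacement at the final time step."""
    errors = np.linalg.norm(pred - target, axis=1)       # per-step displacement
    return errors.mean(), errors[-1]

pred = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
target = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
ade, fde = ade_fde(pred, target)
# ade == (0 + 1 + 2) / 3 == 1.0, fde == 2.0
```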
According to the above technical scheme, a training sample set containing sample traveled trajectories, map data information, and labeled vehicle obstacle intents is first constructed, and the trajectory prediction model is trained on it. Trajectory prediction and intent prediction are combined, the learning networks are trained on the interactive fusion features and continuously refined through the loss functions, and the accuracy of the trajectory prediction model is judged by the different evaluation metrics, ensuring the accuracy of model training. This provides a novel scheme for predicting vehicle obstacle trajectories: when the trajectory prediction model is subsequently used, the trajectory of a vehicle obstacle can be predicted accurately, ensuring the traveling safety of the autonomous vehicle.
Example four
Fig. 6 is a schematic structural diagram of a trajectory prediction apparatus according to a fourth embodiment of the present invention. This embodiment is applicable to trajectory prediction. The apparatus may be implemented in software and/or hardware and may be integrated into any device that provides a trajectory prediction function. As shown in fig. 6, the trajectory prediction apparatus may specifically include the following modules:
the fusion feature extraction module 610 is configured to extract an interactive fusion feature between a traveled trajectory of a vehicle obstacle and map data information of a current position of the vehicle through a feature extraction network;
the information prediction module 620 is used for determining the selectable predicted track of the vehicle obstacle, the track confidence coefficient of the selectable predicted track and the intention confidence coefficient of the selectable intention category corresponding to the selectable predicted track according to the interactive fusion characteristics through a prediction network;
and a target trajectory determination module 630, configured to determine a target predicted trajectory from the selectable predicted trajectories according to the trajectory confidence and the intention confidence of the selectable intention categories.
Optionally, the target track determining module 630 includes:
the confidence degree determining unit is used for determining the target confidence degree of the selectable predicted track under the corresponding selectable intention category according to the track confidence degree and the intention confidence degree of the selectable intention category;
and the target track determining unit is used for determining a target predicted track from the selectable predicted tracks according to the target confidence.
Optionally, the trajectory prediction apparatus further includes:
and the intention category determining module is used for determining a target intention category corresponding to the target predicted track after the target predicted track is determined.
In one embodiment of the invention, a feature extraction network comprises: the track coding sub-network, the road coding sub-network and the fusion interaction sub-network are connected in series;
accordingly, the fused feature extraction module 610 includes:
a trajectory code determination unit for determining a trajectory code result from a traveled trajectory of the vehicle obstacle through the trajectory code subnetwork;
a road code determination unit for determining a road code result of the current position according to the map data information of the current position of the vehicle through a road code subnetwork;
and the fusion characteristic analysis unit is used for analyzing the interaction fusion characteristics between the track coding result and the road coding result through the fusion interaction sub-network.
In one embodiment of the invention, a predictive network comprises: a trajectory prediction subnetwork and an intent prediction subnetwork;
accordingly, the information prediction module 620 includes:
the predicted track determining unit is used for determining selectable predicted tracks of the vehicle obstacle and the track confidence of each selectable predicted track according to the interactive fusion features through the trajectory prediction sub-network;
and the prediction intention determining unit is used for determining the intention confidence degree of the selectable intention category corresponding to the selectable prediction track according to the interactive fusion characteristics through the intention prediction sub-network.
Optionally, the trajectory prediction sub-network and the intent prediction sub-network are trained based on different loss functions.
Optionally, the trajectory prediction apparatus further includes:
the driving track determining module is used for determining the driving track of the vehicle obstacle according to the vehicle surrounding environment data acquired by the sensing equipment configured on the vehicle;
and the map information determining module is used for determining the map data information of the current position of the vehicle according to the high-precision map configured on the vehicle.
The trajectory prediction device provided by the embodiment of the invention can execute the trajectory prediction method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention, and fig. 7 shows a block diagram of an exemplary electronic device 12 suitable for implementing an embodiment of the present invention. The electronic device 12 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in FIG. 7, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory (cache) 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7 and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments described herein.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, to name a few.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, to implement the trajectory prediction method provided by the embodiment of the present invention:
extracting interactive fusion characteristics between a running track of a vehicle obstacle and map data information of the current position of the vehicle through a characteristic extraction network;
determining an optional predicted track of the vehicle obstacle, a track confidence coefficient of the optional predicted track and an intention confidence coefficient of an optional intention category corresponding to the optional predicted track through a prediction network according to the interactive fusion characteristics;
and determining a target predicted track from the selectable predicted tracks according to the track confidence and the intention confidence of the selectable intention categories.
EXAMPLE six
A sixth embodiment of the present invention further provides a computer-readable storage medium on which a computer program (also referred to as computer-executable instructions) is stored, and the computer program, when executed by a processor, implements the trajectory prediction method provided by the embodiments of the present invention.
Computer storage media for embodiments of the present invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the embodiments of the present invention have been described in more detail through the above embodiments, the embodiments of the present invention are not limited to the above embodiments, and many other equivalent embodiments may be included without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A trajectory prediction method, characterized in that the method comprises:
extracting interactive fusion characteristics between a running track of a vehicle obstacle and map data information of the current position of the vehicle through a characteristic extraction network;
determining, by a prediction network, a selectable predicted trajectory of the vehicle obstacle, a trajectory confidence of the selectable predicted trajectory, and an intention confidence of a selectable intention category corresponding to the selectable predicted trajectory according to the interactive fusion features;
and determining a target predicted track from the selectable predicted tracks according to the track confidence and the intention confidence of the selectable intention categories.
2. The method of claim 1, wherein determining a target predicted trajectory from the selectable predicted trajectories according to the trajectory confidence and the intent confidence of the selectable intent categories comprises:
determining a target confidence level of the selectable predicted trajectory under the corresponding selectable intention category according to the trajectory confidence level and the intention confidence level of the selectable intention category;
and determining a target predicted track from the selectable predicted tracks according to the target confidence.
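As an illustrative sketch only (the claim does not fix the combination rule), one natural reading of claim 2 is to take the target confidence of each selectable predicted trajectory as the product of its trajectory confidence and the intention confidence of its selectable intention category, then keep the highest-scoring trajectory. All names and numbers below are hypothetical:

```python
# Sketch of claim 2's selection step. Assumption (not fixed by the claim):
# target confidence = trajectory confidence x intention confidence of the
# trajectory's selectable intention category.

def select_target_trajectory(trajectories, track_conf, intent_of, intent_conf):
    """trajectories: candidate (selectable) predicted trajectories.
    track_conf: per-trajectory confidence.
    intent_of:  per-trajectory index of its selectable intention category.
    intent_conf: per-category intention confidence."""
    target_conf = [t * intent_conf[intent_of[i]] for i, t in enumerate(track_conf)]
    best = max(range(len(trajectories)), key=lambda i: target_conf[i])
    return trajectories[best], target_conf[best]

# Hypothetical example: three candidate trajectories, two intention categories.
trajs = ["keep_lane", "turn_left", "turn_right"]
track_conf = [0.6, 0.3, 0.1]
intent_of = [0, 1, 1]            # category 0 = go straight, 1 = turn
intent_conf = [0.7, 0.3]
best_traj, conf = select_target_trajectory(trajs, track_conf, intent_of, intent_conf)
print(best_traj, round(conf, 2))  # -> keep_lane 0.42
```

Multiplying the two scores keeps a trajectory only when both the geometric hypothesis and its driving intention are plausible, which is one common way to realise "target confidence under the corresponding selectable intention category".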
3. The method of claim 1 or 2, wherein after determining a target predicted trajectory from the selectable predicted trajectories, the method further comprises:
and determining a target intention category corresponding to the target prediction track.
4. The method of claim 1, wherein the feature extraction network comprises: a trajectory coding sub-network, a road coding sub-network and a fusion interaction sub-network connected in series;
correspondingly, the extracting, through the feature extraction network, the interactive fusion feature between the traveled trajectory of the vehicle obstacle and the map data information of the current position of the vehicle includes:
determining a track coding result according to the running track of the vehicle obstacle through the track coding sub-network;
determining a road coding result of the current position according to the map data information of the current position of the vehicle through the road coding sub-network;
and analyzing the interactive fusion characteristics between the track coding result and the road coding result through the fusion interactive sub-network.
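The three sub-networks of claim 4 can be mocked as a minimal numpy sketch, with untrained random linear encoders standing in for the learned trajectory-coding, road-coding and fusion-interaction sub-networks. The layer sizes and function names are hypothetical, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_linear(x, w, b):
    # A single linear layer with ReLU, standing in for a learned encoder.
    return np.maximum(w @ x + b, 0.0)

# Hypothetical sizes: 10 past (x, y) points per input, 32-dim codes.
T, D = 10, 32
w_traj, b_traj = rng.standard_normal((D, 2 * T)), np.zeros(D)
w_road, b_road = rng.standard_normal((D, 2 * T)), np.zeros(D)
w_fuse, b_fuse = rng.standard_normal((D, 2 * D)), np.zeros(D)

def extract_fusion_features(track_xy, lane_xy):
    track_code = relu_linear(track_xy.reshape(-1), w_traj, b_traj)  # trajectory-coding sub-network
    road_code = relu_linear(lane_xy.reshape(-1), w_road, b_road)    # road-coding sub-network
    # fusion interaction sub-network: joint encoding of both results
    return relu_linear(np.concatenate([track_code, road_code]), w_fuse, b_fuse)

features = extract_fusion_features(rng.standard_normal((T, 2)),  # traveled trajectory
                                   rng.standard_normal((T, 2)))  # map polyline
print(features.shape)  # -> (32,)
```

The "connected in series" wording maps to the data flow above: the two encoders run on their own inputs, and only their outputs feed the fusion sub-network.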
5. The method of claim 1, wherein the prediction network comprises: a trajectory prediction sub-network and an intention prediction sub-network;
correspondingly, the determining, by the prediction network and according to the interactive fusion features, a selectable predicted trajectory of the vehicle obstacle, a trajectory confidence of the selectable predicted trajectory, and an intention confidence of a selectable intention category corresponding to the selectable predicted trajectory comprises:
determining, by the trajectory prediction subnetwork, selectable predicted trajectories of the vehicle obstacle and trajectory confidences of the selectable predicted trajectories according to the interactive fusion features;
and determining the intention confidence degree of the selectable intention category corresponding to the selectable prediction track through the intention prediction sub-network according to the interactive fusion characteristics.
6. The method of claim 5, wherein the trajectory prediction sub-network and the intention prediction sub-network are trained based on different loss functions.
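Claim 6 only states that the two sub-networks use different loss functions; a common (hypothetical, not specified by the patent) pairing is a smooth-L1 regression loss for the trajectory head and a cross-entropy classification loss for the intention head:

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    # Regression loss, e.g. for the trajectory prediction sub-network:
    # quadratic near zero, linear for large errors (robust to outlier points).
    d = np.abs(pred - target)
    return float(np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta).mean())

def cross_entropy(logits, label):
    # Classification loss, e.g. for the intention prediction sub-network.
    z = logits - logits.max()                  # stabilise the softmax
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

print(smooth_l1(np.array([0.0, 2.0]), np.array([0.0, 0.0])))  # -> 0.75
print(round(cross_entropy(np.zeros(4), 0), 4))                # -> log(4) ~ 1.3863
```

Training the two heads jointly would then minimise a weighted sum of the two losses, each supervising its own sub-network output.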
7. The method according to any one of claims 1-6, further comprising:
determining the running track of a vehicle obstacle according to vehicle surrounding environment data acquired by sensing equipment configured on the vehicle;
and determining the map data information of the current position of the vehicle according to the high-precision map configured on the vehicle.
8. A trajectory prediction device, comprising:
the fusion feature extraction module is used for extracting, through a feature extraction network, interactive fusion features between the traveled trajectory of a vehicle obstacle and the map data information of the current position of the vehicle;
the information prediction module is used for determining an optional predicted track of the vehicle obstacle, a track confidence coefficient of the optional predicted track and an intention confidence coefficient of an optional intention category corresponding to the optional predicted track according to the interactive fusion characteristics through a prediction network;
and the target track determining module is used for determining a target predicted track from the selectable predicted tracks according to the track confidence and the intention confidence of the selectable intention categories.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the trajectory prediction method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a trajectory prediction method according to any one of claims 1 to 7.
CN202210610273.2A 2022-05-31 2022-05-31 Trajectory prediction method, apparatus, device and storage medium Pending CN114997307A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210610273.2A CN114997307A (en) 2022-05-31 2022-05-31 Trajectory prediction method, apparatus, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210610273.2A CN114997307A (en) 2022-05-31 2022-05-31 Trajectory prediction method, apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
CN114997307A 2022-09-02

Family

ID=83031305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210610273.2A Pending CN114997307A (en) 2022-05-31 2022-05-31 Trajectory prediction method, apparatus, device and storage medium

Country Status (1)

Country Link
CN (1) CN114997307A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761702A (en) * 2022-12-01 2023-03-07 广汽埃安新能源汽车股份有限公司 Vehicle track generation method and device, electronic equipment and computer readable medium
CN116562357A (en) * 2023-07-10 2023-08-08 深圳须弥云图空间科技有限公司 Click prediction model training method and device
CN116562357B (en) * 2023-07-10 2023-11-10 深圳须弥云图空间科技有限公司 Click prediction model training method and device
CN117475090A (en) * 2023-12-27 2024-01-30 粤港澳大湾区数字经济研究院(福田) Track generation model, track generation method, track generation device, terminal and medium

Similar Documents

Publication Publication Date Title
Park et al. Diverse and admissible trajectory forecasting through multimodal context understanding
US20230144209A1 (en) Lane line detection method and related device
CN112212874B (en) Vehicle track prediction method and device, electronic equipment and computer readable medium
CN114997307A (en) Trajectory prediction method, apparatus, device and storage medium
CN112015847B (en) Obstacle trajectory prediction method and device, storage medium and electronic equipment
CN114758502B (en) Dual-vehicle combined track prediction method and device, electronic equipment and automatic driving vehicle
CN115690153A (en) Intelligent agent track prediction method and system
Kolekar et al. Behavior prediction of traffic actors for intelligent vehicle using artificial intelligence techniques: A review
CN114360239A (en) Traffic prediction method and system for multilayer space-time traffic knowledge map reconstruction
Kim et al. Toward explainable and advisable model for self‐driving cars
CN114802303A (en) Obstacle trajectory prediction method, obstacle trajectory prediction device, electronic device, and storage medium
CN114627441A (en) Unstructured road recognition network training method, application method and storage medium
Kawasaki et al. Multimodal trajectory predictions for autonomous driving without a detailed prior map
Feng et al. Using appearance to predict pedestrian trajectories through disparity-guided attention and convolutional LSTM
CN115860102A (en) Pre-training method, device, equipment and medium for automatic driving perception model
Bharilya et al. Machine learning for autonomous vehicle's trajectory prediction: A comprehensive survey, challenges, and future research directions
Selvaraj et al. Edge learning of vehicular trajectories at regulated intersections
CN117237475A (en) Vehicle traffic track generation method and device based on diffusion generation model
CN113945222B (en) Road information identification method and device, electronic equipment, vehicle and medium
CN113119996B (en) Trajectory prediction method and apparatus, electronic device and storage medium
CN113537258B (en) Action track prediction method and device, computer readable medium and electronic equipment
CN116558541B (en) Model training method and device, and track prediction method and device
CN117036966B (en) Learning method, device, equipment and storage medium for point feature in map
Kim Explainable and Advisable Learning for Self-driving Vehicles
Luan et al. A modulized lane‐follower for driverless vehicles using multi‐frame

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination