CN112000756A - Method and device for predicting track, electronic equipment and storage medium - Google Patents

Method and device for predicting track, electronic equipment and storage medium

Info

Publication number
CN112000756A
Authority
CN
China
Prior art keywords
feature vector
position information
target
target object
historical position
Prior art date
Legal status
Granted
Application number
CN202010852421.2A
Other languages
Chinese (zh)
Other versions
CN112000756B (en)
Inventor
陶超凡
蒋沁宏
罗平
Current Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202010852421.2A priority Critical patent/CN112000756B/en
Publication of CN112000756A publication Critical patent/CN112000756A/en
Application granted granted Critical
Publication of CN112000756B publication Critical patent/CN112000756B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a trajectory prediction method and apparatus, an electronic device, and a storage medium. The method includes: determining a plurality of pieces of historical position information of a target object according to the motion track of the target object; taking one piece of the historical position information as target historical position information, and determining a feature vector queue matched with the target historical position information according to the hidden feature vectors corresponding to a plurality of pieces of historical position information before the target historical position information; and determining at least one predicted trajectory of the target object according to the last piece of historical position information of the target object and the feature vector queue matched with the last piece of historical position information.

Description

Method and device for predicting track, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of deep learning technologies, and in particular, to a method and an apparatus for trajectory prediction, an electronic device, and a storage medium.
Background
Trajectory prediction is an important task: given the acquired historical movement trajectory of each target object in a complex scene, the future trajectory of each object can be predicted. Trajectory prediction can be applied in fields such as autonomous driving, mobile robotics, and intelligent monitoring. For example, accurately predicting the future trajectories of target objects in a scene can effectively help autonomous vehicles and robots plan safe paths, or predict collisions that may occur in a monitored scene.
Disclosure of Invention
In view of the above, the present disclosure provides at least a method, an apparatus, an electronic device and a storage medium for trajectory prediction.
In a first aspect, the present disclosure provides a method of trajectory prediction, including:
determining a plurality of historical position information of a target object according to the motion track of the target object;
taking one piece of historical position information in the plurality of pieces of historical position information as target historical position information, and determining a feature vector queue matched with the target historical position information according to hidden feature vectors corresponding to a plurality of pieces of historical position information before the target historical position information;
and determining at least one predicted track of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information.
By adopting the method, for the target historical position information, the feature vector queue matched with it is determined from the hidden feature vectors corresponding to a plurality of pieces of historical position information before it. The resulting feature vector queue therefore contains the correlation features, in the time dimension, of the plurality of pieces of historical position information. When the predicted trajectory is then determined based on the last piece of historical position information, which carries these time-dimension correlation features, the accuracy of the predicted trajectory is higher.
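The three steps of the method can be sketched as follows. This is purely schematic: `encode_step`, the window length of 3, and the constant-velocity decode are illustrative stand-ins for the neural components described in later embodiments, not the disclosed network.

```python
from collections import deque

WINDOW = 3  # assumed queue length; the disclosure does not fix it


def encode_step(position, queue):
    """Hypothetical stand-in for the hidden-feature computation:
    mixes the current position with the mean of the queued vectors."""
    if queue:
        ctx = [sum(c) / len(queue) for c in zip(*queue)]
    else:
        ctx = [0.0, 0.0]
    return [0.5 * p + 0.5 * c for p, c in zip(position, ctx)]


def predict_trajectory(history, horizon=3):
    """history: list of (x, y) positions sampled from the motion track."""
    queue = deque(maxlen=WINDOW)          # feature vector queue
    for pos in history:                   # each position in turn is the
        hidden = encode_step(pos, queue)  # "target historical position"
        queue.append(hidden)
    # naive decode: constant-velocity extrapolation from the last position
    step = [history[-1][i] - history[-2][i] for i in range(2)]
    out, cur = [], list(history[-1])
    for _ in range(horizon):
        cur = [cur[i] + step[i] for i in range(2)]
        out.append(tuple(cur))
    return out


preds = predict_trajectory([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)])
```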
In one possible embodiment, after determining the feature vector queue matching the target historical location information, the method further comprises:
generating a hidden feature vector corresponding to the target historical position information according to the target historical position information and the feature vector queue matched with the target historical position information;
determining at least one predicted trajectory of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information, wherein the step of determining the at least one predicted trajectory of the target object comprises the following steps:
generating a hidden feature vector corresponding to the last historical position information according to the last historical position information of the target object and a feature vector queue matched with the last historical position information;
and determining at least one predicted track of the target object according to the hidden feature vector corresponding to the last historical position information.
In one possible implementation, the hidden feature vector corresponding to the target historical position information is determined according to the following steps:
generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, each hidden feature vector in the feature vector queue matched with the target historical position information, and a memory feature vector corresponding to each hidden feature vector.
The feature vector queue matched with the target historical position information contains the hidden feature vectors corresponding to a plurality of pieces of historical position information before the target historical position information. Because the hidden feature vector corresponding to the target historical position information is generated based on these preceding hidden feature vectors, the target historical position information itself, and the memory feature vector corresponding to each hidden feature vector, the generated hidden feature vector contains the correlation features, in the time dimension, of the preceding pieces of historical position information, which provides data support for subsequently generating a relatively accurate predicted trajectory.
In one possible embodiment, generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, each hidden feature vector in the feature vector queue matching the target historical position information, and a memory feature vector corresponding to each hidden feature vector includes:
generating an average feature vector corresponding to the feature vector queue based on each hidden feature vector in the feature vector queue matched with the target historical position information;
generating a memory feature vector corresponding to the target historical position information based on the target historical position information, the average feature vector, each hidden feature vector in the feature vector queue, and a memory feature vector corresponding to each hidden feature vector in the feature vector queue;
and generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, the average feature vector and a memory feature vector corresponding to the target historical position information.
Here, the process of generating the hidden feature vector corresponding to the target historical position information is described; it provides data support for generating the predicted trajectory of the target object based on the hidden feature vector corresponding to the last piece of historical position information of the target object and the image feature vector corresponding to the target object.
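The disclosure does not give equations for this step. The sketch below assumes an LSTM-like gated update in which the gates are driven by the target position and the average feature vector of the queue; all weight matrices are omitted (effectively fixed to identity) purely for illustration, so this is one plausible reading rather than the claimed cell.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def vec_mean(vectors):
    """Elementwise mean of a non-empty list of equal-length vectors."""
    return [sum(c) / len(vectors) for c in zip(*vectors)]


def update_cell(position, queue_hidden, queue_memory):
    """One assumed LSTM-like step: from the target position, the average
    feature vector of the queue, and the queued hidden/memory vectors,
    produce the memory and hidden vectors for this position."""
    avg = vec_mean(queue_hidden)       # average feature vector of the queue
    mem_ctx = vec_mean(queue_memory)   # pooled memory context
    # forget/input gating driven by position + average feature vector
    memory = [sigmoid(p + a) * m + sigmoid(p) * math.tanh(p + a)
              for p, a, m in zip(position, avg, mem_ctx)]
    # output gating to obtain the hidden feature vector
    hidden = [sigmoid(p + a) * math.tanh(m)
              for p, a, m in zip(position, avg, memory)]
    return hidden, memory


h, m = update_cell([0.5, 0.5],
                   [[0.1, 0.2], [0.3, 0.4]],
                   [[0.0, 0.0], [0.2, 0.2]])
```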
In a possible implementation manner, after generating a hidden feature vector corresponding to the target historical position information according to the target historical position information and the feature vector queue matched with the target historical position information, the method further includes:
based on hidden feature vectors corresponding to a plurality of pieces of historical position information of each target object in the plurality of target objects, adjusting the hidden feature vector corresponding to the target historical position information of a first target object and the hidden feature vector corresponding to the historical position information before the target historical position information to generate an adjusted hidden feature vector corresponding to the target historical position information; the first target object is any one of a plurality of target objects;
determining at least one predicted trajectory of the target object according to the hidden feature vector corresponding to the last historical position information, including:
and determining at least one predicted track of the target object according to the adjusted hidden feature vector corresponding to the last historical position information.
Here, when there are a plurality of target objects, considering that the movement trajectories of different target objects influence one another, the hidden feature vector corresponding to the target historical position information of the first target object, and the hidden feature vectors corresponding to the historical position information before it, may be adjusted using the hidden feature vectors corresponding to the plurality of pieces of historical position information of each of the plurality of target objects. The adjusted hidden feature vectors then contain the motion interaction features between different target objects, so the predicted trajectory of each target object can be determined more accurately based on the adjusted hidden feature vectors.
In one possible embodiment, the generating an adjusted hidden feature vector corresponding to the target historical position information by adjusting a hidden feature vector corresponding to the target historical position information of a first target object and a hidden feature vector corresponding to historical position information before the target historical position information based on hidden feature vectors corresponding to a plurality of pieces of historical position information of each of the plurality of target objects includes:
generating association degrees between the first target object and the plurality of target objects respectively based on any hidden feature vector to be adjusted corresponding to the first target object and hidden feature vectors of other target objects corresponding to historical position information, and generating normalization factors based on the association degrees;
and generating an adjusted hidden feature vector corresponding to any hidden feature vector to be adjusted based on the hidden feature vector to be adjusted, the normalization factor, the association degree between the first target object and the plurality of target objects respectively, and the linearly transformed hidden feature vectors corresponding to other target objects.
In the above embodiment, the adjustment process of the hidden feature vector to be adjusted of each target object is described, which provides data support for subsequently generating the predicted trajectory of the target object based on the adjusted hidden feature vector corresponding to the last historical position information of the target object and the image feature vector corresponding to the target object.
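The association degrees and normalization factor described above match the shape of dot-product attention; the sketch below assumes exactly that form, with the linear transform of the other objects' hidden vectors taken as the identity and a residual combination. Both choices are assumptions for illustration.

```python
import math


def adjust_hidden(h_first, others):
    """Adjust one hidden vector of the first target object using the
    hidden vectors of the other target objects (assumed dot-product
    attention; the linear transform is identity for illustration)."""
    # association degree between the first object and each other object
    scores = [sum(a * b for a, b in zip(h_first, h)) for h in others]
    # normalization factor generated from the association degrees
    norm = sum(math.exp(s) for s in scores)
    weights = [math.exp(s) / norm for s in scores]
    # weighted sum of the (identity-transformed) other hidden vectors
    mixed = [sum(w * h[i] for w, h in zip(weights, others))
             for i in range(len(h_first))]
    # residual combination with the vector being adjusted
    return [a + b for a, b in zip(h_first, mixed)]


adj = adjust_hidden([1.0, 0.0], [[0.0, 1.0], [1.0, 1.0]])
```

The softmax-style normalization keeps the attention weights on a comparable scale regardless of how many target objects share the scene.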
In a possible embodiment, the method further comprises:
determining at least one image characteristic vector corresponding to the target object according to the scene image of the scene where the target object is located;
determining at least one predicted trajectory of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information, wherein the step of determining the at least one predicted trajectory of the target object comprises the following steps:
and determining at least one predicted track of the target object according to the last historical position information of the target object, the feature vector queue matched with the last historical position information and at least one image feature vector corresponding to the target object.
In the embodiment of the present disclosure, at least one predicted trajectory of each target object is generated based on the last piece of position information of the target object, the feature vector queue matched with the last piece of historical position information, and at least one image feature vector corresponding to the target object. The scene image has a strong influence on the trajectory prediction of the target object: for example, at an intersection the target object has more directions in which it can move, while in a narrow lane it has fewer. Combining the image feature vectors when determining the predicted trajectory of the target object can therefore improve the accuracy of the predicted trajectory.
In a possible embodiment, determining at least one predicted trajectory of the target object according to the last historical position information of the target object, the feature vector queue matched with the last historical position information, and at least one image feature vector corresponding to the target object includes:
generating a hidden feature vector corresponding to the last historical position information according to the last historical position information of the target object and a feature vector queue matched with the last historical position information;
respectively cascading the hidden feature vector corresponding to the last historical position information with at least one image feature vector corresponding to the target object to generate at least one predicted feature vector;
and generating at least one predicted track corresponding to the target object based on each predicted feature vector.
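The cascading step above can be sketched directly; list concatenation stands in for vector concatenation, and one predicted feature vector is produced per image feature vector.

```python
def make_predicted_features(hidden, image_feats):
    """Concatenate ("cascade") the hidden feature vector of the last
    historical position with each image feature vector, yielding one
    predicted feature vector per image feature vector."""
    return [hidden + img for img in image_feats]


feats = make_predicted_features([0.1, 0.2], [[1.0], [2.0], [3.0]])
```

Each predicted feature vector then seeds one candidate predicted trajectory, which is how the method yields multiple trajectories per target object.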
In a possible implementation, determining at least one image feature vector corresponding to the target object according to the scene image includes:
determining a semantic image corresponding to the scene image based on the scene image;
extracting the features of the semantic image to obtain an intermediate feature vector corresponding to the semantic image;
obtaining a mean vector and a variance vector corresponding to the target object based on the intermediate feature vector and the plurality of historical position information of the target object;
and generating at least one image feature vector corresponding to the target object based on the mean vector and the variance vector corresponding to the target object.
In the above embodiment, a corresponding mean vector and variance vector are generated for each target object, and at least one image feature vector is then determined for each target object from its mean vector and variance vector. Combining the at least one image feature vector corresponding to the target object when determining its predicted trajectory can improve the accuracy of the predicted trajectory.
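The disclosure states only that the image feature vectors are generated from the mean vector and the variance vector; the sketch below assumes a Gaussian reparameterization (mean plus scaled noise), which is one common reading but not stated in the text.

```python
import random


def sample_image_features(mean, var, k=3, seed=0):
    """Draw k image feature vectors from a per-object mean vector and
    variance vector; the Gaussian sampling form is an assumption."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    feats = []
    for _ in range(k):
        feats.append([m + (v ** 0.5) * rng.gauss(0.0, 1.0)
                      for m, v in zip(mean, var)])
    return feats


samples = sample_image_features([0.0, 1.0], [0.04, 0.04], k=4)
```

Sampling several vectors per object is what lets the method later emit several plausible trajectories instead of a single deterministic one.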
In one possible embodiment, the trajectory prediction method is performed by a neural network, which is trained by:
acquiring training samples, wherein the training samples have labeling tracks, and each training sample comprises a scene image sample and sample historical position information of a sample object;
generating at least one predicted track corresponding to the training sample based on the training sample by using the neural network;
randomly selecting two hidden feature vectors from the hidden feature vectors corresponding to the historical position information of each sample, and determining a regularization loss based on the two selected hidden feature vectors; generating a prediction loss based on the predicted trajectory and the labeled trajectory;
and adjusting network parameters of the neural network based on the prediction loss and the regularization loss to obtain the trained neural network.
Here, considering that the movement trajectory of the target object is a coherent motion process, motion features that are close in time generally have correlation, while motion features that are distant in time generally do not. Therefore, two hidden feature vectors can be randomly selected from the hidden feature vectors corresponding to the historical position information of each sample, a regularization loss can be determined based on the two selected hidden feature vectors, and the neural network can be trained with both the prediction loss and the added regularization loss, so that the trained neural network is more accurate.
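The training objective can be sketched as a prediction loss plus a regularization loss over two randomly chosen hidden vectors. The mean-squared-displacement form of the prediction loss and the temporal weighting in the regularization term are assumptions; the disclosure only states that the two losses are combined.

```python
import random


def prediction_loss(pred, label):
    """Mean squared displacement between predicted and labeled points."""
    return sum((p[0] - l[0]) ** 2 + (p[1] - l[1]) ** 2
               for p, l in zip(pred, label)) / len(pred)


def regularization_loss(hiddens, seed=0):
    """Pick two hidden vectors at random and penalize their dissimilarity,
    weighted so that temporally close pairs are penalized more (the exact
    weighting is an assumption)."""
    rng = random.Random(seed)
    i, j = rng.sample(range(len(hiddens)), 2)
    dist = sum((a - b) ** 2 for a, b in zip(hiddens[i], hiddens[j]))
    return dist / (1.0 + abs(i - j))  # closer in time => stronger penalty


total = (prediction_loss([(0.0, 0.0), (1.0, 1.0)], [(0.0, 0.0), (1.0, 0.0)])
         + regularization_loss([[0.0], [0.1], [0.2], [0.3]]))
```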
The following descriptions of the effects of the apparatus, the electronic device, and the like refer to the description of the above method, and are not repeated here.
In a second aspect, the present disclosure provides an apparatus for trajectory prediction, comprising:
the first determination module is used for determining a plurality of historical position information of a target object according to the motion track of the target object;
a second determining module, configured to use one piece of historical location information in the plurality of pieces of historical location information as target historical location information, and determine, according to hidden feature vectors corresponding to a plurality of pieces of historical location information before the target historical location information, a feature vector queue that matches the target historical location information;
and the third determining module is used for determining at least one predicted track of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of trajectory prediction as set forth in the first aspect or any one of the embodiments above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of trajectory prediction according to the first aspect or any one of the embodiments described above.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can also obtain other related drawings from them without creative effort.
FIG. 1 is a flow chart illustrating a method of trajectory prediction provided by an embodiment of the present disclosure;
fig. 2 is a schematic flowchart illustrating a process of generating a hidden feature vector corresponding to target historical position information in a method for predicting a track according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart illustrating a neural network-based determination of a predicted trajectory in a trajectory prediction method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an architecture of a trajectory prediction apparatus provided in an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of an electronic device 500 provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
At present, the trajectory prediction of a target object, such as the trajectory prediction of a pedestrian or a vehicle, may be applied in various fields, such as the field of automatic driving, the field of robots, and the field of intelligent monitoring. The accurate prediction of the target object track can play an important role in the development of various fields such as the automatic driving field, the robot field and the intelligent monitoring field. Therefore, in order to realize accurate prediction of a target object track, the embodiment of the present disclosure provides a track prediction method.
For the convenience of understanding the embodiments of the present disclosure, a method for predicting a trajectory disclosed in the embodiments of the present disclosure will be described in detail first.
Referring to fig. 1, a schematic flow chart of a method for trajectory prediction provided in an embodiment of the present disclosure is shown, where the method includes S101-S103, where:
s101, determining a plurality of historical position information of the target object according to the motion track of the target object.
S102, taking one piece of historical position information in the plurality of pieces of historical position information as target historical position information, and determining a characteristic vector queue matched with the target historical position information according to hidden characteristic vectors corresponding to the plurality of pieces of historical position information before the target historical position information.
S103, determining at least one predicted track of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information.
By adopting the method, for the target historical position information, the feature vector queue matched with it is determined from the hidden feature vectors corresponding to a plurality of pieces of historical position information before it, so that the obtained feature vector queue contains the correlation features, in the time dimension, of the plurality of pieces of historical position information. When the predicted trajectory is then determined based on the last piece of historical position information, which carries these time-dimension correlation features, the accuracy of the predicted trajectory is higher.
For S101:
here, the motion trajectory of the target object is determined, and a plurality of pieces of historical position information are sampled from the motion trajectory of the target object. The motion trajectory of the target object may be a set of position information of the target object moving in the historical time period, and the position information may be coordinate information in a set pixel coordinate system.
Illustratively, a scene image can be obtained, and the scene image can be a bird's eye view of any scene; and a plurality of continuous historical position information of each target object included in the scene image can be acquired, namely, the historical movement track of each target object is acquired. Wherein, the number of the target objects can be one or more; the target object may be a pedestrian, a motor vehicle, a non-motor vehicle, or the like.
For S102 and S103:
here, each of the plurality of pieces of historical position information is set as target historical position information, and a feature vector queue matching the target historical position information is determined from hidden feature vectors corresponding to a plurality of pieces of historical position information preceding the target historical position information. For example, hidden feature vectors corresponding to 3 pieces of historical position information before the target historical position information can be obtained; and determining a feature vector queue formed by the acquired 3 hidden feature vectors as a feature vector queue matched with the target historical position information. And further determining at least one predicted track of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information.
In one possible embodiment, after determining the feature vector queue matching the target historical location information, the method further includes:
generating a hidden feature vector corresponding to the historical target position information according to the historical target position information and a feature vector queue matched with the historical target position information;
determining at least one predicted trajectory of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information, wherein the step of determining the at least one predicted trajectory of the target object comprises the following steps:
generating a hidden feature vector corresponding to the last historical position information according to the last historical position information of the target object and a feature vector queue matched with the last historical position information;
and determining at least one predicted track of the target object according to the hidden feature vector corresponding to the last historical position information.
Here, when there are a plurality of target objects, the target historical position information may be selected sequentially from the plurality of pieces of historical position information of each target object according to the movement trajectory of that target object. For example, for target object A, suppose ten consecutive historical positions of target object A are obtained, that is, the plurality of consecutive pieces of historical position information of target object A include historical position information M_1, historical position information M_2, ..., historical position information M_10, and the movement trajectory of the target object is that target object A moves first to the position corresponding to M_1 and finally to the position corresponding to M_10. Then, following the motion trajectory of the target object, M_1 is first taken as the target historical position information of target object A, then M_2 is taken as the target historical position information of target object A, and so on, until M_10 is taken as the target historical position information of target object A.
Here, the number of hidden feature vectors included in the feature vector queue may be set as needed. If the feature vector queue includes 3 hidden feature vectors, the feature vector queue corresponding to the target historical position information M_t may be {h_{t-3}, h_{t-2}, h_{t-1}}, i.e. q is 3, where h_{t-1} may be the hidden feature vector corresponding to the historical position information M_{t-1}, h_{t-2} the hidden feature vector corresponding to the historical position information M_{t-2}, and so on.
For example, the historical position information M_1 may first be taken as the target historical position information of target object A, and the matched feature vector queue {h_{-2}, h_{-1}, h_0} acquired. Since there is no historical position information before M_1, each hidden feature vector in this queue may be a preset initial feature vector, i.e. h_{-2}, h_{-1} and h_0 are all initial feature vectors. The initial feature vector may be a feature vector in which each feature value is a preset value; for example, the preset value may be 0. Further, the hidden feature vector h_1 corresponding to the historical position information M_1 may be generated based on M_1 and the matched feature vector queue {h_{-2}, h_{-1}, h_0}.
Next, the historical position information M_2 may be taken as the target historical position information of target object A, and the matched feature vector queue {h_{-1}, h_0, h_1} acquired, where h_{-1} and h_0 are initial feature vectors and h_1 is the hidden feature vector corresponding to M_1 generated in the above process. Further, the hidden feature vector h_2 corresponding to M_2 may be generated based on M_2 and the matched feature vector queue {h_{-1}, h_0, h_1}. Based on the same process, the historical position information M_10 can finally be taken as the target historical position information of target object A, and the matched feature vector queue {h_7, h_8, h_9} acquired, where h_9, h_8 and h_7 are the generated hidden feature vectors corresponding to M_9, M_8 and M_7 respectively. Further, the hidden feature vector h_10 corresponding to M_10 may be generated based on M_10 and the matched feature vector queue {h_7, h_8, h_9}; that is, the hidden feature vector corresponding to the last historical position information among the plurality of consecutive pieces of historical position information is generated.
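The rolling-queue bookkeeping described above can be sketched as follows. This is a minimal illustration only, assuming q = 3, zero-valued initial feature vectors, and a 4-dimensional hidden vector; `make_hidden` is a hypothetical stand-in for the LSTM-style cell described later, not the actual network.

```python
from collections import deque

import numpy as np

Q = 3    # queue length q
DIM = 4  # hidden feature vector dimensionality (illustrative)

def make_hidden(position, queue):
    """Placeholder for the cell mapping (position, queue) -> hidden vector."""
    return np.mean(queue, axis=0) + position  # stand-in computation only

# The queue for M_1 starts with q initial (all-zero) feature vectors h_-2, h_-1, h_0.
queue = deque([np.zeros(DIM) for _ in range(Q)], maxlen=Q)

positions = [np.full(DIM, t, dtype=float) for t in range(1, 11)]  # stands in for M_1 .. M_10
for m_t in positions:
    h_t = make_hidden(m_t, list(queue))  # hidden vector for the current position
    queue.append(h_t)                    # oldest hidden vector drops out (maxlen=Q)

# After processing M_10, the queue holds h_8, h_9, h_10 -- the queue that would
# be matched with the position following M_10.
print(len(queue))  # 3
```

The `maxlen=Q` deque reproduces the behaviour that appending the newly generated hidden vector automatically evicts the oldest one.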
As an alternative embodiment, the hidden feature vector corresponding to the target historical location information may be determined according to the following steps: and generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, each hidden feature vector in the feature vector queue matched with the target historical position information and the memory feature vector corresponding to each hidden feature vector.
Here, each hidden feature vector corresponds to a memory feature vector; for example, the hidden feature vector h_10 corresponds to the memory feature vector c_10. The feature vector queue correspondingly has a memory cell queue; for example, the feature vector queue {h_7, h_8, h_9} corresponds to the memory cell queue {c_7, c_8, c_9}. Here, the hidden feature vector and the memory feature vector may be feature vectors generated by long short-term memory cells in a Long Short-Term Memory network (LSTM).
Taking the historical position information M_10 as an example, the hidden feature vector h_10 corresponding to the target historical position information may be generated based on the target historical position information M_10, the feature vector queue {h_7, h_8, h_9} matched with the target historical position information, and the memory feature vectors {c_7, c_8, c_9} corresponding to the hidden feature vectors. Here, c_7 is the memory feature vector corresponding to the hidden feature vector h_7, c_8 the one corresponding to h_8, and c_9 the one corresponding to h_9.
In the above embodiment, the target historical position information is matched with a corresponding feature vector queue that includes the hidden feature vectors of a plurality of pieces of preceding historical position information, and its hidden feature vector is generated from those hidden feature vectors together with the memory feature vector corresponding to each of them. The generated hidden feature vector therefore contains the associated features, in the time dimension, among the plurality of pieces of historical position information preceding the target historical position information, providing data support for subsequently generating a relatively accurate predicted trajectory.
As an optional implementation, generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, each hidden feature vector in the feature vector queue matched with the target historical position information, and a memory feature vector corresponding to each hidden feature vector, includes:
firstly, generating an average feature vector corresponding to a feature vector queue based on each hidden feature vector in the feature vector queue matched with the target historical position information.
And secondly, generating a memory feature vector corresponding to the target historical position information based on the target historical position information, the average feature vector, each hidden feature vector in the feature vector queue and a memory feature vector corresponding to each hidden feature vector in the feature vector queue.
And thirdly, generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, the average feature vector and the memory feature vector corresponding to the target historical position information.
Referring to fig. 2, a schematic flow chart of generating a hidden feature vector corresponding to the target historical position information in a method for predicting a trajectory is shown. Step one, step two and step three are described below with reference to fig. 2.
Explaining step one: the hidden feature vectors in the feature vector queue may be added element-wise and then averaged to obtain the average feature vector corresponding to the feature vector queue. That is, the average feature vector corresponding to the feature vector queue can be calculated with the following formula (1):
h̄_t^i = (1/q) Σ_{l=1}^{q} h_{t-l}^i    (1)
where q is the number of hidden feature vectors in the feature vector queue, h_{t-l}^i denotes each hidden feature vector in the feature vector queue corresponding to target object i, i is the identifier of the target object, and q, i and l are positive integers.
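Formula (1) is a plain element-wise mean over the q hidden feature vectors in the queue; a minimal sketch (the 2-dimensional vectors are illustrative values only):

```python
import numpy as np

def average_feature_vector(queue):
    """Formula (1): element-wise mean of the q hidden feature vectors in the queue."""
    return np.mean(np.stack(queue), axis=0)

# Illustrative queue with q = 3 hidden feature vectors.
queue = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
h_bar = average_feature_vector(queue)
print(h_bar)  # [3. 4.]
```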
Illustratively, for target object A (whose identifier i may be 1), if the feature vector queue matched with the target historical position information x_t^1 is {h_{t-3}^1, h_{t-2}^1, h_{t-1}^1}, the hidden feature vectors h_{t-3}^1, h_{t-2}^1 and h_{t-1}^1 may be added element-wise and averaged to obtain the average feature vector h̄_t^1 corresponding to the feature vector queue, where q is 3.
Referring to step two, as can be seen from fig. 2, a first intermediate feature vector u_t^i and a second intermediate feature vector g_t^i may be obtained based on the target historical position information x_t^i and the average feature vector h̄_t^i. Specifically, u_t^i can be calculated according to formula (2):
u_t^i = σ(W_u x_t^i + U_u h̄_t^i + b_u)    (2)
and g_t^i can be calculated according to formula (3):
g_t^i = σ(W_g x_t^i + U_g h̄_t^i + b_g)    (3)
where W_u, U_u and b_u are first network unit parameters, W_g, U_g and b_g are second network unit parameters, σ(·) is the activation function operation, W and U are different weight matrices, and b is a bias vector.
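Formulas (2) and (3) can be sketched as two gated affine maps. The dimensions and the randomly initialised parameters below are illustrative assumptions, not values from the patent; `W_u`, `U_u`, `b_u` and `W_g`, `U_g`, `b_g` play the roles of the first and second network unit parameters.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation, standing in for sigma(.)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
POS_DIM, HID_DIM = 2, 4  # illustrative sizes

# First and second network unit parameters (randomly initialised for illustration).
W_u, U_u, b_u = rng.normal(size=(HID_DIM, POS_DIM)), rng.normal(size=(HID_DIM, HID_DIM)), np.zeros(HID_DIM)
W_g, U_g, b_g = rng.normal(size=(HID_DIM, POS_DIM)), rng.normal(size=(HID_DIM, HID_DIM)), np.zeros(HID_DIM)

x_t = np.array([0.5, -1.0])       # target historical position information x_t^i
h_bar = rng.normal(size=HID_DIM)  # average feature vector from formula (1)

u_t = sigmoid(W_u @ x_t + U_u @ h_bar + b_u)  # formula (2): first intermediate feature vector
g_t = sigmoid(W_g @ x_t + U_g @ h_bar + b_g)  # formula (3): second intermediate feature vector
print(u_t.shape, g_t.shape)  # (4,) (4,)
```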
Meanwhile, a third intermediate feature vector f_{t,l}^i may be generated for each hidden feature vector in the feature vector queue, based on the target historical position information and that hidden feature vector, i.e. the feature vector corresponding to the forget gate included in the LSTM. Specifically, f_{t,l}^i can be calculated according to formula (4):
f_{t,l}^i = σ(W_f x_t^i + U_f h_{t-l}^i + b_f)    (4)
where W_f, U_f and b_f are third network unit parameters; the vectors f_{t,1}^i, ..., f_{t,q}^i shown in fig. 2 can thereby be obtained.
Then, the memory feature vector c_t^i corresponding to the target historical position information may be generated based on the first intermediate feature vector u_t^i, the second intermediate feature vector g_t^i, the third intermediate feature vector f_{t,l}^i corresponding to each hidden feature vector, and the memory feature vector c_{t-l}^i corresponding to each hidden feature vector in the feature vector queue. Specifically, c_t^i can be calculated according to formula (5):
c_t^i = u_t^i ⊙ g_t^i + Σ_{l=1}^{q} f_{t,l}^i ⊙ c_{t-l}^i    (5)
where ⊙ is the element-wise multiplication operator.
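Continuing the sketch, formulas (4) and (5) compute one forget gate per queue element and combine the gated memory vectors with the input term. All parameters, dimensions and input values below are illustrative assumptions; the combination form follows the surrounding description.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
POS_DIM, HID_DIM, Q = 2, 4, 3  # illustrative sizes, q = 3

# Forget-gate parameters W_f, U_f, b_f (randomly initialised for illustration).
W_f = rng.normal(size=(HID_DIM, POS_DIM))
U_f = rng.normal(size=(HID_DIM, HID_DIM))
b_f = np.zeros(HID_DIM)

x_t = np.array([0.5, -1.0])                             # target historical position information
queue_h = [rng.normal(size=HID_DIM) for _ in range(Q)]  # h_{t-3}, h_{t-2}, h_{t-1}
queue_c = [rng.normal(size=HID_DIM) for _ in range(Q)]  # c_{t-3}, c_{t-2}, c_{t-1}
u_t = rng.uniform(size=HID_DIM)  # first intermediate feature vector, formula (2)
g_t = rng.uniform(size=HID_DIM)  # second intermediate feature vector, formula (3)

# Formula (4): one forget gate per hidden feature vector in the queue.
f_t = [sigmoid(W_f @ x_t + U_f @ h_l + b_f) for h_l in queue_h]

# Formula (5): memory feature vector for the target historical position information.
c_t = u_t * g_t + sum(f_l * c_l for f_l, c_l in zip(f_t, queue_c))
print(c_t.shape)  # (4,)
```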
Here, when the historical position information M_1 is taken as the target historical position information of target object A, the feature vector queue matched with the target historical position information is {h_{-2}, h_{-1}, h_0}. Since there is no historical position information before M_1, h_{-2}, h_{-1} and h_0 in the feature vector queue are all initial feature vectors; at this point, the memory feature vector corresponding to each hidden feature vector in the feature vector queue is also an initial feature vector, that is, a feature vector in which each feature value is zero.
Explaining step three: a fourth intermediate feature vector o_t^i may be obtained based on the target historical position information x_t^i and the average feature vector h̄_t^i. Specifically, o_t^i can be calculated according to formula (6):
o_t^i = σ(W_o x_t^i + U_o h̄_t^i + b_o)    (6)
And finally, the hidden feature vector corresponding to the target historical position information is generated according to the target historical position information, the average feature vector and the memory feature vector corresponding to the target historical position information. Specifically, the hidden feature vector h_t^i corresponding to the target historical position information can be calculated according to formula (7):
h_t^i = o_t^i ⊙ tanh(c_t^i)    (7)
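Formulas (6) and (7) close the cell: an output-style gate computed from the position and the average feature vector, then the hidden feature vector as the gated tanh of the memory vector. Again an illustrative sketch with assumed dimensions and randomly initialised parameters `W_o`, `U_o`, `b_o`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
POS_DIM, HID_DIM = 2, 4  # illustrative sizes

# Output-gate parameters (randomly initialised for illustration).
W_o = rng.normal(size=(HID_DIM, POS_DIM))
U_o = rng.normal(size=(HID_DIM, HID_DIM))
b_o = np.zeros(HID_DIM)

x_t = np.array([0.5, -1.0])       # target historical position information
h_bar = rng.normal(size=HID_DIM)  # average feature vector, formula (1)
c_t = rng.normal(size=HID_DIM)    # memory feature vector, formula (5)

o_t = sigmoid(W_o @ x_t + U_o @ h_bar + b_o)  # formula (6): fourth intermediate feature vector
h_t = o_t * np.tanh(c_t)                      # formula (7): hidden feature vector
print(h_t.shape)  # (4,)
```

Since o_t lies in (0, 1) and tanh in (-1, 1), every component of the resulting hidden feature vector is bounded in magnitude by 1.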
Further, an updated feature vector queue may be generated based on the hidden feature vector corresponding to the target historical position information and the hidden feature vectors corresponding to the historical position information before the target historical position information, and an updated memory cell queue may be generated based on the memory feature vector corresponding to the target historical position information and the memory feature vectors corresponding to the historical position information before the target historical position information. That is, the updated feature vector queue shown in fig. 2 is {h_{t-2}^i, h_{t-1}^i, h_t^i}, and the updated memory cell queue is {c_{t-2}^i, c_{t-1}^i, c_t^i}. The updated feature vector queue is also the feature vector queue corresponding to the adjacent historical position information after the target historical position information.
For example, for target object A (whose identifier i may be 1) and target historical position information x_t^1 (i.e. historical position information M_t of target object A), the matched feature vector queue is {h_{t-3}^1, h_{t-2}^1, h_{t-1}^1}. The process of calculating the hidden feature vector h_t^1 corresponding to the target historical position information may be: obtain the average feature vector h̄_t^1 corresponding to the feature vector queue by formula (1); obtain the first intermediate feature vector u_t^1 and the second intermediate feature vector g_t^1 corresponding to the target historical position information by formulas (2) and (3); then calculate the third intermediate feature vector f_{t,l}^1 corresponding to each hidden feature vector in the feature vector queue by formula (4); obtain the memory feature vector c_t^1 corresponding to the target historical position information by formula (5); meanwhile, calculate the fourth intermediate feature vector o_t^1 by formula (6); and finally calculate the hidden feature vector h_t^1 corresponding to the target historical position information by formula (7).
In the above embodiment, the step of generating the hidden feature vector corresponding to the target historical position information has been described; this provides data support for subsequently generating the predicted trajectory of the target object based on the hidden feature vector corresponding to the target object's last historical position information and the image feature vector corresponding to the target object.
In specific implementation, the number of hidden feature vectors included in the feature vector queue matched with the target historical position information, that is, the value of q described in the above process, may be set. When q is 3, the hidden feature vectors corresponding to the 3 pieces of historical position information before the target historical position information may be acquired, and the feature vector queue matched with the target historical position information formed from the acquired 3 hidden feature vectors.
In the above embodiment, a feature vector queue matching the target historical position information is obtained, where the feature vector queue includes hidden feature vectors corresponding to a plurality of pieces of historical position information before the target historical position information, and data support is provided for determining the hidden feature vectors corresponding to the target historical position information.
In an optional implementation manner, when there are a plurality of target objects, after generating the hidden feature vector corresponding to the target historical position information according to the target historical position information and the feature vector queue matched with it, the method further includes: based on the hidden feature vectors corresponding to the plurality of pieces of historical position information of each of the plurality of target objects, adjusting the hidden feature vector corresponding to the target historical position information of a first target object and the hidden feature vectors corresponding to the historical position information before the target historical position information, to generate an adjusted hidden feature vector corresponding to the target historical position information; the first target object is any one of the plurality of target objects.
Determining at least one predicted track of the target object according to the hidden feature vector corresponding to the last historical position information, wherein the method comprises the following steps: and determining at least one predicted track of the target object according to the adjusted hidden feature vector corresponding to the last historical position information.
In the embodiment of the present disclosure, it is considered that the movement trajectories of a plurality of target objects may have an influence, for example, the movement trajectory of the target object a may have an influence on the movement trajectory of the target object B. Therefore, after the hidden feature vector corresponding to the target historical position information of each target object (i.e., the first target object) is generated, a plurality of hidden feature vectors of each target object may be adjusted to generate an adjusted hidden feature vector corresponding to the target historical position information.
After the hidden feature vector corresponding to the target historical position information of any target object is generated, the feature vector queue corresponding to that target historical position information can be updated with the newly generated hidden feature vector. Each hidden feature vector in the updated feature vector queue is then adjusted to generate an adjusted feature vector queue, which includes the adjusted hidden feature vector corresponding to the target historical position information; the adjusted feature vector queue can then be determined as the feature vector queue matched with the next piece of target historical position information.
For example, after the hidden feature vector h_t^i corresponding to the target historical position information x_t^i is generated, if the feature vector queue corresponding to x_t^i is {h_{t-3}^i, h_{t-2}^i, h_{t-1}^i}, then the updated feature vector queue is {h_{t-2}^i, h_{t-1}^i, h_t^i}. The queue {h_{t-2}^i, h_{t-1}^i, h_t^i} may then be adjusted, and the adjusted queue {ĥ_{t-2}^i, ĥ_{t-1}^i, ĥ_t^i} determined as the feature vector queue corresponding to the next target historical position information x_{t+1}^i, where ĥ_t^i is the adjusted hidden feature vector corresponding to the target historical position information x_t^i. That is, the hidden feature vectors ĥ_{t-2}^i, ĥ_{t-1}^i and ĥ_t^i included in the feature vector queue corresponding to x_{t+1}^i are all generated adjusted hidden feature vectors.
Illustratively, for any target object (i.e. the first target object), after obtaining the hidden feature vector h_t^i corresponding to its target historical position information x_t^i, the hidden feature vector h_t^i corresponding to x_t^i, the hidden feature vector h_{t-1}^i corresponding to the historical position information x_{t-1}^i, and the hidden feature vector h_{t-2}^i corresponding to the historical position information x_{t-2}^i may be adjusted, and the adjusted hidden feature vectors ĥ_t^i, ĥ_{t-1}^i and ĥ_{t-2}^i determined as the feature vector queue corresponding to the next target historical position information x_{t+1}^i of the first target object.
Here, when the at least one target object is a plurality of target objects, considering that the movement trajectories of different target objects influence one another, the hidden feature vectors corresponding to the plurality of pieces of historical position information of each of the plurality of target objects may be used to adjust the hidden feature vector corresponding to the target historical position information of the first target object and the hidden feature vectors corresponding to the historical position information before it, so that the adjusted hidden feature vectors include the motion interaction features between different target objects; the predicted trajectory of each target object can then be determined more accurately based on the adjusted hidden feature vectors.
In an optional implementation, adjusting, based on hidden feature vectors corresponding to a plurality of pieces of historical location information of each of a plurality of target objects, a hidden feature vector corresponding to target historical location information of a first target object and a hidden feature vector corresponding to historical location information before the target historical location information to generate an adjusted hidden feature vector corresponding to the target historical location information may include:
step one, generating association degrees between the first target object and the plurality of target objects respectively based on any hidden feature vector to be adjusted corresponding to the first target object and hidden feature vectors of other target objects corresponding to historical position information, and generating a normalization factor based on each association degree.
And secondly, generating an adjusted hidden feature vector corresponding to any hidden feature vector to be adjusted based on any hidden feature vector to be adjusted, the normalization factor, the association degree between the first target object and the plurality of target objects respectively and the linearly transformed hidden feature vectors corresponding to other target objects.
In specific implementation, each hidden feature vector to be adjusted of each target object can be adjusted through the following formulas (8) and (9):
α_l^{ij} = (W_θ h_l^i)^T (W_φ h_l^j)    (8)
ĥ_l^i = h_l^i ⊕ (1/C_l^i) Σ_{j∈N(i)} α_l^{ij} (W_g h_l^j)    (9)
where N(i) denotes the neighboring target objects of target object i (for example, if the number of the plurality of target objects is 3, the number of neighboring target objects of target object i is 2), j is the identifier of a target object other than target object i among the plurality of target objects, α_l^{ij} is the degree of association between target object i and each neighboring target object, C_l^i is the normalization factor generated from the degrees of association, ⊕ is the element-wise addition operator, and W_θ, W_φ and W_g are adjustment network parameters.
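The adjustment in formulas (8) and (9) can be sketched as a non-local-style operation. The exact form of the association score and normalization is an assumption reconstructed from the named parameters; `W_theta`, `W_phi` and `W_g_adj` play the roles of W_θ, W_φ and W_g, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
HID_DIM, N_OBJ = 4, 3  # illustrative sizes: 3 target objects

# Adjustment network parameters (randomly initialised for illustration).
W_theta = rng.normal(size=(HID_DIM, HID_DIM))
W_phi = rng.normal(size=(HID_DIM, HID_DIM))
W_g_adj = rng.normal(size=(HID_DIM, HID_DIM))

# Hidden feature vectors to be adjusted for target objects 0, 1, 2 (kept small).
h = [0.5 * rng.normal(size=HID_DIM) for _ in range(N_OBJ)]

def adjust(i):
    """Sketch of formulas (8)-(9): adjust object i's hidden vector via its neighbours."""
    neighbours = [j for j in range(N_OBJ) if j != i]
    # Formula (8) (assumed embedded-dot-product form): association i <-> j.
    alpha = {j: np.exp((W_theta @ h[i]) @ (W_phi @ h[j])) for j in neighbours}
    norm = sum(alpha.values())  # normalization factor
    # Formula (9): element-wise addition of the normalised, transformed neighbours.
    return h[i] + sum((alpha[j] / norm) * (W_g_adj @ h[j]) for j in neighbours)

h_adj = [adjust(i) for i in range(N_OBJ)]
print(len(h_adj), h_adj[0].shape)  # 3 (4,)
```

Each target object's adjusted hidden vector thereby absorbs information from the other objects' hidden vectors, modelling the motion interaction between them.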
For example, if the plurality of target objects include target object A, target object B and target object C, the hidden feature vectors to be adjusted corresponding to each target object include h_l^i, where i is 1, 2 and 3 (that is, the identifier i corresponding to target object A is 1, the identifier i corresponding to target object B is 2, and the identifier i corresponding to target object C is 3), l is 0, 1 and 2, and the number of neighboring target objects of each target object is 2.
For target object A, if its hidden feature vector to be adjusted h_l^1 needs to be adjusted, then based on the hidden feature vector to be adjusted h_l^1 of target object A, the hidden feature vector h_l^2 of target object B, and the hidden feature vector h_l^3 of target object C, the adjusted hidden feature vector ĥ_l^1 of target object A can be obtained using formulas (8) and (9). Further, each adjusted hidden feature vector corresponding to each target object can be obtained.
In the above embodiment, the adjustment process of the hidden feature vectors to be adjusted of each target object has been described; this provides data support for subsequently generating the predicted trajectory of the target object based on the adjusted hidden feature vector corresponding to the target object's last historical position information and the image feature vector corresponding to the target object.
In one possible embodiment, the method further comprises: determining at least one image characteristic vector corresponding to the target object according to the scene image of the scene where the target object is located;
determining at least one predicted trajectory of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information includes: determining at least one predicted trajectory of the target object according to the last historical position information of the target object, the feature vector queue matched with the last historical position information, and at least one image feature vector corresponding to the target object.
Here, a scene image of a scene in which the target object is located may be acquired, where the scene image may be a bird's eye view including the target object, and then at least one image feature vector corresponding to the target object may be determined based on the scene image. And further determining at least one predicted track of the target object based on the last historical position information of the target object, the feature vector queue matched with the last historical position information and at least one image feature vector corresponding to the target object.
In the embodiment of the disclosure, at least one predicted trajectory of each target object is generated based on the last position information of each target object, the feature vector queue matched with that last historical position information, and at least one image feature vector corresponding to the target object. The scene image has a considerable influence on trajectory prediction for the target object: for example, at an intersection the target object has more possible directions of movement, while in a narrow lane it has fewer. Combining the image feature vectors when determining the predicted trajectory of the target object therefore improves the accuracy of the predicted trajectory.
Here, the predicted trajectory may be configured by a plurality of pieces of predicted position information. Determining at least one predicted track of the target object according to the last historical position information of the target object, the feature vector queue matched with the last historical position information and at least one image feature vector corresponding to the target object, wherein the step of determining the at least one predicted track of the target object comprises the following steps:
generating a hidden feature vector corresponding to the last historical position information according to the last historical position information of the target object and a feature vector queue matched with the last historical position information;
secondly, respectively cascading the hidden feature vector corresponding to the last historical position information of each target object with at least one image feature vector corresponding to the target object determined according to the scene image to generate at least one predicted feature vector;
and thirdly, generating at least one predicted track corresponding to the target object based on each predicted feature vector.
In specific implementation, a hidden feature vector corresponding to the last historical position information may be generated according to the last historical position information of the target object and a feature vector queue matched with the last historical position information. Then, the hidden feature vector corresponding to the last historical position information is respectively cascaded with at least one image feature vector corresponding to the target object to generate at least one predicted feature vector; and generating at least one predicted track corresponding to the target object based on each predicted feature vector.
For example, if each image feature vector corresponding to the target object a includes: the image feature vector I, the image feature vector II and the image feature vector III can be used for cascading a hidden feature vector corresponding to the last historical position information of the target object A with the image feature vector I to generate a predicted feature vector I; cascading a hidden feature vector corresponding to the last historical position information of the target object A with an image feature vector II to generate a predicted feature vector II; cascading a hidden feature vector corresponding to the last historical position information of the target object A with an image feature vector III to generate a predicted feature vector III; and then three predicted tracks of the target object A are generated based on the first predicted characteristic vector, the second predicted characteristic vector and the third predicted characteristic vector.
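The cascading in step two can be sketched as plain concatenation, producing one predicted feature vector per image feature vector. The dimensions and the placeholder decoding step are illustrative assumptions.

```python
import numpy as np

HID_DIM, IMG_DIM = 4, 3  # illustrative sizes

h_last = np.arange(HID_DIM, dtype=float)  # hidden vector of the last historical position
# Three image feature vectors for the target object (e.g. image feature vectors I, II, III).
image_features = [np.full(IMG_DIM, k, dtype=float) for k in range(3)]

# Step two: concatenate the last hidden feature vector with each image feature vector.
predicted_features = [np.concatenate([h_last, v]) for v in image_features]

# Step three (placeholder): each predicted feature vector would be decoded into one
# predicted trajectory, so three image feature vectors yield three predicted trajectories.
print(len(predicted_features), predicted_features[0].shape)  # 3 (7,)
```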
Determining at least one image feature vector corresponding to the target object according to the scene image may include:
firstly, determining a semantic image corresponding to a scene image based on the scene image.
And secondly, extracting features of the semantic image to obtain an intermediate feature vector corresponding to the semantic image.
And thirdly, obtaining a mean vector and a variance vector corresponding to the target object based on the intermediate feature vector and a plurality of historical position information corresponding to the target object.
And fourthly, generating at least one image feature vector corresponding to the target object based on the mean vector and the variance vector corresponding to the target object.
Here, the scene image may be input to a semantic detection neural network trained in advance, and a semantic image corresponding to the scene image may be generated. Alternatively, the size of the scene image may be adjusted (to a preset size), and the adjusted scene image may be input to a pre-trained semantic detection neural network to generate a semantic image corresponding to the scene image. Here, the semantic image corresponding to the scene image may be generated by using a pre-trained semantic detection neural network in an offline manner, so as to reduce the time for generating the predicted trajectory of the target object and improve the generation efficiency of the predicted trajectory.
Feature extraction may be performed on the semantic image to obtain an intermediate feature vector corresponding to the semantic image. The intermediate feature vector corresponding to the semantic image is then fused with the plurality of pieces of historical position information corresponding to each target object, and the fused feature vector corresponding to each target object is input into a fully connected layer for processing to obtain a mean vector and a variance vector corresponding to each target object. For example, the fusion may be performed by adding the intermediate feature vector corresponding to the semantic image and the plurality of pieces of continuous historical position information corresponding to each target object element by element, to obtain the fused feature vector corresponding to the target object. Finally, at least one image feature vector corresponding to each target object may be generated based on the mean vector and the variance vector corresponding to the target object by using the re-parameterization trick. For example, a normal distribution corresponding to the target object A may be obtained based on the mean vector and the variance vector corresponding to the target object A, and multiple samples may be drawn from the obtained normal distribution to obtain at least one image feature vector corresponding to the target object A.
The number of the at least one image feature vector corresponding to each target object can be determined according to the number of the predicted tracks required to be obtained. For example, if the number of predicted trajectories corresponding to each target object is 3, 3 image feature vectors corresponding to the target object may be generated based on the mean vector and the variance vector corresponding to each target object.
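The sampling step can be sketched as follows, assuming the re-parameterization form z = μ + σ·ε with ε drawn from a standard normal distribution; the function name and the use of Python's `random` module are our own choices, not from the patent:

```python
import random

# Hedged sketch of the re-parameterization trick: one image feature vector
# z = mu + sigma * eps, eps ~ N(0, 1), sampled per required predicted
# trajectory. `var_vec` holds variances, so sigma is its element-wise sqrt.

def sample_image_feature_vectors(mean_vec, var_vec, num_trajectories, rng=random):
    samples = []
    for _ in range(num_trajectories):
        z = [m + (v ** 0.5) * rng.gauss(0.0, 1.0)
             for m, v in zip(mean_vec, var_vec)]
        samples.append(z)
    return samples

# three image feature vectors for one target object, as in the text's example
z_list = sample_image_feature_vectors([0.0, 1.0], [0.04, 0.09], num_trajectories=3)
```

With zero variance every sample collapses to the mean vector, which makes the role of the variance vector in diversifying the predicted trajectories explicit.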
In the above embodiment, a corresponding mean vector and variance vector are generated for each target object, and at least one image feature vector corresponding to each target object is determined based on that target object's mean vector and variance vector. When the predicted trajectory of a target object is then determined in combination with the at least one image feature vector corresponding to the target object, the accuracy of the predicted trajectory can be improved.
In an alternative embodiment, the method of predicting trajectories is performed by a neural network, the neural network being trained by:
the method comprises the steps of firstly, obtaining training samples, wherein the training samples are provided with labeling tracks, and each training sample comprises a scene image sample and sample historical position information of a sample object.
And secondly, generating at least one predicted track corresponding to the training sample based on the training sample by utilizing the neural network.
And thirdly, randomly selecting two hidden feature vectors from the hidden feature vectors corresponding to the historical position information of each sample, and determining a regular loss based on the two selected hidden feature vectors; and generating a prediction loss based on the predicted trajectory and the annotated trajectory.
And fourthly, adjusting network parameters of the neural network based on the prediction loss and the regular loss to obtain the trained neural network.
The training sample may include a scene image sample and sample historical position information of a sample object, where the sample historical position information is the position information of the sample object in the scene image; each training sample also carries an annotated trajectory. The obtained training samples are input into the neural network, and at least one predicted trajectory corresponding to each training sample is generated. Furthermore, a loss value of the neural network may be determined, and the network parameters of the neural network are adjusted based on the loss value until the neural network after parameter adjustment meets a set condition. The set condition may be that the loss value of the neural network is smaller than a set loss threshold; alternatively, it may be that the detection accuracy of the neural network is greater than a set accuracy threshold, and the like.
Here, the loss value of the neural network may include a regular loss and a predicted loss, that is, the calculation formula of the loss value of the neural network may be the following formula (10):
Loss=λ×Lc+Lp; (10)
where Lc is the regular loss, Lp is the prediction loss, and λ is a trade-off parameter.
The regular loss Lc is calculated according to the following formula (11):

[formula (11), rendered as an image in the original and not reproduced here]

where the two hidden feature vectors used in formula (11) are randomly selected from the hidden feature vectors corresponding to the sample historical position information. For example, for the target object A, with q = 3: if the time indices t1 and t2 of the two selected hidden feature vectors satisfy |t1−t2| = 5 > 3, the two vectors are temporally distant; if they satisfy |t1−t2| = 2 < 3, the two vectors are temporally close. m is a hyper-parameter, and the smaller the value of m, the higher the degree of regularization; the value of m may be set as required.
The prediction loss Lp is calculated according to the following formula (12):

[formula (12), rendered as an image in the original and not reproduced here]

where the inputs to formula (12) are the plurality of predicted position information included in the predicted trajectory and the plurality of annotated position information included in the annotated trajectory.
In the above embodiment, considering that the movement trajectory of the target object is a coherent motion process, motion features that are close in time generally have correlation, while motion features that are far apart in time generally do not. Therefore, two hidden feature vectors can be randomly selected from the hidden feature vectors corresponding to the historical position information of each sample, a regular loss is determined based on the two selected hidden feature vectors, and the neural network is trained with both the prediction loss and the added regular loss, so that the trained neural network achieves higher accuracy.
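Since the formula images for (11) and (12) cannot be recovered from the source, the sketch below instantiates one plausible reading of this training objective: a margin-based contrastive term for the regular loss Lc (temporally close hidden vectors, |t1−t2| ≤ q, are allowed at most distance m; temporally distant ones are pushed at least m apart) and a summed Euclidean position error for the prediction loss Lp. The function forms, the contrastive branches, and the distance choice are assumptions, not taken verbatim from the patent:

```python
import math
import random

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def regular_loss(hiddens, q=3, m=1.0, rng=random):
    """Assumed margin-based form of Lc over two randomly selected hidden vectors."""
    t1, t2 = rng.sample(range(len(hiddens)), 2)
    d = l2(hiddens[t1], hiddens[t2])
    if abs(t1 - t2) <= q:
        return max(0.0, d - m)   # temporally close: penalize distance beyond m
    return max(0.0, m - d)       # temporally distant: penalize similarity within m

def prediction_loss(pred_traj, gt_traj):
    """Assumed form of Lp: summed distance between predicted and annotated positions."""
    return sum(l2(p, g) for p, g in zip(pred_traj, gt_traj))

def total_loss(hiddens, pred_traj, gt_traj, lam=0.5):
    # formula (10): Loss = lambda * Lc + Lp
    return lam * regular_loss(hiddens) + prediction_loss(pred_traj, gt_traj)
```

Only the combination in `total_loss` mirrors formula (10) directly; the bodies of `regular_loss` and `prediction_loss` should be replaced by the patent's actual formulas (11) and (12).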
For example, fig. 3 shows a schematic flowchart of a method for predicting a trajectory; the process of determining predicted trajectories based on the neural network is described below with reference to fig. 3. As can be seen, 3 target objects are included in the scene image, and 3 predicted trajectories are determined for each target object.
Illustratively, the plurality of historical position information of each of the 3 target objects is input into the Encoder of the neural network (the Encoder may be an LSTM). A Personal Context-aware Module (ICM) in the Encoder generates, based on the target historical position information of each target object and the feature vector queue matched with that target historical position information, the hidden feature vector corresponding to the target historical position information, and thereby obtains an updated feature vector queue. The updated feature vector queue of each target object is then input into a Social-aware Context Module (SCM), which adjusts each hidden feature vector in the updated feature vector queue to obtain an adjusted feature vector queue. The adjusted feature vector queue is determined as the feature vector queue matched with the next target historical position information, and the next target historical position information together with its matched feature vector queue is returned to the ICM module for processing. This continues until the adjusted hidden feature vector corresponding to the last historical position information of each target object is obtained.
Meanwhile, the scene image is input into a pre-trained semantic detection neural network (Pre-trained Network) to obtain a semantic image (Semantic Map) corresponding to the scene image. Feature extraction is performed on the semantic image to obtain an intermediate feature vector corresponding to the semantic image. The intermediate feature vector is respectively fused with the plurality of pieces of continuous historical position information of each target object, and the fused feature vector corresponding to each target object is input into the fully connected layer FC for processing, to obtain the mean vector and the variance vector corresponding to the target object. That is, the mean vector and the variance vector of each of the 3 target objects are obtained here.
Furthermore, for the mean vector and the variance vector of each target object, 3 image feature vectors z ∼ N(μ, σ) corresponding to that target object can be generated by using the re-parameterization trick. Here, a normal distribution of each target object may be obtained from the mean vector and the variance vector of that target object. A first sampling is performed from the normal distribution of each target object to obtain a first image feature vector corresponding to the target object; a second sampling is performed to obtain a second image feature vector corresponding to the target object; and a third sampling is performed to obtain a third image feature vector corresponding to the target object. That is, 3 image feature vectors corresponding to each target object can be obtained.
Then, the adjusted hidden feature vector of each target object is respectively cascaded with the corresponding 3 image feature vectors (that is, the adjusted hidden feature vector of the first target object is cascaded with each of the 3 image feature vectors corresponding to the first target object), so as to obtain 3 predicted feature vectors corresponding to each target object. The 3 predicted feature vectors corresponding to each target object are input into the Decoder, so as to obtain 3 predicted trajectories corresponding to each target object.
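The final decoding step can be sketched as follows. The patent identifies the Decoder only as the module that turns each predicted feature vector into a predicted trajectory; the stand-in step function below replaces what would be an LSTM cell, so treat it purely as a structural illustration:

```python
# Hedged sketch: unrolling each predicted feature vector into a sequence of
# future positions. `step_fn` stands in for one LSTM decoding step; its body
# here is a toy placeholder, not the patent's decoder.

def decode_trajectory(pred_vec, num_steps, step_fn):
    """Unroll one predicted feature vector into `num_steps` future positions."""
    traj, state = [], pred_vec
    for _ in range(num_steps):
        pos, state = step_fn(state)   # one decoding step -> (position, new state)
        traj.append(pos)
    return traj

def toy_step(state):
    # placeholder for an LSTM cell plus an output projection
    new_state = [s * 0.9 for s in state]
    return (new_state[0], new_state[1]), new_state

# three predicted feature vectors -> three predicted trajectories, as in fig. 3
trajectories = [decode_trajectory(v, num_steps=4, step_fn=toy_step)
                for v in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0])]
```

Running the decoder once per predicted feature vector is what yields the 3 predicted trajectories per target object described above.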
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same concept, an embodiment of the present disclosure further provides a trajectory prediction apparatus, as shown in fig. 4, an architecture diagram of the trajectory prediction apparatus provided in the embodiment of the present disclosure includes a first determining module 401, a second determining module 402, a third determining module 403, a generating module 404, an adjusting module 405, a fourth determining module 406, and a training module 407, specifically:
a first determining module 401, configured to determine, according to a motion trajectory of a target object, multiple pieces of historical position information of the target object;
a second determining module 402, configured to use one of the plurality of historical location information as target historical location information, and determine, according to hidden feature vectors corresponding to a plurality of pieces of historical location information before the target historical location information, a feature vector queue matched with the target historical location information;
a third determining module 403, configured to determine at least one predicted trajectory of the target object according to the last historical location information of the target object and the eigenvector queue matched with the last historical location information.
In one possible embodiment, after determining the feature vector queue matching the target historical location information, the apparatus further includes:
a generating module 404, configured to generate a hidden feature vector corresponding to the target historical position information according to the target historical position information and a feature vector queue matched with the target historical position information;
a third determining module 403, configured to, when determining at least one predicted trajectory of the target object according to the last historical location information of the target object and the feature vector queue matched with the last historical location information,:
generating a hidden feature vector corresponding to the last historical position information according to the last historical position information of the target object and a feature vector queue matched with the last historical position information;
and determining at least one predicted track of the target object according to the hidden feature vector corresponding to the last historical position information.
In a possible implementation manner, the generating module 404 is configured to determine a hidden feature vector corresponding to the target historical location information according to the following steps:
generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, each hidden feature vector in the feature vector queue matched with the target historical position information, and a memory feature vector corresponding to each hidden feature vector.
In one possible implementation, the generating module 404, when generating the hidden feature vector corresponding to the target historical location information based on the target historical location information, each hidden feature vector in the feature vector queue matched with the target historical location information, and the memory feature vector corresponding to each hidden feature vector, is configured to:
generating an average feature vector corresponding to the feature vector queue based on each hidden feature vector in the feature vector queue matched with the target historical position information;
generating a memory feature vector corresponding to the target historical position information based on the target historical position information, the average feature vector, each hidden feature vector in the feature vector queue, and a memory feature vector corresponding to each hidden feature vector in the feature vector queue;
and generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, the average feature vector and a memory feature vector corresponding to the target historical position information.
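The three generation steps above can be sketched as follows. This is a minimal illustration with placeholder fusion rules (element-wise sums); the passage specifies only the inputs of each step, not the exact gating of the cell, so every combination rule below is an assumption:

```python
# Hedged sketch of the ICM-style update: average of the queue, then a memory
# feature vector, then the hidden feature vector. All fusion rules are
# illustrative placeholders.

def average_vector(queue):
    """Average feature vector over a non-empty queue of hidden feature vectors."""
    n = len(queue)
    return [sum(v[i] for v in queue) / n for i in range(len(queue[0]))]

def icm_step(position, queue, memories):
    avg = average_vector(queue)
    # memory feature vector: placeholder fusion of position, average vector,
    # each hidden vector in the queue, and each corresponding memory vector
    memory = [p + a + sum(hv[i] for hv in queue) + sum(mv[i] for mv in memories)
              for i, (p, a) in enumerate(zip(position, avg))]
    # hidden feature vector: placeholder fusion of position, average, memory
    hidden = [p + a + c for p, a, c in zip(position, avg, memory)]
    return hidden, memory

h, c = icm_step([1.0, 0.0], [[1.0, 1.0], [3.0, 3.0]], [[0.0, 0.0], [0.0, 0.0]])
```

In a real cell the element-wise sums would be replaced by learned, gated transformations; only the data flow (position + queue + memories → memory → hidden) follows the text.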
In a possible implementation manner, the method, after generating a hidden feature vector corresponding to the historical target location information according to the historical target location information and a feature vector queue matched with the historical target location information, further includes:
an adjusting module 405, configured to adjust, based on hidden feature vectors corresponding to multiple pieces of historical location information of each of the multiple target objects, a hidden feature vector corresponding to the target historical location information of a first target object and a hidden feature vector corresponding to historical location information before the target historical location information, so as to generate an adjusted hidden feature vector corresponding to the target historical location information; the first target object is any one of a plurality of target objects.
The third determining module 403, determining at least one predicted trajectory of the target object according to the hidden feature vector corresponding to the last historical location information, includes:
and determining at least one predicted track of the target object according to the adjusted hidden feature vector corresponding to the last historical position information.
In one possible embodiment, the adjusting module 405, when adjusting, based on hidden feature vectors corresponding to a plurality of pieces of historical location information of each of the plurality of target objects, a hidden feature vector corresponding to the target historical location information of a first target object and a hidden feature vector corresponding to historical location information before the target historical location information to generate an adjusted hidden feature vector corresponding to the target historical location information, is configured to:
generating association degrees between the first target object and the plurality of target objects respectively based on any hidden feature vector to be adjusted corresponding to the first target object and hidden feature vectors of other target objects corresponding to historical position information, and generating normalization factors based on the association degrees;
and generating an adjusted hidden feature vector corresponding to any hidden feature vector to be adjusted based on the hidden feature vector to be adjusted, the normalization factor, the association degree between the first target object and the plurality of target objects respectively, and the linearly transformed hidden feature vectors corresponding to other target objects.
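A minimal sketch of this adjustment follows, assuming dot-product association degrees, a softmax-style normalization factor, and a residual update over the other objects' linearly transformed hidden feature vectors. The scoring and update forms are our assumptions; the passage fixes only the quantities involved:

```python
import math

# Hedged sketch: association degrees between the first target object's hidden
# vector to be adjusted and the other target objects' hidden vectors become
# normalized weights over their (linearly transformed) hidden feature vectors.

def adjust_hidden(h_first, others_hidden, transform):
    scores = [math.exp(sum(a * b for a, b in zip(h_first, h)))  # association degrees
              for h in others_hidden]
    norm = sum(scores)                                          # normalization factor
    weights = [s / norm for s in scores]
    transformed = [transform(h) for h in others_hidden]         # linear transform
    context = [sum(w * t[i] for w, t in zip(weights, transformed))
               for i in range(len(h_first))]
    return [a + b for a, b in zip(h_first, context)]            # adjusted hidden vector

adjusted = adjust_hidden([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], transform=lambda v: v)
```

Using the identity as `transform` keeps the example checkable; a trained network would use a learned linear layer there.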
In a possible embodiment, the apparatus further comprises:
a fourth determining module 406, configured to determine, according to a scene image of a scene where the target object is located, at least one image feature vector corresponding to the target object;
a third determining module 403, configured to, when determining at least one predicted trajectory of the target object according to the last historical location information of the target object and the feature vector queue matched with the last historical location information,:
and determining at least one predicted track of the target object according to the last historical position information of the target object, the feature vector queue matched with the last historical position information and at least one image feature vector corresponding to the target object.
In a possible implementation, the third determining module 403, when determining at least one predicted trajectory of the target object according to the last historical position information of the target object, the feature vector queue matching the last historical position information, and the at least one image feature vector corresponding to the target object, is configured to:
generating a hidden feature vector corresponding to the last historical position information according to the last historical position information of the target object and a feature vector queue matched with the last historical position information;
cascading the hidden feature vector corresponding to the last historical position information of each target object with at least one image feature vector corresponding to the target object respectively to generate at least one predicted feature vector;
and generating at least one predicted track corresponding to the target object based on each predicted feature vector.
In a possible implementation manner, the fourth determining module 406, when determining at least one image feature vector corresponding to the target object according to the scene image, is configured to:
determining a semantic image corresponding to the scene image based on the scene image;
extracting the features of the semantic image to obtain an intermediate feature vector corresponding to the semantic image;
obtaining a mean vector and a variance vector corresponding to the target object based on the intermediate feature vector and the plurality of historical position information of the target object;
and generating at least one image feature vector corresponding to the target object based on the mean vector and the variance vector corresponding to the target object.
In a possible implementation, the method for predicting a trajectory is performed by a neural network, the apparatus further includes a training module 407, and the training module 407 is configured to train the neural network by:
acquiring training samples, wherein the training samples have labeling tracks, and each training sample comprises a scene image sample and sample historical position information of a sample object;
generating at least one predicted track corresponding to the training sample based on the training sample by using the neural network;
randomly selecting two hidden feature vectors from the hidden feature vectors corresponding to the historical position information of each sample, and determining the regular loss based on the two selected hidden feature vectors; generating a prediction loss based on the prediction track and the labeling track;
and adjusting network parameters of the neural network based on the prediction loss and the regular loss to obtain the trained neural network.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or of the modules included therein, may be used to execute the method described in the above method embodiments; for specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device. Referring to fig. 5, a schematic structural diagram of an electronic device 500 provided in the embodiment of the present disclosure includes a processor 501, a memory 502 and a bus 503. The memory 502 is used for storing execution instructions and includes an internal memory 5021 and an external memory 5022. The internal memory 5021 is used for temporarily storing operation data in the processor 501 and data exchanged with the external memory 5022, such as a hard disk; the processor 501 exchanges data with the external memory 5022 through the internal memory 5021. When the electronic device 500 operates, the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:
determining a plurality of historical position information of a target object according to the motion track of the target object;
taking one piece of historical position information in the plurality of pieces of historical position information as target historical position information, and determining a feature vector queue matched with the target historical position information according to hidden feature vectors corresponding to a plurality of pieces of historical position information before the target historical position information;
and determining at least one predicted track of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information.
Furthermore, the embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method for trajectory prediction described in the above method embodiments.
The computer program product of the method for predicting a trajectory provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the method for predicting a trajectory in the above method embodiments, which may be referred to in the above method embodiments specifically, and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A method of trajectory prediction, comprising:
determining a plurality of historical position information of a target object according to the motion track of the target object;
taking one piece of historical position information in the plurality of pieces of historical position information as target historical position information, and determining a feature vector queue matched with the target historical position information according to hidden feature vectors corresponding to a plurality of pieces of historical position information before the target historical position information;
and determining at least one predicted track of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information.
2. The method of claim 1, wherein after determining a feature vector queue that matches the target historical location information, the method further comprises:
generating a hidden feature vector corresponding to the historical target position information according to the historical target position information and a feature vector queue matched with the historical target position information;
determining at least one predicted trajectory of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information, wherein the step of determining the at least one predicted trajectory of the target object comprises the following steps:
generating a hidden feature vector corresponding to the last historical position information according to the last historical position information of the target object and a feature vector queue matched with the last historical position information;
and determining at least one predicted track of the target object according to the hidden feature vector corresponding to the last historical position information.
3. The method of claim 2, wherein determining the hidden feature vector corresponding to the target historical position information comprises:
generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, each hidden feature vector in the feature vector queue matched with the target historical position information, and a memory feature vector corresponding to each hidden feature vector.
4. The method of claim 3, wherein generating a hidden feature vector corresponding to the target historical location information based on the target historical location information, each hidden feature vector in the feature vector queue that matches the target historical location information, and a memory feature vector corresponding to the each hidden feature vector comprises:
generating an average feature vector corresponding to the feature vector queue based on each hidden feature vector in the feature vector queue matched with the target historical position information;
generating a memory feature vector corresponding to the target historical position information based on the target historical position information, the average feature vector, each hidden feature vector in the feature vector queue, and a memory feature vector corresponding to each hidden feature vector in the feature vector queue;
and generating a hidden feature vector corresponding to the target historical position information based on the target historical position information, the average feature vector and a memory feature vector corresponding to the target historical position information.
5. The method according to claim 2, wherein there are a plurality of target objects, and after generating the hidden feature vector corresponding to the target historical position information according to the target historical position information and the feature vector queue matched with the target historical position information, the method further comprises:
based on hidden feature vectors corresponding to a plurality of pieces of historical position information of each of the plurality of target objects, adjusting the hidden feature vector corresponding to the target historical position information of a first target object and the hidden feature vector corresponding to the historical position information before the target historical position information, to generate an adjusted hidden feature vector corresponding to the target historical position information; the first target object being any one of the plurality of target objects;
and the determining at least one predicted track of the target object according to the hidden feature vector corresponding to the last historical position information comprises:
and determining at least one predicted track of the target object according to the adjusted hidden feature vector corresponding to the last historical position information.
6. The method of claim 5, wherein the adjusting the hidden feature vector corresponding to the target historical position information of the first target object and the hidden feature vector corresponding to the historical position information before the target historical position information based on the hidden feature vectors corresponding to the plurality of pieces of historical position information of each of the plurality of target objects to generate an adjusted hidden feature vector corresponding to the target historical position information comprises:
generating association degrees between the first target object and the plurality of target objects respectively based on any hidden feature vector to be adjusted corresponding to the first target object and hidden feature vectors of other target objects corresponding to historical position information, and generating normalization factors based on the association degrees;
and generating an adjusted hidden feature vector corresponding to any hidden feature vector to be adjusted based on the hidden feature vector to be adjusted, the normalization factor, the association degree between the first target object and the plurality of target objects respectively, and the linearly transformed hidden feature vectors corresponding to other target objects.
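Claim 6 describes a social-attention style adjustment across objects. A minimal sketch, assuming the association degree is a scaled dot product and `W_v` is a hypothetical linear transform of the other objects' hidden vectors (the claim fixes neither choice):

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 8, 3                          # feature dim, number of target objects (assumed)
W_v = rng.standard_normal((D, D))    # hypothetical linear transform

def adjust_hidden(h_first, hiddens_all):
    """Claim 6 sketched: association degrees of the first object with every
    object, a normalization factor over those degrees, and an adjusted hidden
    vector from the normalized weighted sum of linearly transformed hiddens."""
    assoc = hiddens_all @ h_first / np.sqrt(D)   # association degrees
    weights = np.exp(assoc - assoc.max())
    z = weights.sum()                            # normalization factor
    transformed = hiddens_all @ W_v              # linearly transformed hidden vectors
    return h_first + (weights / z) @ transformed
```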
7. The method according to any one of claims 1 to 6, further comprising:
determining at least one image feature vector corresponding to the target object according to a scene image of the scene where the target object is located;
and the determining at least one predicted track of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information comprises:
and determining at least one predicted track of the target object according to the last historical position information of the target object, the feature vector queue matched with the last historical position information and at least one image feature vector corresponding to the target object.
8. The method of claim 7, wherein determining at least one predicted trajectory of the target object according to the last historical position information of the target object, a feature vector queue matched with the last historical position information, and at least one image feature vector corresponding to the target object comprises:
generating a hidden feature vector corresponding to the last historical position information according to the last historical position information of the target object and a feature vector queue matched with the last historical position information;
respectively cascading the hidden feature vector corresponding to the last historical position information with at least one image feature vector corresponding to the target object to generate at least one predicted feature vector;
and generating at least one predicted track corresponding to the target object based on each predicted feature vector.
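The cascading (concatenation) and decoding steps of claim 8 can be sketched as follows; the linear decoder `W_dec` and the output horizon `T_pred` are assumptions, since the claim only requires that each predicted feature vector yields one predicted track:

```python
import numpy as np

rng = np.random.default_rng(2)
D, DI, T_pred, K = 8, 4, 12, 5   # hidden dim, image-feature dim, steps, trajectories (assumed)
W_dec = rng.standard_normal((D + DI, T_pred * 2))  # hypothetical linear decoder

def predict_trajectories(hidden_last, image_feats):
    """Claim 8 sketched: concatenate the last hidden feature vector with each
    image feature vector, then decode each concatenation into one (x, y) track."""
    trajs = []
    for img in image_feats:
        pred_feat = np.concatenate([hidden_last, img])   # the cascading step
        trajs.append((pred_feat @ W_dec).reshape(T_pred, 2))
    return np.stack(trajs)   # (K, T_pred, 2): one track per image feature vector
```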
9. The method according to claim 7 or 8, wherein determining at least one image feature vector corresponding to the target object according to the scene image comprises:
determining a semantic image corresponding to the scene image based on the scene image;
extracting the features of the semantic image to obtain an intermediate feature vector corresponding to the semantic image;
obtaining a mean vector and a variance vector corresponding to the target object based on the intermediate feature vector and the plurality of historical position information of the target object;
and generating at least one image feature vector corresponding to the target object based on the mean vector and the variance vector corresponding to the target object.
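Claim 9's mean/variance construction reads like a CVAE-style latent sampler. A sketch under that assumption, with hypothetical projections `W_mu` and `W_var` and reparameterized sampling (the claim does not state how the image feature vectors are drawn from the mean and variance vectors):

```python
import numpy as np

rng = np.random.default_rng(3)
F, T_obs, DI, K = 6, 8, 4, 5   # intermediate dim, observed steps, image dim, samples (assumed)
W_mu  = rng.standard_normal((F + T_obs * 2, DI))   # hypothetical mean projection
W_var = rng.standard_normal((F + T_obs * 2, DI))   # hypothetical log-variance projection

def image_feature_vectors(intermediate_feat, history):
    """Claim 9 sketched: combine the semantic image's intermediate feature
    vector with the flattened position history to get a mean vector and a
    variance vector, then sample K image feature vectors."""
    joint = np.concatenate([intermediate_feat, history.ravel()])
    mu = joint @ W_mu
    log_var = joint @ W_var
    std = np.exp(0.5 * log_var)
    return mu + std * rng.standard_normal((K, DI))   # reparameterized samples
```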
10. The method according to any one of claims 1 to 9, wherein the trajectory prediction method is performed by a neural network, the neural network being trained by:
acquiring training samples, wherein the training samples have labeling tracks, and each training sample comprises a scene image sample and sample historical position information of a sample object;
generating at least one predicted track corresponding to the training sample based on the training sample by using the neural network;
randomly selecting two hidden feature vectors from the hidden feature vectors corresponding to each piece of sample historical position information, and determining a regularization loss based on the two selected hidden feature vectors; generating a prediction loss based on the predicted track and the labeled track;
and adjusting network parameters of the neural network based on the prediction loss and the regularization loss to obtain the trained neural network.
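The training objective of claim 10 combines a prediction loss with a regularization loss over two randomly selected hidden feature vectors. A sketch, assuming an L2 prediction loss and a squared-distance regularizer (the claim names neither form, so both are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)

def total_loss(pred_traj, gt_traj, hidden_vectors, reg_weight=0.1):
    """Claim 10's loss sketched: a prediction loss against the labeled
    trajectory plus a regularization loss built from two randomly selected
    hidden feature vectors (here, their mean squared distance)."""
    pred_loss = np.mean((pred_traj - gt_traj) ** 2)
    i, j = rng.choice(len(hidden_vectors), size=2, replace=False)
    reg_loss = np.mean((hidden_vectors[i] - hidden_vectors[j]) ** 2)
    return pred_loss + reg_weight * reg_loss
```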
11. An apparatus for trajectory prediction, comprising:
a first determining module, configured to determine a plurality of pieces of historical position information of a target object according to the motion track of the target object;
a second determining module, configured to use one piece of historical position information in the plurality of pieces of historical position information as target historical position information, and determine, according to hidden feature vectors corresponding to a plurality of pieces of historical position information before the target historical position information, a feature vector queue that matches the target historical position information;
and a third determining module, configured to determine at least one predicted track of the target object according to the last historical position information of the target object and the feature vector queue matched with the last historical position information.
12. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the method of trajectory prediction according to any of claims 1 to 10.
13. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method of trajectory prediction according to one of claims 1 to 10.
CN202010852421.2A 2020-08-21 2020-08-21 Track prediction method, track prediction device, electronic equipment and storage medium Active CN112000756B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010852421.2A CN112000756B (en) 2020-08-21 2020-08-21 Track prediction method, track prediction device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112000756A true CN112000756A (en) 2020-11-27
CN112000756B CN112000756B (en) 2024-09-17

Family

ID=73474037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010852421.2A Active CN112000756B (en) 2020-08-21 2020-08-21 Track prediction method, track prediction device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112000756B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180357258A1 (en) * 2015-06-05 2018-12-13 Beijing Jingdong Shangke Information Technology Co., Ltd. Personalized search device and method based on product image features
CN109784420A (en) * 2019-01-29 2019-05-21 深圳市商汤科技有限公司 A kind of image processing method and device, computer equipment and storage medium
CN111091708A (en) * 2019-12-13 2020-05-01 中国科学院深圳先进技术研究院 Vehicle track prediction method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OUYANG, Jun et al.: "Pedestrian Trajectory Prediction Based on GAN and Attention Mechanism", Laser & Optoelectronics Progress, vol. 57, no. 14, pages 2-5 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112677993A (en) * 2021-01-05 2021-04-20 北京三快在线科技有限公司 Model training method and device
CN113033364A (en) * 2021-03-15 2021-06-25 商汤集团有限公司 Trajectory prediction method, trajectory prediction device, travel control method, travel control device, electronic device, and storage medium
CN113879333A (en) * 2021-09-30 2022-01-04 深圳市商汤科技有限公司 Trajectory prediction method and apparatus, electronic device, and storage medium
CN113879333B (en) * 2021-09-30 2023-08-22 深圳市商汤科技有限公司 Track prediction method, track prediction device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
Pfeiffer et al. A data-driven model for interaction-aware pedestrian motion prediction in object cluttered environments
Chen et al. Selective sensor fusion for neural visual-inertial odometry
CN112000756A (en) Method and device for predicting track, electronic equipment and storage medium
Kim et al. Probabilistic vehicle trajectory prediction over occupancy grid map via recurrent neural network
Bhattacharyya et al. Long-term on-board prediction of people in traffic scenes under uncertainty
US10896382B2 (en) Inverse reinforcement learning by density ratio estimation
CN108460427B (en) Classification model training method and device and classification method and device
US10896383B2 (en) Direct inverse reinforcement learning with density ratio estimation
Hou et al. Real-time body tracking using a gaussian process latent variable model
Cheng et al. Pedestrian trajectory prediction via the Social‐Grid LSTM model
Bastani et al. Online nonparametric bayesian activity mining and analysis from surveillance video
Shi et al. Social interpretable tree for pedestrian trajectory prediction
Liu et al. An integrated approach to probabilistic vehicle trajectory prediction via driver characteristic and intention estimation
Zhang et al. Extended social force model‐based mean shift for pedestrian tracking under obstacle avoidance
CN110955965A (en) Pedestrian motion prediction method and system considering interaction
Nayak et al. Uncertainty estimation of pedestrian future trajectory using Bayesian approximation
CN112989962A (en) Track generation method and device, electronic equipment and storage medium
Mohammed et al. Microscopic modeling of cyclists on off-street paths: a stochastic imitation learning approach
CN116859931A (en) Training method of track planning model, vehicle control mode and device
CN112418432A (en) Analyzing interactions between multiple physical objects
Tumu et al. Physics constrained motion prediction with uncertainty quantification
Katyal et al. Occupancy map prediction using generative and fully convolutional networks for vehicle navigation
Li et al. Multiple extended target tracking by truncated JPDA in a clutter environment
Stutts et al. Lightweight, uncertainty-aware conformalized visual odometry
WO2021095509A1 (en) Inference system, inference device, and inference method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant