CN113291321A - Vehicle track prediction method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113291321A
CN113291321A
Authority
CN
China
Prior art keywords
information
target
vehicle
scene
vehicles
Prior art date
Legal status
Pending
Application number
CN202110666123.9A
Other languages
Chinese (zh)
Inventor
李垚
崔迪潇
王通
陈恩泽
Current Assignee
Suzhou Zhijia Technology Co Ltd
PlusAI Corp
Original Assignee
Suzhou Zhijia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Zhijia Technology Co Ltd filed Critical Suzhou Zhijia Technology Co Ltd
Priority to CN202110666123.9A
Publication of CN113291321A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001: Planning or execution of driving tasks
    • B60W60/0027: Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60W60/00274: Planning or execution of driving tasks using trajectory prediction for other traffic participants considering possible movement changes
    • B60W60/00276: Planning or execution of driving tasks using trajectory prediction for other traffic participants for two or more other traffic participants
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W40/04: Traffic conditions
    • B60W2554/00: Input parameters relating to objects
    • B60W2554/40: Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404: Characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of this specification provide a vehicle trajectory prediction method, apparatus, device, and storage medium. The method comprises the following steps: acquiring scene information corresponding to each of at least two target vehicles, where the scene information describes the driving state of the target vehicle and/or the environment in which it travels; determining time information corresponding to the scene information; computing the spatial position information of each target vehicle from the scene information and the time information; constructing hybrid environment information based on the spatial position information and the time information, where the hybrid environment information represents the interaction between target vehicles; and integrating the hybrid environment information with the historical trajectories of the target vehicles to obtain the predicted trajectory corresponding to each target vehicle. The method predicts the driving trajectories of different target vehicles accurately and effectively, so that a subsequent autonomous-driving strategy can be formulated from the trajectories of other vehicles, ensuring the safe driving of the autonomous vehicle.

Description

Vehicle track prediction method, device, equipment and storage medium
Technical Field
The embodiments of this specification relate to the technical field of autonomous driving, and in particular to a vehicle trajectory prediction method, apparatus, device, and storage medium.
Background
In recent years, unmanned-driving technology has developed rapidly and is steadily becoming a reality. When an unmanned vehicle travels on an actual road, it must consider not only the constraints the road itself places on driving but also the influence of other vehicles on the road on the driving process. Therefore, when external environment information is collected, the driving trajectories of other vehicles on the road must also be predicted, so that the trajectory of the current vehicle can be planned better and the safety of autonomous driving ensured.
In practice, there is often more than one other vehicle around an autonomous vehicle. These vehicles influence one another while driving, and different vehicles exert different degrees of influence. Therefore, when predicting the trajectories of other vehicles, it is necessary to analyze not only the influence of road conditions on their driving strategies but also the interference between different vehicles' trajectories. Accurately determining the trajectories of other vehicles is important for planning the trajectory of the current vehicle. A technical solution is therefore needed that can accurately predict the driving trajectories of the vehicles around an autonomous vehicle.
Disclosure of Invention
An object of the embodiments of this specification is to provide a vehicle trajectory prediction method, apparatus, device, and storage medium that solve the problem of predicting vehicle trajectories based on the interaction among multiple vehicles.
To solve the above technical problem, an embodiment of this specification provides a vehicle trajectory prediction method, including: acquiring scene information corresponding to each of at least two target vehicles, where the scene information describes the driving state of the target vehicle and/or the environment in which it travels; determining time information corresponding to the scene information, where the time information represents how the scene information changes over time; computing the spatial position information of each target vehicle from the scene information and the time information; constructing hybrid environment information based on the spatial position information and the time information, where the hybrid environment information represents the interaction between target vehicles; and integrating the hybrid environment information with the historical trajectories of the target vehicles to obtain the predicted trajectory corresponding to each target vehicle.
An embodiment of this specification further provides a vehicle trajectory prediction apparatus, including: a scene information acquisition module for acquiring scene information corresponding to each of at least two target vehicles, where the scene information describes the driving state of the target vehicle and/or the environment in which it travels; a time information determination module for determining time information corresponding to the scene information, where the time information represents how the scene information changes over time; a spatial position information calculation module for computing the spatial position information of each target vehicle from the scene information and the time information; a hybrid environment information construction module for constructing hybrid environment information based on the spatial position information and the time information, where the hybrid environment information represents the interaction between target vehicles; and a predicted trajectory acquisition module for integrating the hybrid environment information with the historical trajectories of the target vehicles to obtain the predicted trajectory corresponding to each target vehicle.
An embodiment of this specification further provides a vehicle trajectory prediction device comprising a memory and a processor: the memory stores computer program instructions, and the processor executes the instructions to implement the steps of: acquiring scene information corresponding to each of at least two target vehicles, where the scene information describes the driving state of the target vehicle and/or the environment in which it travels; determining time information corresponding to the scene information, where the time information represents how the scene information changes over time; computing the spatial position information of each target vehicle from the scene information and the time information; constructing hybrid environment information based on the spatial position information and the time information, where the hybrid environment information represents the interaction between target vehicles; and integrating the hybrid environment information with the historical trajectories of the target vehicles to obtain the predicted trajectory corresponding to each target vehicle.
Embodiments of the present specification also provide a computer storage medium having a computer program stored thereon, where the computer program, when executed, implements the vehicle trajectory prediction method described above.
An embodiment of this specification also provides an autonomous vehicle equipped with the above vehicle trajectory prediction device.
As can be seen from the technical solutions provided by the embodiments of this specification, after the scene information corresponding to a plurality of target vehicles is acquired, the corresponding time information is extracted from it to determine the position of each vehicle at each moment. The spatial position information of each target vehicle is then recomputed from its scene information and time information, the influence among the vehicles is determined from their spatial positions and time information, and finally the influence among the vehicles is combined with the historical trajectory of each target vehicle to obtain its predicted trajectory. In this way, when other target vehicles are present around the autonomous vehicle, the influence relationships among the vehicles can be determined from their positions at different moments, so that the trajectories of different target vehicles are predicted accurately and effectively while the influence between vehicles is taken into account. A subsequent autonomous-driving strategy can then be formulated effectively from the trajectories of the other vehicles, guaranteeing the safe driving of the autonomous vehicle.
Drawings
To illustrate the embodiments of this specification or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below are only some of the embodiments in this specification; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a vehicle trajectory prediction method according to an embodiment of the present disclosure;
FIG. 2 is a block diagram of an LSTM in an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating an exemplary architecture of an attention network according to an embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating a vehicle trajectory prediction process according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a vehicle trajectory prediction device according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of a vehicle trajectory prediction device according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an autonomous-driving scene of a vehicle according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of this specification.
To solve the above technical problem, an embodiment of this specification provides a vehicle trajectory prediction method. Specifically, the method may be implemented by a vehicle trajectory prediction apparatus, which may be deployed on an autonomous vehicle to predict the trajectories of surrounding vehicles and thereby guide its unmanned driving. As shown in fig. 1, the vehicle trajectory prediction method includes the following implementation steps.
S110: acquiring scene information corresponding to at least two target vehicles respectively; the scene information is used for describing the running state of the target vehicle and/or the running environment of the target vehicle.
When an autonomous vehicle travels on a road, attention must be paid not only to the route information of the road, traffic signs, and traffic-light signals, but also to the influence of other vehicles on the road on the current vehicle. The current vehicle's route may be affected to different degrees by the positions, speeds, and steering of other vehicles. It is therefore necessary to predict the trajectories of these vehicles and plan the trajectory of the current autonomous vehicle in combination with them.
A target vehicle may be any vehicle, identified during the driving of the autonomous vehicle, that may influence the driving process of the current autonomous vehicle.
In practical applications, the target vehicles may be screened based on a certain condition, for example, a vehicle whose distance from the current autonomous vehicle is within a certain range may be used as the target vehicle, or a vehicle whose speed exceeds a certain range may be used as the target vehicle. In practical application, the target vehicle may also be determined by other determination conditions or by combining a plurality of determination conditions, which is not described herein again.
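The screening conditions above can be sketched as a simple distance-and-speed filter. The dictionary keys, thresholds, and function name below are illustrative assumptions, not part of the patent:

```python
import math

def select_target_vehicles(vehicles, ego_pos, max_distance=50.0, min_speed=0.0):
    """Keep only the vehicles close enough (and fast enough) to matter.

    `vehicles` is a list of dicts with hypothetical keys 'id', 'pos' (x, y),
    and 'speed'; the keys and thresholds are illustrative assumptions.
    """
    targets = []
    for v in vehicles:
        dx = v["pos"][0] - ego_pos[0]
        dy = v["pos"][1] - ego_pos[1]
        if math.hypot(dx, dy) <= max_distance and v["speed"] >= min_speed:
            targets.append(v["id"])
    return targets

vehicles = [
    {"id": "A", "pos": (10.0, 3.0), "speed": 20.0},
    {"id": "B", "pos": (80.0, -5.0), "speed": 25.0},  # too far away
    {"id": "C", "pos": (-15.0, 2.0), "speed": 18.0},
]
print(select_target_vehicles(vehicles, ego_pos=(0.0, 0.0)))  # ['A', 'C']
```

In practice the two conditions can of course be combined with further criteria, as the text notes.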
The scene information may be used to describe the driving state and/or driving environment of the target vehicle. The context information may be data collected for the target vehicle after the target vehicle is determined. For example, the scene information may include information corresponding to the target vehicle, such as a traveling speed, a traveling direction, and a position relative to the current vehicle of the target vehicle, and may also include information on a road on which the target vehicle is located. The scene information obtained in the practical application may be other types of information, such as traffic information, climate information, and the like, and is not limited to the above examples, and is not described herein again.
It should be noted that the scene information may be obtained by configuring a corresponding sensor in the vehicle trajectory prediction device to collect corresponding environment information, or by receiving information data transmitted by other devices or remote devices in the vehicle as the scene information, or by combining the two methods. In practical application, the manner of acquiring the scene information may be flexibly adjusted according to needs, and is not limited to the above manner, which is not described herein again.
Scene information is generally obtained either by acquiring information about different target vehicles with different sensing devices, or by separating per-vehicle information from overall data collected for all target vehicles. As a result, the scene information of different target vehicles is usually not strongly correlated in time, and it is necessary to extract the relevant time information from the scene information to better judge the distribution of the target vehicles at each moment.
In some embodiments, after the scene information is acquired, in order to ensure the quality of the scene information and improve the accuracy of calculation or processing in subsequent steps, the scene information may be further preprocessed. The preprocessing may be denoising processing for removing noise in the scene information, or completion processing for completing a missing part in the scene information. The specific preprocessing process can be set based on the requirements of the actual application, and is not described herein again.
By preprocessing the scene information, the effect of the acquired scene information on the actual application can be further ensured, the effective proceeding of the subsequent steps is ensured, and the accuracy of the final track prediction result is improved.
S120: determining time information corresponding to the scene information; the time information is used for representing the change situation of the scene information along with the time.
The time information describes how the scene information changes over time. When scene information is collected directly, it may already contain the movement of a target vehicle over a period of time. Because multiple target vehicles exist in the current scene, the corresponding time information can be extracted from the scene information, so that the correspondence between the positions of the target vehicles and time is calibrated against the time series, which facilitates the analysis and computation in subsequent steps.
In some embodiments, the time information may be extracted from the scene information using a first neural network model, which includes a model constructed based on a long short-term memory network. A Long Short-Term Memory network (LSTM) is a recurrent neural network suited to processing and predicting significant events separated by long intervals and delays in a time series. The LSTM receives a series of input features, produces one output per cycle through the recurrent computation of its internal matrix kernel, and obtains multiple output features after multiple cycles.
The structure of the first neural network model is described with reference to fig. 2. The model may include a plurality of LSTM cores. After memory information and the corresponding input feature 1 are fed into one LSTM core, the LSTM produces updated memory information and output feature 1 according to its processing logic. The resulting memory information is passed to the next LSTM core together with input feature 2, producing output feature 2 and new memory information. This is repeated until the loop condition is reached, and all the output features obtained constitute the final output.
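The core-by-core loop above can be sketched in miniature. The scalar gates and the weight values below are illustrative stand-ins for the matrix computations of a real LSTM:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM core step for scalar features; `w` holds illustrative
    scalar weights (real models use weight matrices)."""
    f = sigmoid(w["f"] * x + w["uf"] * h_prev)                      # forget gate
    i = sigmoid(w["i"] * x + w["ui"] * h_prev)                      # input gate
    o = sigmoid(w["o"] * x + w["uo"] * h_prev)                      # output gate
    c = f * c_prev + i * math.tanh(w["c"] * x + w["uc"] * h_prev)   # memory information
    h = o * math.tanh(c)                                            # output feature
    return h, c

def run_lstm(xs, w):
    h, c, outputs = 0.0, 0.0, []
    for x in xs:  # one output per cycle; memory is passed to the next core
        h, c = lstm_step(x, h, c, w)
        outputs.append(h)
    return outputs

w = {k: 0.5 for k in ("f", "uf", "i", "ui", "o", "uo", "c", "uc")}
outs = run_lstm([1.0, 0.5, -0.3], w)
print(len(outs))  # 3 (one output feature per input feature)
```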
In this embodiment, the scene information may be input into the corresponding LSTM kernel as a plurality of input features, and the corresponding time information may be sequentially extracted from the scene information. The specific logic setting process of the LSTM kernel may be set according to the requirements of the actual application, and is not described herein again.
Before applying the first neural network model, the first neural network model may be trained in advance, for example, the first neural network model may be trained and optimized by using a gradient descent method. The specific training process may be set based on the requirements of the actual application, and is not described herein again.
In practical applications, the time information may also be extracted in other ways, for example using other machine learning models, or by deriving it from the changes in relative position between the target vehicles; the options are not limited to the above embodiments and are not described here again.
S130: and calculating the spatial position information of each target vehicle by combining the scene information and the time information.
After the time information is acquired, spatial position information of each target vehicle may be calculated in combination with the scene information and the time information. The spatial position information may be used to indicate spatial positions of the target vehicle at different times, and specifically, the spatial position information may be determined by relative coordinate positions of the target vehicle at different times.
In some implementations, an input matrix can be constructed from the scene information and the time information. The input matrix represents the coordinates of each target vehicle at different moments; for example, a matrix of size M x N x 2 can be constructed from the coordinates of N vehicles over the past M frames. The spatial position information of each target vehicle is determined from the positional relations between the target vehicles at each moment in the input matrix, so that the relative positions of different target vehicles are determined.
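A minimal sketch of the M x N x 2 input matrix and of the relative positions derived from it, using synthetic coordinates in place of sensor data:

```python
import numpy as np

# M past frames, N target vehicles, 2 coordinates (x, y) per vehicle.
M, N = 4, 3
rng = np.random.default_rng(0)
tracks = rng.uniform(-50.0, 50.0, size=(M, N, 2))  # synthetic stand-in for sensor data

# Relative positions at the latest frame: an (N, N, 2) array whose
# entry [i, j] is the offset from vehicle i to vehicle j.
latest = tracks[-1]                                # (N, 2)
relative = latest[None, :, :] - latest[:, None, :]
print(tracks.shape, relative.shape)  # (4, 3, 2) (3, 3, 2)
```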
In some embodiments, the scene information and the time information may be input into a position calculation model to obtain the spatial position information of each target vehicle. The position calculation model includes a model constructed based on a graph attention network. A Graph Attention network (GAT) aggregates neighbor nodes through an attention mechanism, adaptively assigning different weights to different neighbors. The core of the attention mechanism is to assign weights to the given information; information with a high weight is what the system should emphasize.
Specifically, the position calculation model constructed based on the graph attention network may include at least two aggregation nodes, each of which receives node information transmitted by initial nodes. The initial nodes may correspond to the scene information of the respective target vehicles at the same moment; for example, an initial node may indicate the coordinates of a target vehicle's positions at different times. From the node information of each initial node, influence coefficients of the different initial nodes can be determined; these coefficients represent the weight of different target vehicles in the trajectory prediction process. The node information is aggregated according to the influence coefficients, and the aggregated information is passed to the next layer of aggregation nodes, until all the information has been aggregated into the final spatial position information.
As a concrete example, as shown in fig. 3, nodes 1, 2, and 3 each represent a target vehicle traveling in the application scene. For each node in the graph structure, the information transmitted by the other nodes is computed with one matrix, and the attention corresponding to the different nodes is computed with another matrix; nodes with larger attention values take a larger share in the fusion, while nodes with smaller attention have less influence on the computation. After this interactive computation, the other vehicles have exchanged sufficient information with the target vehicle, which then updates its own state with the aggregated information. Information interaction and fusion among the target vehicles are thus achieved, yielding the corresponding spatial position information.
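The attention-weighted fusion across nodes can be sketched as a single-head graph-attention step. The random parameters below stand in for trained weights, and the LeakyReLU and softmax details follow the standard GAT formulation rather than anything fixed by the patent:

```python
import numpy as np

def attention_aggregate(node_feats, W, a):
    """Single-head graph-attention fusion over a fully connected graph of
    vehicle nodes (a simplified sketch; a real GAT uses multiple heads and
    trained parameters W and a)."""
    h = node_feats @ W                       # transform each node's information
    n = h.shape[0]
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            z = np.concatenate([h[i], h[j]])
            s = z @ a
            e[i, j] = s if s > 0 else 0.2 * s  # LeakyReLU attention score
    e -= e.max(axis=1, keepdims=True)          # numerical stability
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)  # softmax weights
    return alpha @ h, alpha                    # weighted fusion, attention weights

rng = np.random.default_rng(1)
feats = rng.normal(size=(3, 4))  # nodes 1, 2, 3: one per target vehicle
W = rng.normal(size=(4, 4))
a = rng.normal(size=(8,))
fused, alpha = attention_aggregate(feats, W, a)
print(fused.shape)  # (3, 4)
```

Each row of `alpha` sums to 1, so a node's updated state is a convex combination of the transformed information from all nodes, with high-attention neighbors dominating.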
It should be noted that in practical applications, the input matrix and the position calculation model in the two embodiments may be combined to perform comprehensive calculation on the spatial position information, so as to more accurately and effectively acquire the corresponding spatial position information.
S140: constructing mixed environment information based on the spatial position information and the time information; the hybrid environment information is used to represent interactive influence conditions between target vehicles.
After the spatial position information and the time information are acquired, hybrid environment information may be constructed from them. The hybrid environment information represents the interaction between the target vehicles. The spatial position information and time information extracted from the scene information mainly describe how the position of a single target vehicle changes over time and do not reflect the interaction between target vehicles well; the two can therefore be deeply mixed into hybrid environment information, so that trajectory prediction in the subsequent steps is improved.
Specifically, in some embodiments, the spatial position information may be input into a second neural network model to obtain mixed spatial information. The second neural network model includes a model constructed based on a long short-term memory network and is used to mix the spatial position information, so as to deeply mine the influence between different vehicles. After the mixed spatial information is fused with the time information, the hybrid environment information can be constructed, so that the subsequent steps can predict the trajectory of each target vehicle on the basis of the degree of influence between different target vehicles.
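The patent does not fix the fusion operator for combining the mixed spatial information with the time information; one common assumption, sketched below, is simple per-vehicle concatenation of the two feature sets:

```python
import numpy as np

# Hypothetical fused representation: concatenate the mixed spatial features
# with the time features for each of N = 3 target vehicles. The feature
# dimensions (8 and 4) and the constant placeholder values are assumptions.
mixed_spatial = np.ones((3, 8))   # N vehicles x spatial feature dim
time_feats = np.zeros((3, 4))     # N vehicles x time feature dim
hybrid_env = np.concatenate([mixed_spatial, time_feats], axis=1)
print(hybrid_env.shape)  # (3, 12)
```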
For the specific structure of the second neural network, reference may be made to the description of the first neural network's structure in step S120, which is not repeated here.
S150: and integrating the mixed environment information and the historical tracks of the target vehicles to obtain the predicted tracks corresponding to the target vehicles.
After the hybrid environment information is obtained, the influence coefficients between different target vehicles can be determined from the position of each target vehicle and the degree of influence between them. Further, since the hybrid environment information also contains the spatial position of each target vehicle, the predicted trajectory of each target vehicle can be obtained from its historical trajectory.
In some embodiments, the hybrid environment information, the time information, and the historical trajectories of the target vehicles may be input into a third neural network model to obtain the predicted trajectory of each target vehicle. The third neural network model includes a model constructed based on a long short-term memory network and predicts the trajectory of each vehicle from the influence relations and historical trajectories of the target vehicles.
For the specific structure of the third neural network, reference may be made to the description of the first neural network's structure in step S120, which is not repeated here.
In some embodiments, Gaussian noise may also be input into the third neural network model together with the hybrid environment information, the time information, and the historical trajectories of the target vehicles. The main effect of the Gaussian noise is to introduce uncertainty, turning the third neural network model into a probabilistic generative model, so that the predicted trajectories better match real-world situations and adapt to different conditions.
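The role of the Gaussian noise can be sketched as follows: each prediction pass concatenates a fresh noise vector onto the decoder input, so repeated passes yield different sampled trajectories. The toy linear decoder, dimensions, and function names below are illustrative stand-ins for the trained D-LSTM:

```python
import numpy as np

def sample_trajectories(decoder, hybrid_env, history, n_samples=5, noise_dim=4, seed=0):
    """Draw several predicted trajectories by concatenating a fresh Gaussian
    noise vector to the decoder input each time (sketch of the probabilistic
    generation idea; `decoder` is any callable here)."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        z = rng.normal(size=noise_dim)  # source of uncertainty
        samples.append(decoder(np.concatenate([hybrid_env, history, z])))
    return samples

# Toy decoder: a fixed linear map standing in for the trained D-LSTM,
# producing 5 future (x, y) points per pass.
rng = np.random.default_rng(1)
Wd = rng.normal(size=(12 + 6 + 4, 10))
decoder = lambda v: (v @ Wd).reshape(5, 2)

trajs = sample_trajectories(decoder, np.ones(12), np.ones(6))
print(len(trajs), trajs[0].shape)  # 5 (5, 2)
```

Because only the noise vector changes between passes, the spread of the sampled trajectories directly reflects the uncertainty injected by the Gaussian input.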
In practical application, other factors may be combined to perform calculation when determining the predicted trajectory, which is not limited to the above factors and will not be described in detail.
Based on the above embodiments, the implementation of the method is illustrated with a specific example. Fig. 4 shows the concrete flow of trajectory prediction: after the raw information from the on-board sensors is acquired, it is preprocessed to obtain the scene information of multiple vehicles, which is input into LSTM No. 1 to extract the time information. The time information and the scene information are then input into the graph attention network to obtain the spatial position information of the target vehicles. The output of the graph attention network is fed into LSTM No. 2, which further mixes the spatial position information to determine the mutual influence of the target vehicles. Finally, the outputs of LSTM No. 2 and the graph attention network, together with the scene information, are input into LSTM No. 3 to predict the future driving trajectories of these vehicles.
In a concrete data-processing procedure based on the model in the above flow, the scene information e1, e2, e3, e4 of a plurality of target vehicles is input into the M-LSTM, which captures time information from the scene information. According to the time information, the scene information is input into each GAT, where the input may be the scene information of adjacent nodes at the same time, and each GAT outputs the spatial position information corresponding to each target vehicle. The obtained spatial position information is then input into the G-LSTM to obtain deeply mined mixed spatial position information. The mixed spatial position information, the time information, and the Gaussian noise are used together as input to the D-LSTM to obtain different predicted trajectories, and a driving strategy for the autonomous vehicle is formulated according to these predicted trajectories.
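The M-LSTM → GAT → G-LSTM → D-LSTM flow described above can be sketched end-to-end as below. Everything here is a toy stand-in: the LSTM weights are random and untrained, the GAT is replaced by plain dot-product attention, and the D-LSTM decoder is reduced to a linear head, so only the tensor shapes and the data flow reflect the described pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda v: 1.0 / (1.0 + np.exp(-v))

def lstm_last(seq, Wx, Wh):
    """Run a plain LSTM over seq (T, N, F) and return the final hidden
    state (N, H). Gates are stacked i|f|g|o in the weight matrices."""
    H = Wh.shape[0]
    h = np.zeros((seq.shape[1], H)); c = np.zeros_like(h)
    for x in seq:
        i, f, g, o = np.split(x @ Wx + h @ Wh, 4, axis=1)
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = np.tanh(c) * sig(o)
    return h

T, N, F, H = 10, 4, 6, 8            # 10 past steps, 4 target vehicles
scene = rng.normal(size=(T, N, F))  # stands in for e1..e4 over time

# M-LSTM: extract per-vehicle time information from the scene sequence
Wx1 = rng.normal(scale=0.1, size=(F, 4 * H))
Wh1 = rng.normal(scale=0.1, size=(H, 4 * H))
time_info = lstm_last(scene, Wx1, Wh1)              # (N, H)

# GAT stand-in: dot-product attention over neighbour nodes gives the
# influence coefficients; aggregation gives the spatial position info
scores = time_info @ time_info.T
att = np.exp(scores - scores.max(axis=1, keepdims=True))
att /= att.sum(axis=1, keepdims=True)
spatial = att @ time_info                           # (N, H)

# G-LSTM: further mix the spatial info (a single recurrent step here)
Wx2 = rng.normal(scale=0.1, size=(H, 4 * H))
Wh2 = rng.normal(scale=0.1, size=(H, 4 * H))
mixed = lstm_last(spatial[None], Wx2, Wh2)          # (N, H)

# D-LSTM stand-in: decode K diverse trajectories, with one Gaussian
# noise sample per trajectory making the decoder probabilistic
K, horizon = 3, 5
Wd = rng.normal(scale=0.1, size=(3 * H, 2 * horizon))
trajs = np.stack([
    np.concatenate([mixed, time_info, rng.normal(size=(N, H))], axis=1) @ Wd
    for _ in range(K)]).reshape(K, N, horizon, 2)
print(trajs.shape)                                  # (3, 4, 5, 2)
```

The result is K candidate (x, y) trajectories over the prediction horizon for each of the N target vehicles.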
From the above description of the embodiments and the scenario example, it can be seen that after acquiring the scene information corresponding to a plurality of target vehicles, the method first extracts the corresponding time information from the scene information to determine the position of each vehicle at each moment. It then recalculates the spatial position information of each target vehicle based on the scene information and time information, determines the mutual influence among the vehicles from their spatial positions and time information, and finally combines this influence with the historical trajectory of each target vehicle to obtain its predicted trajectory. In this way, when other target vehicles are present around the autonomous vehicle, the influence relationships among them can be determined from their positions at different moments, so that the driving trajectories of different target vehicles are predicted accurately and effectively while accounting for the influence the vehicles exert on one another. A subsequent autonomous-driving strategy can then be formulated according to the driving trajectories of the other vehicles, guaranteeing the safe driving of the autonomous vehicle.
Based on the vehicle trajectory prediction method, the embodiments of the specification further provide a vehicle trajectory prediction apparatus, which may be provided in a vehicle trajectory prediction device. As shown in fig. 5, the vehicle trajectory prediction apparatus may include the following modules.
A scene information obtaining module 510, configured to obtain scene information corresponding to at least two target vehicles respectively; the scene information is used for describing the running state of the target vehicle and/or the running environment of the target vehicle.
A time information determining module 520 for determining time information corresponding to the scene information; the time information is used for representing the change situation of the scene information along with the time.
And a spatial position information calculation module 530 for calculating spatial position information of each target vehicle by combining the scene information and the time information.
A hybrid environment information constructing module 540, configured to construct hybrid environment information based on the spatial location information and the time information; the hybrid environment information is used to represent interactive influence conditions between target vehicles.
And a predicted trajectory obtaining module 550, configured to synthesize the mixed environment information and the historical trajectories of the target vehicles to obtain predicted trajectories corresponding to the target vehicles.
Based on the vehicle track prediction method, the embodiment of the specification further provides a vehicle track prediction device. As shown in fig. 6, the vehicle trajectory prediction apparatus may include a memory and a processor.
In this embodiment, the memory may be implemented in any suitable manner. For example, the memory may be a read-only memory, a mechanical hard disk, a solid-state disk, a USB flash drive, or the like. The memory may be used to store computer program instructions.
In this embodiment, the processor may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth. The processor may execute the computer program instructions to perform the steps of: acquiring scene information corresponding to at least two target vehicles respectively; the scene information is used for describing the running state of the target vehicle and/or the running environment of the target vehicle; determining time information corresponding to the scene information; the time information is used for representing the change situation of the scene information along with the time; calculating the spatial position information of each target vehicle by combining the scene information and the time information; constructing mixed environment information based on the spatial position information and the time information; the hybrid environment information is used for representing interaction influence conditions between target vehicles; and integrating the mixed environment information and the historical tracks of the target vehicles to obtain the predicted tracks corresponding to the target vehicles.
This specification also provides an embodiment of a computer storage medium. The computer storage medium includes, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Cache, a Hard Disk Drive (HDD), a Memory Card, and the like. The computer storage medium stores computer program instructions which, when executed, implement the method of the embodiment corresponding to fig. 1 of the present specification.
The embodiments of the specification also provide an autonomous vehicle, which may be provided with a vehicle trajectory prediction device such as the one shown in fig. 7.
An example scenario of the automatic driving process is described with reference to fig. 7. As shown in the figure, the autonomous vehicle 710 detects surrounding vehicles while traveling and recognizes vehicles within a specified range as target vehicles. After detecting the target vehicles 731, 732, and 733, the autonomous vehicle 710 may detect their state information with its own sensing devices: for example, the speed of a target vehicle may be measured by a speed sensor, and the vehicle state and traffic-light information may be detected by an image sensor. After acquiring this vehicle-related information, the autonomous vehicle 710 may transmit it as scene information to the vehicle trajectory prediction device 720, so that the device can determine the influence relationships among the target vehicles 731, 732, and 733, and thus the effect of the different target vehicles on the subsequent travel trajectory. Specifically, after determining the target predicted trajectories based on the method in the embodiment corresponding to fig. 1, the vehicle trajectory prediction device 720 feeds them back to other processing modules on the vehicle or to a cloud server, so that these computing devices can adjust the driving strategy of the autonomous vehicle 710 accordingly; for example, the driving speed of the autonomous vehicle 710 may be reduced to avoid the risk of a rear-end collision with a target vehicle.
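The speed-reduction decision mentioned at the end can be illustrated with a toy check on a predicted trajectory. The gap threshold, slow-down factor, and constant-speed projection of the ego vehicle are all illustrative assumptions, not part of the patent:

```python
import numpy as np

def adjust_speed(ego_speed, ego_pos, pred_traj, dt=0.1,
                 min_gap=10.0, slow_factor=0.8):
    """Toy driving-strategy update: if the predicted trajectory of a
    target vehicle brings it within `min_gap` metres of the ego
    vehicle's projected positions, reduce the ego speed.

    pred_traj : (T, 2) predicted (x, y) way-points of the target.
    """
    T = pred_traj.shape[0]
    # project the ego vehicle forward at constant speed along x
    ego_xy = ego_pos + np.outer(np.arange(1, T + 1) * dt * ego_speed,
                                np.array([1.0, 0.0]))
    gaps = np.linalg.norm(pred_traj - ego_xy, axis=1)
    return ego_speed * slow_factor if gaps.min() < min_gap else ego_speed

# a slow lead vehicle predicted ~20 m ahead of a fast ego vehicle
lead = np.column_stack([20.0 + 0.5 * np.arange(10), np.zeros(10)])
v = adjust_speed(ego_speed=25.0, ego_pos=np.zeros(2), pred_traj=lead)
print(v)   # 20.0  (25.0 * 0.8, because the predicted gap closes)
```

A real planner would of course reason jointly over all K predicted trajectories of all target vehicles; this sketch only shows how one predicted trajectory can trigger a conservative speed adjustment.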
It should be noted that the vehicle trajectory prediction device may be a server located in the cloud: after the sensors arranged on the autonomous vehicle collect the corresponding target-vehicle and environmental information, they may send it directly to the cloud server, which determines the target trajectories from this information. Alternatively, the vehicle trajectory prediction device may be a computing unit provided in the autonomous vehicle, communicating with the other modules of the vehicle by wired or wireless communication.
The above scenarios are given only for a better understanding of the trajectory prediction process in an actual automatic driving setting; the prediction of the trajectory and the driving strategy of the vehicle itself may be adapted to the needs of practical applications so as to better fit different application scenarios, which is not described again here.
While the process flows described above include operations occurring in a particular order, it should be appreciated that the processes may include more or fewer operations, performed sequentially or in parallel (e.g., using parallel processors or a multi-threaded environment).
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of computer-readable media such as volatile memory, e.g., Random Access Memory (RAM), and/or non-volatile memory, e.g., Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The embodiments of this specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The described embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of an embodiment of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (13)

1. A vehicle trajectory prediction method, characterized by comprising:
acquiring scene information corresponding to at least two target vehicles respectively; the scene information is used for describing the running state and/or the running environment of the target vehicle;
determining time information corresponding to the scene information; the time information is used for representing the change situation of the scene information along with the time;
calculating the spatial position information of each target vehicle by combining the scene information and the time information;
constructing mixed environment information based on the spatial position information and the time information; the hybrid environment information is used for representing interaction influence conditions between target vehicles;
and integrating the mixed environment information and the historical tracks of the target vehicles to obtain the predicted tracks corresponding to the target vehicles.
2. The method of claim 1, wherein prior to determining the temporal information corresponding to the scene information, further comprising:
preprocessing the scene information; the preprocessing comprises at least one of denoising processing and completion processing.
3. The method of claim 1, wherein the determining the time information corresponding to the scene information comprises:
extracting time information from the scene information by using a first neural network model; the first neural network model comprises a model constructed based on a long-short term memory network.
4. The method of claim 1, wherein said calculating spatial location information for each target vehicle in combination with said scene information and temporal information comprises:
acquiring an input matrix according to the scene information and the time information; the input matrix is used for representing the coordinates of each target vehicle at different moments;
determining the spatial position information of each target vehicle according to the position relation between the target vehicles at each moment in the input matrix; the spatial position information is used for representing the relative position relationship between different target vehicles.
5. The method of claim 1, wherein said calculating spatial location information for each target vehicle in combination with said scene information and temporal information comprises:
inputting the scene information and the time information into a position calculation model to obtain the spatial position information of each target vehicle; the position calculation model comprises a model constructed based on a graph attention network.
6. The method of claim 5, wherein the location calculation model comprises at least two aggregation nodes; inputting the scene information and the time information into a position calculation model to obtain the spatial position information of each target vehicle, wherein the method comprises the following steps:
respectively receiving node information transmitted by an initial node by using each aggregation node; the initial nodes are used for respectively corresponding to scene information of each target vehicle at the same moment;
determining influence coefficients of different initial nodes according to the node information;
and aggregating the node information based on the influence coefficient to obtain spatial position information.
7. The method of claim 1, wherein the constructing hybrid environment information based on the spatial location information and temporal information comprises:
inputting the spatial position information into a second neural network model to obtain mixed spatial information; the second neural network model comprises a model constructed based on a long-short term memory network; the second neural network model is used for mixing spatial position information;
and constructing mixed environment information by using the mixed spatial information and the time information.
8. The method of claim 1, wherein the integrating the hybrid environmental information and the historical trajectories of the respective target vehicles to obtain predicted trajectories for the respective target vehicles comprises:
inputting the mixed environment information, the time information and the historical tracks of the target vehicles into a third neural network model to obtain predicted tracks corresponding to the target vehicles; the third neural network model comprises a model constructed based on a long-short term memory network; and the third neural network model is used for predicting the running track of each vehicle according to the influence relation between the target vehicles and the historical track.
9. The method of claim 8, wherein inputting the hybrid environmental information, the time information, and the historical trajectory of each target vehicle into a third neural network model to obtain a predicted trajectory corresponding to each target vehicle comprises:
and inputting the Gaussian noise, the mixed environment information, the time information and the historical track of each target vehicle into a third neural network model to obtain a predicted track corresponding to each target vehicle.
10. A vehicle trajectory prediction device characterized by comprising:
the scene information acquisition module is used for acquiring scene information corresponding to at least two target vehicles respectively; the scene information is used for describing the running state of the target vehicle and/or the running environment of the target vehicle;
a time information determination module for determining time information corresponding to the scene information; the time information is used for representing the change situation of the scene information along with the time;
the spatial position information calculation module is used for calculating spatial position information of each target vehicle by combining the scene information and the time information;
the mixed environment information construction module is used for constructing mixed environment information based on the spatial position information and the time information; the hybrid environment information is used for representing interaction influence conditions between target vehicles;
and the predicted track acquisition module is used for integrating the mixed environment information and the historical tracks of the target vehicles to obtain the predicted tracks corresponding to the target vehicles.
11. A vehicle trajectory prediction device comprising a memory and a processor;
the memory to store computer program instructions;
the processor to execute the computer program instructions to implement the steps of: acquiring scene information corresponding to at least two target vehicles respectively; the scene information is used for describing the running state of the target vehicle and/or the running environment of the target vehicle; determining time information corresponding to the scene information; the time information is used for representing the change situation of the scene information along with the time; calculating the spatial position information of each target vehicle by combining the scene information and the time information; constructing mixed environment information based on the spatial position information and the time information; the hybrid environment information is used for representing interaction influence conditions between target vehicles; and integrating the mixed environment information and the historical tracks of the target vehicles to obtain the predicted tracks corresponding to the target vehicles.
12. A computer storage medium having computer program instructions stored thereon, the computer program instructions, when executed, implementing the method of any of claims 1-9.
13. Autonomous vehicle, characterized in that it is equipped with a device according to claim 11.
CN202110666123.9A 2021-06-16 2021-06-16 Vehicle track prediction method, device, equipment and storage medium Pending CN113291321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110666123.9A CN113291321A (en) 2021-06-16 2021-06-16 Vehicle track prediction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110666123.9A CN113291321A (en) 2021-06-16 2021-06-16 Vehicle track prediction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113291321A true CN113291321A (en) 2021-08-24

Family

ID=77328463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110666123.9A Pending CN113291321A (en) 2021-06-16 2021-06-16 Vehicle track prediction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113291321A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091708A (en) * 2019-12-13 2020-05-01 中国科学院深圳先进技术研究院 Vehicle track prediction method and device
US20200324794A1 (en) * 2020-06-25 2020-10-15 Intel Corporation Technology to apply driving norms for automated vehicle behavior prediction
CN111798492A (en) * 2020-07-16 2020-10-20 商汤国际私人有限公司 Trajectory prediction method, apparatus, electronic device, and medium
CN112348293A (en) * 2021-01-07 2021-02-09 北京三快在线科技有限公司 Method and device for predicting track of obstacle
CN112686281A (en) * 2020-12-08 2021-04-20 深圳先进技术研究院 Vehicle track prediction method based on space-time attention and multi-stage LSTM information expression


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113740837A (en) * 2021-09-01 2021-12-03 广州文远知行科技有限公司 Obstacle tracking method, device, equipment and storage medium
CN113635912A (en) * 2021-09-10 2021-11-12 阿波罗智能技术(北京)有限公司 Vehicle control method, device, equipment, storage medium and automatic driving vehicle
CN113954864A (en) * 2021-09-22 2022-01-21 江苏大学 Intelligent automobile track prediction system and method fusing peripheral vehicle interaction information
CN113954864B (en) * 2021-09-22 2024-05-14 江苏大学 Intelligent automobile track prediction system and method integrating peripheral automobile interaction information
CN114312816A (en) * 2022-01-04 2022-04-12 大陆投资(中国)有限公司 Man-machine interaction method and system for moving travel tool
CN114872718A (en) * 2022-04-11 2022-08-09 清华大学 Vehicle trajectory prediction method, vehicle trajectory prediction device, computer equipment and storage medium
CN114715145A (en) * 2022-04-29 2022-07-08 阿波罗智能技术(北京)有限公司 Trajectory prediction method, device and equipment and automatic driving vehicle
WO2024008086A1 (en) * 2022-07-06 2024-01-11 华为技术有限公司 Trajectory prediction method as well as apparatus therefor, medium, program product, and electronic device
CN114913197A (en) * 2022-07-15 2022-08-16 小米汽车科技有限公司 Vehicle track prediction method and device, electronic equipment and storage medium
CN116001807A (en) * 2023-02-27 2023-04-25 安徽蔚来智驾科技有限公司 Multi-scene track prediction method, equipment, medium and vehicle
CN116001807B (en) * 2023-02-27 2023-07-07 安徽蔚来智驾科技有限公司 Multi-scene track prediction method, equipment, medium and vehicle

Similar Documents

Publication Publication Date Title
CN113291321A (en) Vehicle track prediction method, device, equipment and storage medium
EP4152204A1 (en) Lane line detection method, and related apparatus
CN110765894B (en) Target detection method, device, equipment and computer readable storage medium
CN110850854A (en) Autonomous driver agent and policy server for providing policies to autonomous driver agents
CN104021674B (en) A kind of quick and precisely prediction vehicle method by road trip time
CN114723955A (en) Image processing method, device, equipment and computer readable storage medium
US20210097266A1 (en) Disentangling human dynamics for pedestrian locomotion forecasting with noisy supervision
CN111860493A (en) Target detection method and device based on point cloud data
Wei et al. Survey of connected automated vehicle perception mode: from autonomy to interaction
CN111091023B (en) Vehicle detection method and device and electronic equipment
EP4102403A1 (en) Platform for perception system development for automated driving system
CN113291320A (en) Vehicle track prediction method, device, equipment and storage medium
CN114550449B (en) Vehicle track completion method and device, computer readable medium and electronic equipment
JP2019074849A (en) Drive data analyzer
CN114972911A (en) Method and equipment for collecting and processing output data of automatic driving perception algorithm model
CN116645645A (en) Coal Mine Transportation Safety Determination Method and Coal Mine Transportation Safety Determination System
CN114104005B (en) Decision-making method, device and equipment of automatic driving equipment and readable storage medium
CN116403174A (en) End-to-end automatic driving method, system, simulation system and storage medium
CN116206271A (en) Track prediction method, track prediction device and processor of intelligent body
CN117523914A (en) Collision early warning method, device, equipment, readable storage medium and program product
CN115269371A (en) Platform for path planning system development of an autonomous driving system
CN110333517B (en) Obstacle sensing method, obstacle sensing device and storage medium
CN114018265A (en) Patrol robot driving track generation method, equipment and medium
Fennessy Autonomous vehicle end-to-end reinforcement learning model and the effects of image segmentation on model quality
CN116597397B (en) Model training method and device for predicting vehicle track and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210824