WO2021190484A1 - Obstacle trajectory prediction method and device - Google Patents

Obstacle trajectory prediction method and device

Info

Publication number
WO2021190484A1
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
vehicle
state information
predicted
feature
Prior art date
Application number
PCT/CN2021/082310
Other languages
English (en)
French (fr)
Inventor
任冬淳
樊明宇
夏华夏
朱炎亮
钱德恒
李鑫
Original Assignee
北京三快在线科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京三快在线科技有限公司
Priority to US17/908,918 (publication US20230100814A1)
Priority to EP21777097.3A (publication EP4131062A4)
Publication of WO2021190484A1

Classifications

    • B60W 30/0956 Active safety: predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W 40/04 Estimation of non-directly measurable driving parameters related to ambient traffic conditions
    • B60W 50/0097 Control system details: predicting future conditions
    • G05D 1/0221 Control of position or course in two dimensions, specially adapted to land vehicles, with means for defining a desired trajectory involving a learning process
    • G06F 18/241 Pattern recognition: classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/049 Neural network architectures: temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V 10/82 Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians, exterior to a vehicle by using sensors mounted on the vehicle
    • B60W 2050/0028 Control system elements or transfer functions: mathematical models, e.g. for simulation
    • B60W 2420/403 Sensor type: image sensing, e.g. optical camera
    • B60W 2554/4041 Input parameters relating to dynamic objects: position
    • B60W 2554/4044 Input parameters relating to dynamic objects: direction of movement, e.g. backwards
    • B60W 2556/10 Input parameters relating to data: historical data
    • G06T 2207/20081 Special algorithmic details: training; learning
    • G06T 2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30241 Subject of image: trajectory
    • G06T 2207/30252 Subject of image: vehicle exterior; vicinity of vehicle
    • G06T 2207/30261 Subject of image: obstacle

Definitions

  • The present disclosure relates to the field of unmanned driving technology, and in particular to an obstacle trajectory prediction method and device.
  • Obstacles include static obstacles and dynamic obstacles. Since a static obstacle is still, it is easy for a vehicle to avoid it. However, for a vehicle to accurately avoid a dynamic obstacle, the future motion trajectory of the dynamic obstacle must be predicted.
  • The embodiments of the present disclosure provide an obstacle trajectory prediction method and device, so as to at least partially solve the above problems in the prior art.
  • An obstacle trajectory prediction method provided by the present disclosure includes:
  • determining, according to the historical state information and current state information of the vehicle and the historical state information and current state information of each obstacle, the current interaction feature under the joint action of the vehicle and the one or more obstacles;
  • determining, according to the determined current interaction feature and the future motion trajectory feature of the vehicle, a global interaction feature under the joint action of the vehicle and the one or more obstacles;
  • for the obstacle to be predicted among the one or more obstacles, determining the individual interaction feature of the obstacle to be predicted in the joint action according to the current state information of the obstacle to be predicted and the global interaction feature;
  • inputting the individual interaction feature of the obstacle to be predicted and the environmental information around the vehicle into a pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted.
  • Determining the current interaction feature includes: determining the location feature of the vehicle and each obstacle according to the current state information of the vehicle and the current state information of each obstacle; obtaining, according to the historical state information and current state information of the vehicle and the historical state information and current state information of each obstacle, the hidden variables respectively corresponding to the vehicle and each obstacle, and determining the tracking feature of the vehicle and each obstacle according to these hidden variables, wherein a hidden variable is used to characterize the state difference of the vehicle or an obstacle from the historical state to the current state; determining the motion feature of the vehicle according to the historical state information and current state information of the vehicle; and determining, according to the location feature, the tracking feature, and the motion feature of the vehicle, the current interaction feature of the vehicle and the one or more obstacles.
  • Determining the individual interaction feature of the obstacle to be predicted in the joint action includes: determining the feature vector corresponding to the current state information of the obstacle to be predicted as the current state vector of the obstacle to be predicted; determining the feature vector corresponding to the global interaction feature as the global interaction vector; and determining the individual interaction feature of the obstacle to be predicted in the joint action according to the vector dot product of the current state vector of the obstacle to be predicted and the global interaction vector.
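The dot-product step above can be sketched as follows. This is an illustrative reading, not the patented implementation: the "vector dot product" is interpreted here as an element-wise product so that the result is itself a feature vector, and the function name and dimensions are assumptions.

```python
import numpy as np

def individual_interaction_feature(state_vec, global_vec):
    """Weight the global interaction vector by the obstacle's current
    state vector, element by element, so the obstacle's individual
    interaction feature keeps the same dimension as both inputs."""
    state_vec = np.asarray(state_vec, dtype=float)
    global_vec = np.asarray(global_vec, dtype=float)
    if state_vec.shape != global_vec.shape:
        raise ValueError("both vectors must have the same dimension")
    return state_vec * global_vec

# Toy vectors standing in for the embedded current state and the
# global interaction feature.
feat = individual_interaction_feature([1.0, 2.0, 0.5], [3.0, 4.0, 2.0])
print(feat)  # [3. 8. 1.]
```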
  • Inputting environmental information around the vehicle into the pre-trained trajectory prediction model includes: collecting an actual image of the current environment around the vehicle; determining a global environmental feature according to the actual image; determining the location of the obstacle to be predicted in the actual image as the reference location, in the global environmental feature, of the local environmental feature corresponding to the obstacle to be predicted; determining the environmental feature at the reference location in the global environmental feature as the local environmental feature corresponding to the obstacle to be predicted; and inputting the determined local environmental feature corresponding to the obstacle to be predicted into the pre-trained trajectory prediction model.
  • Determining the global environmental feature according to the actual image includes: identifying each key element contained in the actual image; determining the position of each key element in the actual image; generating an abstract image corresponding to the actual image according to the position of each key element in the actual image and the preset model matched with each key element; and determining the global environmental feature according to the abstract image.
  • The pre-trained trajectory prediction model is a long short-term memory (LSTM) model including an encoding end and a decoding end. Inputting the individual interaction feature of the obstacle to be predicted and the environmental information around the vehicle into the pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted, includes: determining the state difference of the obstacle to be predicted from the historical state to the current state according to its historical state information and current state information; inputting the individual interaction feature of the obstacle to be predicted, the environmental information around the vehicle, and the state difference of the obstacle to be predicted into the encoding end, so that the encoding end outputs the hidden variable corresponding to the obstacle to be predicted; and inputting the hidden variable corresponding to the obstacle to be predicted, the individual interaction feature of the obstacle to be predicted, the environmental information around the vehicle, and the state difference of the obstacle to be predicted into the decoding end, so that the decoding end outputs the future motion trajectory of the obstacle to be predicted.
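The encode/decode data flow described above can be sketched as follows. This is only a structural sketch: single linear maps with random weights stand in for the trained LSTM encoding and decoding ends, and every dimension, name, and value is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: interaction feature, environment feature,
# state difference, latent (hidden) variable, prediction horizon.
FEAT, ENV, DIFF, LATENT, HORIZON = 8, 8, 4, 6, 3

# Random weights stand in for the trained encoding/decoding ends.
W_enc = rng.normal(size=(FEAT + ENV + DIFF, LATENT))
W_dec = rng.normal(size=(LATENT + FEAT + ENV + DIFF, 2 * HORIZON))

def encode(interaction, env, state_diff):
    """Encoding end: maps the obstacle's inputs to its hidden variable."""
    x = np.concatenate([interaction, env, state_diff])
    return np.tanh(x @ W_enc)

def decode(latent, interaction, env, state_diff):
    """Decoding end: maps the hidden variable plus the same inputs to a
    future trajectory of HORIZON (x, y) positions."""
    x = np.concatenate([latent, interaction, env, state_diff])
    return (x @ W_dec).reshape(HORIZON, 2)

interaction = rng.normal(size=FEAT)   # individual interaction feature
env = rng.normal(size=ENV)            # environmental information feature
state_diff = rng.normal(size=DIFF)    # historical-to-current state difference

z = encode(interaction, env, state_diff)
trajectory = decode(z, interaction, env, state_diff)
print(trajectory.shape)  # (3, 2): three future (x, y) positions
```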
  • Obtaining the hidden variable corresponding to the vehicle according to the historical state information and current state information of the vehicle includes: determining the state difference of the vehicle from the historical state to the current state according to the historical state information and current state information of the vehicle; and inputting the individual interaction feature of the vehicle, the environmental information around the vehicle, and the state difference of the vehicle into the encoding end, so that the encoding end outputs the hidden variable corresponding to the vehicle, wherein the individual interaction feature of the vehicle is obtained according to the current state information of the vehicle and the global interaction feature. Obtaining the hidden variable corresponding to each obstacle according to the historical state information and current state information of each obstacle includes: for each obstacle, determining the state difference of the obstacle from the historical state to the current state according to the historical state information and current state information of the obstacle; and inputting the environmental information around the vehicle and the state difference of the obstacle from the historical state to the current state into the encoding end, so that the encoding end outputs the hidden variable corresponding to the obstacle.
  • An obstacle trajectory prediction device provided by the present disclosure includes:
  • a monitoring module, configured to monitor one or more obstacles around the vehicle;
  • an obtaining module, configured to obtain, for each obstacle, the historical state information and current state information of the obstacle;
  • a current interaction feature determination module, configured to determine, according to the historical state information and current state information of the vehicle and the historical state information and current state information of each obstacle, the current interaction feature under the joint action of the vehicle and the one or more obstacles;
  • a future motion trajectory feature determination module, configured to obtain the future motion trajectory planned by the vehicle itself and determine the future motion trajectory feature of the vehicle according to the future motion trajectory;
  • a global interaction feature determination module, configured to determine, based on the determined current interaction feature and the future motion trajectory feature of the vehicle, the global interaction feature of the vehicle and the one or more obstacles;
  • an individual interaction feature determination module, configured to determine, for the obstacle to be predicted among the one or more obstacles, the individual interaction feature of the obstacle to be predicted in the joint action based on the current state information of the obstacle to be predicted and the global interaction feature;
  • a prediction module, configured to input the individual interaction feature of the obstacle to be predicted and the environmental information around the vehicle into a pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted.
  • The present disclosure provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the above obstacle trajectory prediction method is implemented.
  • An unmanned driving device provided by the present disclosure includes a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor implements the above obstacle trajectory prediction method when executing the program.
  • FIG. 1 is a schematic diagram of a system architecture of an obstacle trajectory prediction method provided by an embodiment of the disclosure;
  • FIG. 2 is a schematic flowchart of an obstacle trajectory prediction method provided by an embodiment of the disclosure;
  • FIG. 3 is a schematic structural diagram of an obstacle trajectory prediction device provided by an embodiment of the disclosure.
  • FIG. 4 is a schematic structural diagram of an unmanned driving device provided by an embodiment of the disclosure.
  • In the embodiments of the present disclosure, the current interaction feature, which characterizes the current interaction information under the joint action of the vehicle and each obstacle, is determined first; then, according to the current interaction feature and the future motion trajectory planned by the vehicle itself, the global interaction feature (characterizing future interaction information) under the joint action of the vehicle and each obstacle is determined, and the future motion trajectory of the obstacle is predicted based on the global interaction feature. Since the future motion trajectory planned by the vehicle itself is known, it can be used as prior knowledge to construct future interaction information, and the reliability of this future interaction information is relatively high. Compared with using only current interaction information, predicting the future motion trajectory of an obstacle through future interaction information is more accurate.
  • The system architecture shown in FIG. 1 can be used to predict the trajectory of an obstacle. The system architecture mainly includes two parts: an interaction network and a prediction network.
  • The interaction network is used to: determine the motion feature of the vehicle based on the vehicle's historical state information and current state information; determine the location feature of the vehicle and each obstacle based on the current state information of the vehicle and of each obstacle; determine, according to the historical and current state information of the vehicle and of each obstacle, the hidden variables corresponding to the vehicle and each obstacle, and determine the tracking feature of the vehicle and each obstacle according to these hidden variables; determine the current interaction feature according to the vehicle's motion feature, the location feature of the vehicle and each obstacle, and the tracking feature of the vehicle and each obstacle; determine the future motion trajectory feature of the vehicle from the future motion trajectory planned by the vehicle itself; and determine, according to the current interaction feature and the future motion trajectory feature of the vehicle, the global interaction feature between the vehicle and each obstacle.
  • The prediction network is used to predict the future motion trajectory of the obstacle to be predicted among the obstacles: first, the individual interaction feature is determined according to the current state information of the obstacle to be predicted and the global interaction feature; then, the determined individual interaction feature and the environmental information around the vehicle are input into the trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted.
  • The system framework can be implemented on the computer of the vehicle, with the central processing unit (CPU) of the computer executing the corresponding program stored in the memory.
  • The system framework can also be implemented on a terminal that can interact with the vehicle, such as a smartphone, smart watch, notebook computer, or special-purpose computer.
  • The system framework may also be implemented on a cloud device that can interact with the vehicle, such as a server or cloud processor. The present disclosure does not limit this. For simplicity, the implementation of the system framework on the vehicle's computer is used as an example for description.
  • FIG. 2 is a schematic flowchart of an obstacle trajectory prediction method provided by an embodiment of the disclosure, and the schematic flowchart includes:
  • S100 Monitor one or more obstacles around the vehicle.
  • the status information may be: the coordinates (x, y) where the obstacle is located, the speed (v) of the obstacle, the acceleration (a) of the obstacle, and so on.
  • The number and types of obstacles that interact with the vehicle are dynamically changing; for example, three obstacles a, b, and c may interact with the vehicle in one time period, while four obstacles a, c, d, and e may interact with the vehicle in another time period. Therefore, the vehicle needs to monitor the obstacles that interact with it in real time and update the collected data in time.
  • S102 For each obstacle, obtain historical state information and current state information of the obstacle.
  • The state information of an obstacle can be collected by equipment installed on the vehicle that interacts with the obstacle, such as cameras and radars; collected by sensors installed on the obstacle itself and sent to the vehicle through a network; or determined by a cloud device based on the location of the obstacle and sent to the vehicle through a network.
  • The current state information is the state information of the obstacle at the current moment, and the historical state information can be the state information of the obstacle at the moment immediately before the current moment, or at multiple historical moments before the current moment. Both the current state information and the historical state information are known information.
  • S104 According to the historical state information and current state information of the vehicle, and the historical state information and current state information of each obstacle, determine the current interaction feature of the current vehicle and one or more obstacles.
  • the current interaction feature represents the interaction between the vehicle and each obstacle at the current moment.
  • S106 Obtain the future motion trajectory planned by the vehicle itself, and determine the future motion trajectory feature of the vehicle according to the future motion trajectory.
  • the future motion trajectory may be a motion trajectory from the current moment to the next moment, or a section of motion trajectory formed by motion trajectories from the current moment to multiple moments in the future.
  • For example, the current moment is defined as t, and the future moments are defined as t+1, t+2, and t+3; the position coordinates of the vehicle at moments t+1, t+2, and t+3 are defined as P_{t+1}^{ego}, P_{t+2}^{ego}, and P_{t+3}^{ego}, respectively.
  • Feature vectors can be extracted from P_{t+1}^{ego}, P_{t+2}^{ego}, and P_{t+3}^{ego} respectively, the extracted feature vectors are spliced together, and maximum pooling is performed on the spliced feature vectors to obtain the future motion trajectory feature of the vehicle.
  • The Embedding method may be used to extract the feature vector at a given moment. Embedding uses a low-dimensional dense vector to represent an object; the Embedding vector can express some characteristics of the corresponding object, and the distance between vectors reflects the similarity between objects.
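The extract-splice-pool steps above can be sketched as follows; a random weight matrix stands in for a learned Embedding, and all sizes and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 8  # embedding size (illustrative)

# A random matrix stands in for a learned Embedding that maps a 2-D
# position coordinate to a low-dimensional dense feature vector.
W_embed = rng.normal(size=(2, EMB))

def embed(position):
    return np.asarray(position, dtype=float) @ W_embed

# Planned future positions of the vehicle at t+1, t+2, t+3 (made up).
future_positions = [(1.0, 0.1), (2.1, 0.3), (3.3, 0.6)]

# Extract one feature vector per position, splice (stack) them, then take
# the element-wise maximum over the spliced vectors (max pooling).
spliced = np.stack([embed(p) for p in future_positions])  # shape (3, EMB)
future_trajectory_feature = spliced.max(axis=0)           # shape (EMB,)
print(future_trajectory_feature.shape)  # (8,)
```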
  • S108 According to the determined current interaction feature and the future motion trajectory feature of the vehicle, determine the global interaction feature under the joint action of the vehicle and one or more obstacles.
  • By splicing the feature vector corresponding to the current interaction feature with the feature vector corresponding to the future motion trajectory feature of the vehicle, the global interaction feature under the joint action of the vehicle and each obstacle can be obtained.
  • the current interaction feature combines historical state information and current state information, and is used to characterize the interaction between the vehicle and each obstacle at the current moment.
  • the global interaction feature adds the future motion trajectory of the vehicle itself based on the current interaction feature, and combines current state information and predicted future state information, which can characterize the interaction between the vehicle and various obstacles in the future to a certain extent.
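The splicing described above amounts to a simple concatenation of the two feature vectors; this minimal sketch uses made-up values.

```python
import numpy as np

# Made-up feature vectors; in the method they come from the earlier steps.
current_interaction_feature = np.array([0.2, -1.0, 0.7])
future_trajectory_feature = np.array([1.5, 0.0])

# Splicing the two vectors yields the global interaction feature.
global_interaction_feature = np.concatenate(
    [current_interaction_feature, future_trajectory_feature])
print(global_interaction_feature.shape)  # (5,)
```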
  • S110 For the obstacle to be predicted in each obstacle, determine the individual interaction characteristics of the obstacle to be predicted in the joint action according to the current state information of the obstacle to be predicted and the determined global interaction characteristics.
  • the obstacle to be predicted is any obstacle among the obstacles.
  • the embodiments of the present disclosure determine the individual interaction characteristics of the obstacle to be predicted in the joint action of the vehicle and each obstacle, and the individual interaction characteristics are part of the global interaction characteristics. To a certain extent, it can characterize the interactive information around the obstacle to be predicted in the future, and the interactive information contains the future state information of the obstacle to be predicted.
  • S112 Input the individual interaction characteristics of the obstacle to be predicted and the environmental information around the vehicle into a pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted.
  • the environmental information of the obstacle to be predicted is not easy to obtain, but the environmental information around the vehicle is easy to obtain.
  • the environmental information of the obstacle to be predicted may be characterized by the environmental information around the vehicle that interacts with the obstacle to be predicted.
  • The current state information of the obstacle to be predicted, the individual interaction feature of the obstacle to be predicted, and the environmental information around the vehicle can also be input into the pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted.
  • The state information at the next moment can be predicted based on the state information at the current moment; alternatively, the state information in a future time period (including multiple moments), that is, a section of motion trajectory, can be predicted based on the state information in the current time period (including multiple moments).
  • the location characteristics of the vehicle and each obstacle can be determined according to the current state information of the vehicle and the current state information of each obstacle.
  • the feature vector can be extracted according to the current state information of the vehicle.
  • The current state information of the vehicle can be characterized by the position coordinate of the vehicle at the current moment t, and this position coordinate is defined as P_t^{ego}.
  • the feature vector can also be extracted according to the current state information of each obstacle, where the current state information of each obstacle can also be characterized by the position coordinates of the obstacle.
  • Each obstacle can be numbered 1, 2, 3, ..., n, and the position coordinates of the obstacles are defined as P_t^1, P_t^2, P_t^3, ..., P_t^n in turn.
  • The feature vectors corresponding to the current state information of the vehicle and of each obstacle can then be spliced, that is, the feature vectors corresponding to P_t^{ego}, P_t^1, P_t^2, P_t^3, ..., P_t^n are spliced, and maximum pooling is performed on the spliced feature vectors to obtain the location feature of the vehicle and each obstacle.
  • the Embedding method may be used to obtain the feature vector corresponding to each state information.
  • the weight matrices used for different state information may be different.
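To make the splicing and max-pooling step concrete, here is a minimal NumPy sketch. This is not the patent's implementation: the linear embedding, the dimensions, and the coordinates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D_STATE, D_EMB = 2, 16                      # assumed: 2-D positions, 16-D embeddings
W_ego = rng.normal(size=(D_EMB, D_STATE))   # embedding weight matrix for the vehicle
W_obs = rng.normal(size=(D_EMB, D_STATE))   # a different weight matrix for obstacles

def embed(position, W):
    """A simple linear embedding of a raw position into feature space."""
    return W @ position

p_ego = np.array([0.0, 0.0])                                  # P_t^ego
p_obstacles = [np.array([3.0, 1.0]), np.array([-2.0, 4.0])]   # P_t^1, P_t^2

# Embed every position, splice (stack) the vectors, then max-pool element-wise.
vectors = [embed(p_ego, W_ego)] + [embed(p, W_obs) for p in p_obstacles]
location_feature = np.max(np.stack(vectors), axis=0)          # shape (D_EMB,)
```

Max pooling makes the result independent of obstacle ordering and of how many obstacles are present, which matches the dynamically changing obstacle set described above.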
  • the hidden variables corresponding to the vehicle and each obstacle can be obtained according to the historical state information and current state information of the vehicle and of each obstacle, and the tracking features of the vehicle and each obstacle are then determined according to the hidden variables respectively corresponding to the vehicle and the obstacles.
  • the hidden variable is used to characterize the state difference of the vehicle or of each obstacle from the historical state to the current state. Since the vehicle and different obstacles have different state differences from the historical state to the current state, the hidden variables can characterize the tracking information of the vehicle and each obstacle to a certain extent. As mentioned above, the number of obstacles interacting with the vehicle changes dynamically; tracking obstacles by numbering them would therefore be time-consuming and labor-intensive, with poor tracking performance.
  • the method for determining the tracking features is similar to the method for determining the location features above: first extract a feature vector for each hidden variable, splice the extracted feature vectors, and perform maximum pooling on the spliced feature vector to obtain the tracking features; this is not repeated here.
  • the model in the predictive network can be used to obtain each hidden variable.
  • the above-mentioned position coordinate information is determined using the world coordinate system, so the embodiments of the present disclosure are suitable for scenes of the world coordinate system.
  • the embodiments of the present disclosure can also be applied to a vehicle coordinate system (that is, a coordinate system is established with the vehicle itself as the center).
  • the vehicle's movement characteristics can be determined according to the vehicle's historical state information and current state information.
  • the movement characteristics of the vehicle characterize the state difference of the vehicle from the historical state to the current state.
  • the position coordinate information in the vehicle coordinate system can be determined by referring to the movement characteristics of the vehicle.
  • the location features, tracking features, and vehicle motion feature can be input to a Gated Recurrent Unit (GRU) to further extract features from the already-extracted feature vectors, finally obtaining the current interaction feature under the joint action of the current vehicle and the obstacles.
  • the GRU model can also be replaced with a Long Short-Term Memory (LSTM) to extract features, and other models can also be used, which is not limited in the embodiments of the present disclosure.
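As a sketch of how a GRU can fuse the three feature groups, the following minimal NumPy cell concatenates the location, tracking, and motion features and runs one gated update. The dimensions and random weights are illustrative assumptions, not the patent's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, P):
    """One GRU step: update gate z, reset gate r, candidate state n."""
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h)
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ h)
    n = np.tanh(P["Wn"] @ x + P["Un"] @ (r * h))
    return (1 - z) * h + z * n

rng = np.random.default_rng(1)
D_IN, D_H = 24, 32   # hypothetical input / hidden sizes
P = {k: rng.normal(scale=0.1, size=(D_H, D_IN if k[0] == "W" else D_H))
     for k in ("Wz", "Uz", "Wr", "Ur", "Wn", "Un")}

# Concatenate location, tracking and vehicle-motion features (8-D each here).
location, tracking, motion = (rng.normal(size=8) for _ in range(3))
x = np.concatenate([location, tracking, motion])      # (24,)
current_interaction = gru_step(x, np.zeros(D_H), P)   # (32,)
```

Swapping in an LSTM cell, as the text notes, changes only the gating equations; the fuse-then-recur structure stays the same.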
  • an actual image of the current environment around the vehicle can be collected, and the global environment feature can be determined according to the actual image; according to the position of the obstacle to be predicted in the actual image, the position of the local environment feature corresponding to the obstacle to be predicted within the global environment feature is determined as a reference position; the environment feature corresponding to the reference position in the global environment feature is determined as the local environment feature corresponding to the obstacle to be predicted.
  • inputting the determined local environment features corresponding to the obstacle to be predicted into the pre-trained trajectory prediction model can further improve the accuracy of trajectory prediction.
  • the above-mentioned method of determining local environmental characteristics can be specifically implemented by ROI (Region of Interest) Align technology.
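The idea can be illustrated with a much-simplified nearest-neighbour crop. Real ROI Align uses bilinear sampling; the feature map, box coordinates, and grid size below are toy assumptions.

```python
import numpy as np

def local_feature(global_feat, box, img_hw, out=2):
    """Crop the region of a feature map corresponding to an image-space box.

    A simplified nearest-neighbour stand-in for ROI Align: the box is scaled
    from image coordinates to feature-map coordinates and sampled on an
    out x out grid.
    """
    C, Fh, Fw = global_feat.shape
    H, W = img_hw
    x0, y0, x1, y1 = box
    sx, sy = Fw / W, Fh / H                       # image -> feature-map scale
    ix = np.clip(np.round(np.linspace(x0 * sx, x1 * sx, out)).astype(int), 0, Fw - 1)
    iy = np.clip(np.round(np.linspace(y0 * sy, y1 * sy, out)).astype(int), 0, Fh - 1)
    return global_feat[:, iy[:, None], ix[None, :]]   # (C, out, out)

feat = np.arange(2 * 8 * 8, dtype=float).reshape(2, 8, 8)  # toy global feature map
roi = local_feature(feat, box=(40, 20, 80, 60), img_hw=(100, 100))
```

The returned crop is the "local environment feature" for the obstacle's reference position; a production system would use e.g. torchvision's `roi_align` instead.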
  • the actual image can also be converted into an abstract image by removing irrelevant elements in the actual image, such as surrounding trees and houses, and retaining only key elements, such as road maps, traffic routes, and traffic lights. The simplified information can improve prediction efficiency.
  • the method of converting the actual image into an abstract image may include: identifying each key element contained in the actual image and determining the position of each key element in the actual image; and, for each key element, generating an abstract image corresponding to the actual image according to the position of the key element in the actual image and a preset model matching the key element.
  • the generated abstract image is input to the pre-trained environment model, so that the environment model outputs global environment features according to the abstract image.
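A toy sketch of the key-element abstraction step: the element classes, the "preset models" (rendered here as flat class-id boxes), and the coordinates are all illustrative assumptions.

```python
import numpy as np

# Hypothetical preset "models": one class id per key element type.
PRESET = {"road": 1, "lane_line": 2, "traffic_light": 3}

def to_abstract(size, elements):
    """Build an abstract image: drop everything except key elements and draw
    each element with its preset template at its detected position."""
    canvas = np.zeros(size, dtype=np.uint8)
    for cls, (r0, c0, r1, c1) in elements:
        canvas[r0:r1, c0:c1] = PRESET[cls]
    return canvas

# Detected key elements with their positions in the actual image (toy values).
detected = [("road", (60, 0, 100, 100)),
            ("lane_line", (60, 48, 100, 52)),
            ("traffic_light", (10, 70, 20, 75))]
abstract = to_abstract((100, 100), detected)
```

The resulting abstract image, with trees, houses, and other irrelevant content zeroed out, is what would be fed to the environment model.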
  • the individual interaction features and the environmental information can be input to the pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted.
  • the pre-trained trajectory prediction model may be an LSTM model including an encoding end and a decoding end. According to the historical state information and current state information of the obstacle to be predicted, the state difference of the obstacle to be predicted from the historical state to the current state can be determined.
  • the individual interaction characteristics of the obstacle to be predicted, the environment information around the vehicle, and the state difference of the obstacle to be predicted from the historical state to the current state are input to the encoding terminal, so that the encoding terminal outputs the latent variable corresponding to the obstacle to be predicted.
  • the hidden variable corresponding to the vehicle may be obtained according to the historical state information and current state information of the vehicle. Specifically, the state difference of the vehicle from the historical state to the current state can be determined according to the vehicle's own historical and current state information; the individual interaction feature of the vehicle, the environmental information around the vehicle, and the state difference of the vehicle from the historical state to the current state are input to the encoding end, so that the encoding end outputs the hidden variable corresponding to the vehicle; the individual interaction feature of the vehicle is obtained according to the current state information of the vehicle and the global interaction feature.
  • the hidden variables corresponding to each obstacle can be obtained according to the historical state information and current state information of each obstacle.
  • the state difference of an obstacle from the historical state to the current state can be determined according to the historical and current state information of the obstacle; the individual interaction feature of the obstacle, the environmental information around the vehicle, and the state difference of the obstacle from the historical state to the current state are input to the encoding end, so that the encoding end outputs the hidden variable corresponding to the obstacle; the individual interaction feature of the obstacle is obtained according to the current state information of the obstacle and the global interaction feature.
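Schematically, the encoding end maps (individual interaction feature, environment information, state difference) to a hidden variable. The single tanh layer below is a deliberately simplified stand-in for the LSTM encoder, and all dimensions are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

def encode(individual_feat, env_feat, state_diff, W):
    """Toy encoding end: one linear layer + tanh standing in for the LSTM
    encoder that maps its three inputs to a hidden variable."""
    x = np.concatenate([individual_feat, env_feat, state_diff])
    return np.tanh(W @ x)

D_IND, D_ENV, D_DIFF, D_LAT = 8, 8, 2, 16       # assumed sizes
W = rng.normal(scale=0.1, size=(D_LAT, D_IND + D_ENV + D_DIFF))

# State difference from historical to current state (e.g. a displacement).
state_diff = np.array([1.2, -0.4])
latent = encode(rng.normal(size=D_IND), rng.normal(size=D_ENV), state_diff, W)
```

The same encoding is applied per entity (vehicle and each obstacle), which is what lets the hidden variables serve as tracking signals without explicit obstacle numbering.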
  • the trajectory prediction model may be an LSTM model
  • the environment model may be a convolutional neural network (Convolutional Neural Networks, CNN) model.
  • the trajectory prediction model and the environment model may also adopt other models, which are not limited in the embodiment of the present disclosure.
  • the current interaction feature between the vehicle and the obstacles is determined through the historical and current state information of the vehicle and of each obstacle (characterizing current interaction information through history and the present).
  • the future motion trajectory planned by the vehicle itself is added as prior knowledge to obtain the global interaction feature (characterizing future interaction information through the present and the future).
  • the individual interaction feature is then determined, that is, the part of the global interaction feature that characterizes the future interaction information around the obstacle to be predicted.
  • the embodiments of the present disclosure use global interaction features to characterize the future interaction information between the vehicle and each obstacle.
  • the influence of the motion trajectory here also refers to the future motion trajectory planned by the vehicle itself. Since the future trajectory planned by the vehicle itself is known, it can be used as prior knowledge in the present disclosure, and it can characterize the future interaction between the vehicle and the obstacles to a certain extent. In this way, the predicted future motion trajectory is closer to the actual trajectory, and the future trajectories of obstacles can be predicted more accurately even in environments with more complicated traffic conditions.
  • the obstacle trajectory prediction method provided by the embodiment of the present disclosure can predict how the obstacle will travel in the future, which is convenient for the vehicle to avoid the obstacle accurately.
  • the method can also provide a correction reference for the vehicle's own planned path: the future motion trajectory planned by the vehicle itself is used as prior knowledge, the prior knowledge assists in predicting the future motion trajectories of the obstacles, and the predicted future motion trajectories of the obstacles are then used to modify the future motion trajectory planned by the vehicle itself (that is, the prior knowledge), making the vehicle's own path planning more accurate.
  • the trajectory prediction method can also be applied to other fields, which is not limited in the embodiment of the present disclosure.
  • the aforementioned obstacle trajectory prediction method provided by the present disclosure can be specifically used for path planning of unmanned vehicles or obstacle avoidance of unmanned vehicles.
  • the unmanned vehicle can be an unmanned delivery vehicle, and the unmanned delivery vehicle can be applied to the field of delivery using an unmanned delivery vehicle, for example, an unmanned delivery vehicle is used for express delivery, takeaway and other delivery scenarios.
  • an autonomous driving fleet composed of multiple unmanned delivery vehicles can be used for delivery.
  • the method can be applied to, for example, the autonomous driving device of the above-mentioned unmanned vehicle, or applied to a server or cloud computing device that communicates with the autonomous driving device.
  • the present disclosure also provides corresponding devices, storage media, and unmanned driving equipment.
  • FIG. 3 is a schematic structural diagram of an obstacle trajectory prediction device provided by an embodiment of the disclosure, and the device includes:
  • the monitoring module 200 is used to monitor various obstacles around the vehicle;
  • the obtaining module 202 is configured to obtain historical state information and current state information of the obstacle for each obstacle;
  • the current interaction feature determination module 204 is configured to determine, according to the historical and current state information of the vehicle and the historical and current state information of each obstacle, the current interaction feature under the current joint action of the vehicle and the one or more obstacles;
  • the future motion trajectory feature determination module 206 is configured to obtain the future motion trajectory planned by the vehicle itself, and determine the future motion trajectory feature of the vehicle according to the future motion trajectory;
  • the global interaction feature determination module 208 is configured to determine the global interaction feature of the vehicle and the one or more obstacles based on the determined current interaction feature and the future motion trajectory feature of the vehicle;
  • the individual interaction feature determination module 210 is configured to determine, for the obstacle to be predicted among the one or more obstacles, the individual interaction feature of the obstacle to be predicted in the joint action, based on the current state information of the obstacle to be predicted and the global interaction feature;
  • the prediction module 212 is configured to input the individual interaction characteristics of the obstacle to be predicted and the environmental information around the vehicle into a pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted.
  • the current interaction feature determination module 204 is configured to: determine the location features of the vehicle and each obstacle according to the current state information of the vehicle and the current state information of each obstacle; obtain the hidden variables corresponding to the vehicle and each obstacle according to the historical and current state information of the vehicle and of each obstacle, and determine the tracking features of the vehicle and each obstacle according to the hidden variables respectively corresponding to them, where the hidden variable is used to characterize the state difference of the vehicle or of each obstacle from the historical state to the current state; determine the motion feature of the vehicle according to the historical and current state information of the vehicle; and determine the current interaction feature under the joint action of the current vehicle and the one or more obstacles according to the location features, the tracking features, and the motion feature of the vehicle.
  • the individual interaction feature determination module 210 is configured to determine a feature vector corresponding to the current state information of the obstacle to be predicted as the current state vector of the obstacle to be predicted; determine a feature vector corresponding to the global interaction feature as a global interaction vector; and determine the individual interaction feature of the obstacle to be predicted in the joint action according to the vector dot product of the current state vector of the obstacle to be predicted and the global interaction vector.
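The "vector dot product" step can be sketched as follows. Note an interpretation choice: an element-wise (Hadamard) product is used here so that the result is still a feature vector, whereas `np.dot` would collapse it to a scalar; the vectors and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 32                                        # assumed feature dimension
current_state_vec = rng.normal(size=D)        # current state vector of the obstacle
global_interaction_vec = rng.normal(size=D)   # global interaction vector

# Element-wise product: each dimension of the global interaction feature is
# weighted by how strongly it relates to this particular obstacle's state.
individual_interaction = current_state_vec * global_interaction_vec
```

The result picks out, from the joint-action feature, the part relevant to this one obstacle.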
  • the prediction module 212 is configured to collect an actual image of the current environment around the vehicle; determine the global environment feature according to the actual image; determine, according to the position of the obstacle to be predicted in the actual image, the position of the local environment feature corresponding to the obstacle to be predicted within the global environment feature as a reference position; determine the environment feature corresponding to the reference position in the global environment feature as the local environment feature corresponding to the obstacle to be predicted; and input the determined local environment feature corresponding to the obstacle to be predicted into the pre-trained trajectory prediction model.
  • the prediction module 212 is also used to identify each key element contained in the actual image; determine the position of each key element in the actual image; according to each key element in the actual image The location, and the preset model respectively matched with each key element, generate an abstract image corresponding to the actual image; and determine the global environmental feature according to the abstract image.
  • the pre-trained trajectory prediction model is an LSTM model including an encoding end and a decoding end.
  • the prediction module 212 is also used to determine the state difference of the obstacle to be predicted from the historical state to the current state according to its historical and current state information; input the individual interaction feature of the obstacle to be predicted, the environmental information around the vehicle, and the state difference of the obstacle to be predicted from the historical state to the current state into the encoding end, so that the encoding end outputs the hidden variable corresponding to the obstacle to be predicted; and input the hidden variable corresponding to the obstacle, the individual interaction feature of the obstacle to be predicted, the environmental information around the vehicle, and the state difference of the obstacle to be predicted from the historical state to the current state into the decoding end, so that the decoding end outputs the future motion trajectory of the obstacle to be predicted.
  • the current interaction feature determination module 204 is further configured to determine the state difference of the vehicle from the historical state to the current state according to the historical and current state information of the vehicle; and input the individual interaction feature of the vehicle, the environmental information around the vehicle, and the state difference of the vehicle from the historical state to the current state into the encoding end, so that the encoding end outputs the hidden variable corresponding to the vehicle; the individual interaction feature of the vehicle is obtained according to the current state information of the vehicle and the global interaction feature.
  • the current interaction feature determining module 204 is further configured to determine, for each obstacle, the state difference of the obstacle from the historical state to the current state according to the historical state information and current state information of the obstacle;
  • the individual interaction characteristics of the obstacle, the environmental information around the vehicle, and the state difference of the obstacle from the historical state to the current state are input to the encoding terminal, so that the encoding terminal outputs the hidden variable corresponding to the obstacle;
  • the individual interaction feature of the obstacle is obtained according to the current state information of the obstacle and the global interaction feature.
  • the present disclosure also provides a computer-readable storage medium storing a computer program which, when executed by a controller, can prompt the controller to implement the obstacle trajectory prediction method shown in FIG. 2 above.
  • the embodiment of the present disclosure also provides a schematic structural diagram of the unmanned driving device shown in FIG. 4.
  • the driverless device includes a processor, internal bus, network interface, memory, and non-volatile memory, and may also include hardware required for other services.
  • the processor reads the corresponding instruction from the non-volatile memory to the memory and then runs, so as to implement the obstacle trajectory prediction method described in FIG. 2 above.
  • the present disclosure does not exclude other implementations, such as logic devices or a combination of software and hardware. That is to say, the execution body of the processing flow is not limited to logic units, and can also be hardware or logic devices.
  • an improvement of a technology can be clearly distinguished as a hardware improvement (for example, an improvement in a circuit structure such as a diode, a transistor, or a switch) or a software improvement (an improvement in a method flow).
  • nowadays, the improvement of many method flows can be regarded as a direct improvement of the hardware circuit structure. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into the hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized by a hardware entity module.
  • for example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. This programming is mostly implemented in a Hardware Description Language (HDL), of which there is not just one kind but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used.
  • the controller can be implemented in any suitable manner.
  • the controller can take the form of, for example, a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller can also be implemented as part of the memory control logic.
  • the method steps can be logically programmed so that the controller realizes the same function in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component. Or even, the devices for realizing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cell phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or Any combination of these devices.
  • the embodiments of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the present disclosure may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present disclosure may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • these computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include a non-permanent memory, a random access memory (RAM), and/or a non-volatile memory in a computer-readable medium, such as a read-only memory (ROM) or a flash memory (flash RAM).
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
  • computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • program modules can also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network.
  • program modules can be located in local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

Disclosed herein are an obstacle trajectory prediction method and apparatus. According to embodiments of the present disclosure, a global interaction feature under the joint action of a vehicle and each obstacle is determined according to the historical state information and current state information of the vehicle and of each obstacle, as well as the future motion trajectory planned by the vehicle itself; an individual interaction feature of an obstacle to be predicted is determined according to the global interaction feature and the current state information of the obstacle to be predicted; and the future motion trajectory of the obstacle to be predicted is predicted by means of the individual interaction feature and the environmental information around the vehicle.

Description

Obstacle Trajectory Prediction Method and Apparatus

Technical Field

The present disclosure relates to the technical field of unmanned driving, and in particular, to an obstacle trajectory prediction method and apparatus.
Background

At present, vehicle intelligence, as an important component of artificial intelligence technology, plays an increasingly prominent role in social production and daily life, and has become one of the main directions guiding the development of transportation technology.

When planning paths for unmanned vehicles and/or vehicles with driver-assistance functions (hereinafter collectively referred to as "vehicles"), the vehicle needs to avoid surrounding obstacles in order to travel safely. Obstacles include static obstacles and dynamic obstacles. Since static obstacles are stationary, it is easy for the vehicle to avoid them. However, for the vehicle to accurately avoid dynamic obstacles, the future trajectories of the dynamic obstacles need to be predicted.
Summary

Embodiments of the present disclosure provide an obstacle trajectory prediction method and apparatus to partially solve the above problems in the prior art.
An obstacle trajectory prediction method provided by the present disclosure includes:

monitoring one or more obstacles around a vehicle;

for each obstacle, obtaining historical state information and current state information of the obstacle;

determining, according to the historical state information and current state information of the vehicle as well as the historical state information and current state information of each obstacle, a current interaction feature under the current joint action of the vehicle and the one or more obstacles;

obtaining a future motion trajectory planned by the vehicle itself, and determining a future motion trajectory feature of the vehicle according to the future motion trajectory;

determining, according to the determined current interaction feature and the future motion trajectory feature of the vehicle, a global interaction feature under the joint action of the vehicle and the one or more obstacles;

for an obstacle to be predicted among the one or more obstacles, determining, according to the current state information of the obstacle to be predicted and the global interaction feature, an individual interaction feature of the obstacle to be predicted in the joint action;

inputting the individual interaction feature of the obstacle to be predicted and the environmental information around the vehicle into a pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted.
Optionally, determining the current interaction feature under the current joint action of the vehicle and the one or more obstacles according to the historical and current state information of the vehicle and of each obstacle includes: determining the location features of the vehicle and each obstacle according to the current state information of the vehicle and the current state information of each obstacle; obtaining hidden variables respectively corresponding to the vehicle and each obstacle according to the historical and current state information of the vehicle and of each obstacle, and determining tracking features of the vehicle and each obstacle according to the hidden variables, where the hidden variable is used to characterize the state difference of the vehicle or of each obstacle from the historical state to the current state; determining the motion feature of the vehicle according to the historical and current state information of the vehicle; and determining the current interaction feature under the current joint action of the vehicle and the one or more obstacles according to the location features, the tracking features, and the motion feature of the vehicle.
Optionally, determining the individual interaction feature of the obstacle to be predicted in the joint action according to its current state information and the global interaction feature includes: determining a feature vector corresponding to the current state information of the obstacle to be predicted as the current state vector of the obstacle to be predicted; determining a feature vector corresponding to the global interaction feature as a global interaction vector; and determining the individual interaction feature of the obstacle to be predicted in the joint action according to the vector dot product of the current state vector of the obstacle to be predicted and the global interaction vector.
Optionally, inputting the environmental information around the vehicle into the pre-trained trajectory prediction model includes: collecting an actual image of the current environment around the vehicle; determining a global environment feature according to the actual image; determining, according to the position of the obstacle to be predicted in the actual image, the position of the local environment feature corresponding to the obstacle to be predicted within the global environment feature, as a reference position; determining the environment feature corresponding to the reference position in the global environment feature as the local environment feature corresponding to the obstacle to be predicted; and inputting the determined local environment feature corresponding to the obstacle to be predicted into the pre-trained trajectory prediction model.
Optionally, determining the global environment feature according to the actual image includes: identifying each key element contained in the actual image; determining the position of each key element in the actual image; generating an abstract image corresponding to the actual image according to the positions of the key elements in the actual image and preset models respectively matching the key elements; and determining the global environment feature according to the abstract image.
Optionally, the pre-trained trajectory prediction model is a long short-term memory (LSTM) model including an encoding end and a decoding end. Inputting the individual interaction feature of the obstacle to be predicted and the environmental information around the vehicle into the pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted, includes: determining the state difference of the obstacle to be predicted from the historical state to the current state according to its historical and current state information; inputting the individual interaction feature of the obstacle to be predicted, the environmental information around the vehicle, and the state difference of the obstacle to be predicted into the encoding end, so that the encoding end outputs the hidden variable corresponding to the obstacle to be predicted; and inputting the hidden variable corresponding to the obstacle to be predicted, its individual interaction feature, the environmental information around the vehicle, and its state difference into the decoding end, so that the decoding end outputs the future motion trajectory of the obstacle to be predicted.
Optionally, obtaining the hidden variable corresponding to the vehicle according to its historical and current state information includes: determining the state difference of the vehicle from the historical state to the current state according to its historical and current state information; and inputting the individual interaction feature of the vehicle, the environmental information around the vehicle, and the state difference of the vehicle into the encoding end, so that the encoding end outputs the hidden variable corresponding to the vehicle, where the individual interaction feature of the vehicle is obtained according to the current state information of the vehicle and the global interaction feature. Obtaining the hidden variables respectively corresponding to each obstacle according to the historical and current state information of each obstacle includes: for each obstacle, determining the state difference of the obstacle from the historical state to the current state according to its historical and current state information; and inputting the individual interaction feature of the obstacle, the environmental information around the vehicle, and the state difference of the obstacle into the encoding end, so that the encoding end outputs the hidden variable corresponding to the obstacle.
An obstacle trajectory prediction apparatus provided by the present disclosure includes:

a monitoring module, configured to monitor one or more obstacles around a vehicle;

an obtaining module, configured to obtain, for each obstacle, historical state information and current state information of the obstacle;

a current interaction feature determination module, configured to determine, according to the historical and current state information of the vehicle and of each obstacle, a current interaction feature under the current joint action of the vehicle and the one or more obstacles;

a future motion trajectory feature determination module, configured to obtain a future motion trajectory planned by the vehicle itself and to determine a future motion trajectory feature of the vehicle according to the future motion trajectory;

a global interaction feature determination module, configured to determine, according to the determined current interaction feature and the future motion trajectory feature of the vehicle, a global interaction feature under the joint action of the vehicle and the one or more obstacles;

an individual interaction feature determination module, configured to determine, for an obstacle to be predicted among the one or more obstacles, an individual interaction feature of the obstacle to be predicted in the joint action according to its current state information and the global interaction feature;

a prediction module, configured to input the individual interaction feature of the obstacle to be predicted and the environmental information around the vehicle into a pre-trained trajectory prediction model, so that the trajectory prediction model outputs the future motion trajectory of the obstacle to be predicted.
The present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above obstacle trajectory prediction method.

The present disclosure provides an unmanned driving device including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the program, implements the above obstacle trajectory prediction method.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of a system architecture of an obstacle trajectory prediction method provided by an embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of an obstacle trajectory prediction method provided by an embodiment of the present disclosure;

FIG. 3 is a schematic structural diagram of an obstacle trajectory prediction apparatus provided by an embodiment of the present disclosure;

FIG. 4 is a schematic structural diagram of an unmanned driving device provided by an embodiment of the present disclosure.
Detailed Description

When predicting the future trajectory of a dynamic obstacle, in order to improve prediction accuracy, in addition to the obstacle's state information at the current moment, the influence of other interacting obstacles (and of the vehicle itself) on the obstacle also needs to be considered. Although trajectory prediction methods exist that take the interaction between the obstacle and other obstacles (and the vehicle itself) into account, such methods characterize the interaction only through the current state information of the obstacles and the vehicle. Predicting future trajectories solely from interactions based on current state information inevitably limits the accuracy of obstacle trajectory prediction.

In the present disclosure, the current interaction feature under the joint action of the vehicle and each obstacle (characterizing current interaction information) is determined from the historical and current state information of the vehicle and of each obstacle; the global interaction feature under the joint action of the vehicle and each obstacle (characterizing future interaction information) is determined from the current interaction feature and the future motion trajectory planned by the vehicle itself; and the future motion trajectory is predicted according to the global interaction feature. Since the future motion trajectory planned by the vehicle itself is known, this known trajectory can serve as prior knowledge to construct future interaction information with high credibility. Compared with using current interaction information, predicting the future motion trajectory of an obstacle through future interaction information is more accurate.
为使本公开的目的和优点更加清楚,下面将结合本公开具体实施例及相应的附图对本公开实施例进行清楚、完整地描述。显然,所描述的实施例仅是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
以下结合附图，详细说明本公开提供的各实施例。
在本公开实施例中,可采用如图1所示的系统架构对障碍物的轨迹进行预测,该系统架构主要包括交互网络和预测网络两部分。
交互网络用于针对车辆以及各障碍物，根据车辆历史状态信息和当前状态信息，确定车辆的运动特征；根据车辆的当前状态信息和各障碍物的当前状态信息，确定车辆与各障碍物的位置特征；根据车辆以及各障碍物的历史状态信息和当前状态信息，确定车辆以及各障碍物分别对应的隐变量，并根据车辆以及各障碍物分别对应的隐变量，确定车辆与各障碍物的跟踪特征；根据车辆的运动特征、车辆与各障碍物的位置特征、车辆与各障碍物的跟踪特征，确定当前交互特征；通过车辆自身规划的未来运动轨迹，确定车辆的未来运动轨迹特征；并根据当前交互特征和车辆的未来运动轨迹特征确定车辆与各障碍物的全局交互特征。通过交互网络确定全局交互特征之后，采用预测网络对各障碍物中的待预测障碍物的未来运动轨迹进行预测：首先根据待预测障碍物的当前状态信息和全局交互特征，确定个体交互特征；将确定的个体交互特征和车辆周围的环境信息输入到轨迹预测模型，以使轨迹预测模型输出该待预测障碍物的未来运动轨迹。
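为直观展示上述"交互网络→预测网络"各阶段特征的连接关系，下面给出一个纯Python的数据流示意（各特征均用list[float]占位，函数体为占位逻辑，并非本公开中由GRU/LSTM等模型实现的真实计算）：

```python
# 交互网络到预测网络的数据流示意（占位实现，非本公开的实际网络结构）。

def concat(*feats):
    # 特征拼接
    out = []
    for f in feats:
        out.extend(f)
    return out

def current_interaction_feature(position_feat, tracking_feat, motion_feat):
    # 位置特征 + 跟踪特征 + 车辆运动特征 → 当前交互特征
    # （此处仅拼接示意，真实实现中拼接结果还要经GRU进一步提取特征）
    return concat(position_feat, tracking_feat, motion_feat)

def global_interaction_feature(current_feat, future_traj_feat):
    # 当前交互特征 + 车辆未来运动轨迹特征 → 全局交互特征
    return concat(current_feat, future_traj_feat)

def individual_interaction_feature(global_feat, current_state_vec):
    # 全局交互特征 ⊙ 待预测障碍物当前状态向量 → 个体交互特征
    return [g * e for g, e in zip(global_feat, current_state_vec)]

cur = current_interaction_feature([1.0], [2.0], [3.0])
glb = global_interaction_feature(cur, [4.0])
ind = individual_interaction_feature(glb, [1.0, 0.0, 1.0, 0.5])
```

该示意仅表达各阶段输入与输出的连接顺序；各特征的具体维度与取值均为假设。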
在一些例子中,该系统框架可以在车辆的电脑上实现,由电脑的中央处理器(CPU)执行存储在存储器上的相应的程序。在另一些例子中,该系统框架可以在能够与车辆交互的终端上实现,如智能手机、智能手表、笔记本电脑、专用计算机等设备。在再一些例子中,该系统框架可以在能够与车辆交互的云端设备上实现,如服务器、云端处理器等设备。本申请对此不作限制。为简单起见,后续以该系统框架在车辆的电脑上实现为例进行说明。
下面将结合附图，对上述过程进行详细说明。图2为本公开实施例提供的一种障碍物的轨迹预测方法的流程示意图，该方法包括以下步骤：
S100:监测车辆周围的一个或多个障碍物。
车辆在行驶的过程中，周围会有与之交互的各种障碍物。为了保证车辆的安全行驶，会对车辆周围的各障碍物进行监测，获取各障碍物的状态信息等数据进行分析。其中，状态信息可为：该障碍物所处的坐标(x,y)、该障碍物的速度(v)、该障碍物的加速度(a)等。需要说明的是，车辆周围与之交互的各障碍物的数量、种类等是呈动态变化的，例如，某一时间段内与车辆交互的障碍物有a、b、c三个，而另一时间段与车辆交互的障碍物可能就变成了a、c、d、e四个，因此需要车辆对与之交互的障碍物实时监测，及时更新采集的数据。
S102:针对每个障碍物,获取该障碍物的历史状态信息和当前状态信息。
该障碍物的状态信息可通过与该障碍物交互的车辆上的设备采集,例如由车辆上安装的摄像头、雷达等设备进行采集,通过障碍物自身安装的传感器采集并通过网络发给与之交互的车辆,或通过云端设备基于该障碍物所处的位置来确定并通过网络发给车辆。其中,当前状态信息可为该障碍物在当前时刻的状态信息;历史状态信息可为该障碍物在当前时刻的前一时刻的状态信息,也可为该障碍物在当前时刻的前一时间段内多个历史时刻的状态信息。无论是当前状态信息,还是历史状态信息,均是已知信息。
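历史状态信息与当前状态信息的组织方式可以用一个滑动窗口来示意。下面是一个极简的纯Python草图（状态用(x, y, v, a)元组表示，窗口长度等参数均为演示用假设值，并非本公开规定的实现）：

```python
from collections import deque

class ObstacleTrack:
    """维护单个障碍物的状态序列：状态为(x, y, v, a)元组。
    当前状态为最新一条，历史状态为其之前的若干条（滑动窗口）。"""

    def __init__(self, history_len=5):
        # deque(maxlen=...)自动丢弃最旧状态，实现"及时更新采集的数据"
        self._states = deque(maxlen=history_len)

    def update(self, x, y, v, a):
        self._states.append((x, y, v, a))

    @property
    def current(self):
        # 当前状态信息：最新时刻的状态
        return self._states[-1]

    @property
    def history(self):
        # 历史状态信息：当前时刻之前的各时刻状态
        return list(self._states)[:-1]

track = ObstacleTrack(history_len=3)
track.update(0.0, 0.0, 1.0, 0.1)
track.update(1.0, 0.5, 1.2, 0.1)
track.update(2.1, 1.0, 1.3, 0.0)
```

其中history既可只保留前一时刻，也可保留前一时间段内多个历史时刻，对应正文中所述的两种取法。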
S104:根据车辆的历史状态信息和当前状态信息,以及每个障碍物的历史状态信息和当前状态信息,确定当前车辆与一个或多个障碍物共同作用下的当前交互特征。该当前交互特征表征当前时刻车辆与各障碍物的交互。
S106:获取车辆自身规划的未来运动轨迹,并根据未来运动轨迹确定车辆的未来运动轨迹特征。
车辆在行驶过程中，其自身规划的未来运动轨迹是已知的，该未来运动轨迹可作为一种先验知识。未来运动轨迹可为从当前时刻到下一时刻的运动轨迹，也可为从当前时刻到未来的多个时刻的运动轨迹形成的一段运动轨迹。以未来运动轨迹是从当前时刻到未来的多个时刻的运动轨迹形成的一段运动轨迹为例，将当前时刻定义为t，将未来的多个时刻分别定义为t+1、t+2、t+3，将t+1、t+2、t+3时刻车辆所处的位置坐标分别定义为P_{t+1}^{ego}、P_{t+2}^{ego}、P_{t+3}^{ego}。在获取车辆自身规划的未来运动轨迹之后，可先从P_{t+1}^{ego}、P_{t+2}^{ego}、P_{t+3}^{ego}中分别提取特征向量，将提取的各特征向量进行拼接，对拼接后的各特征向量进行最大池化处理，从而得到车辆的未来运动轨迹特征。
在一些示例中，可以使用向量化（Embedding）方法提取某一时刻的特征向量。形式上讲，Embedding是用一个低维稠密的向量表示一个对象，Embedding向量能够表达相应对象的某些特征，同时向量之间的距离反映了对象之间的相似性。
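下面用一个纯Python示例演示"对各未来位置提取特征向量→拼接→最大池化"得到未来运动轨迹特征的过程。其中embed仅是Embedding的玩具化线性映射，权重矩阵W与各位置坐标均为演示用假设值，并非本公开的实际实现：

```python
def embed(point, W):
    """用一个2×d权重矩阵W把二维坐标映射为d维特征向量（Embedding的极简示意）。"""
    x, y = point
    return [x * w0 + y * w1 for w0, w1 in zip(W[0], W[1])]

def max_pool(vectors):
    """最大池化：对若干同维特征向量逐维取最大值。"""
    return [max(dims) for dims in zip(*vectors)]

W = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]                       # 假设的2×3权重，仅作演示

# P_{t+1}^{ego}、P_{t+2}^{ego}、P_{t+3}^{ego}：车辆规划轨迹上的未来位置（假设值）
future_positions = [(1.0, 2.0), (2.0, 2.5), (3.0, 1.0)]

feats = [embed(p, W) for p in future_positions]  # 逐时刻提取特征向量
future_traj_feature = max_pool(feats)            # 池化得到未来运动轨迹特征
```

真实系统中Embedding权重由训练得到，且不同特征可使用不同的权重矩阵（与下文位置特征的说明一致）。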
S108:根据所确定的当前交互特征和车辆的未来运动轨迹特征,确定车辆与一个或多个障碍物共同作用下的全局交互特征。
通过将当前交互特征对应的特征向量与车辆的未来运动轨迹特征对应的特征向量拼接,可以得到车辆与各障碍物共同作用下的全局交互特征。如前所述,当前交互特征结合了历史状态信息和当前状态信息,并且用于表征当前时刻车辆与各障碍物的交互。全局交互特征在当前交互特征的基础上,增加了车辆自身的未来运动轨迹,结合了当前状态信息以及预测的未来状态信息,在一定程度上能够表征未来时刻车辆与各障碍物的交互。
S110:针对各障碍物中的待预测障碍物,根据该待预测障碍物的当前状态信息和所确定的全局交互特征,确定该待预测障碍物在共同作用中的个体交互特征。其中,所述待预测障碍物为各障碍物中的任一障碍物。
在对待预测障碍物的未来运动轨迹进行预测时,由于距离待预测障碍物较近的车辆或障碍物的交互特征对该待预测障碍物的轨迹预测影响较大,距离待预测障碍物较远的车辆或障碍物的交互特征对该待预测障碍物的轨迹预测影响较小,因此,仅通过全局交互特征对待预测障碍物的未来运动轨迹进行预测具有局限性。本公开实施例根据待预测障碍物的当前状态信息和全局交互特征,确定了待预测障碍物在车辆及各障碍物共同作用中的个体交互特征,该个体交互特征为全局交互特征的一部分,在一定程度上能够表征未来时刻待预测障碍物周边的交互信息,该交互信息包含了待预测障碍物的未来状态信息。
S112:将该待预测障碍物的个体交互特征和车辆周围的环境信息输入预先训练的轨迹预测模型,以使轨迹预测模型输出该待预测障碍物的未来运动轨迹。
待预测障碍物的环境信息不易获取,但是车辆周围的环境信息容易获取。例如,可通过与该待预测障碍物产生交互的车辆周围的环境信息来表征该待预测障碍物的环境信息。
在一些示例中，还可将该待预测障碍物的当前状态信息、待预测障碍物的个体交互特征和车辆周围的环境信息输入到预先训练的轨迹预测模型，以使轨迹预测模型输出该待预测障碍物的未来运动轨迹。需要说明的是，在对待预测障碍物的未来运动轨迹进行预测时，既可以根据当前时刻的状态信息，预测下一时刻的状态信息；还可以根据当前时间段（包含多个时刻）的状态信息，预测未来时间段（包含多个时刻）的状态信息（即，一段运动轨迹）。
在图2的S104中，可根据车辆的当前状态信息以及每个障碍物的当前状态信息，确定车辆与各障碍物的位置特征。具体的，继续沿用上例，可根据车辆的当前状态信息提取特征向量。其中，该车辆的当前状态信息可通过当前t时刻车辆所处的位置坐标表征，将该位置坐标定义为P_t^{ego}。同样的，也可根据每个障碍物的当前状态信息提取特征向量，其中，每个障碍物的当前状态信息也可以用该障碍物所处的位置坐标表征。可用1、2、3……n代表各障碍物，将各障碍物的位置坐标依次定义为P_t^1、P_t^2、P_t^3……P_t^n。可将车辆与各障碍物的当前状态信息对应的特征向量拼接，即将P_t^{ego}、P_t^1、P_t^2、P_t^3……P_t^n对应的特征向量拼接，将拼接后的特征向量进行最大池化处理，得到车辆与各障碍物的位置特征。
在一些示例中,可以使用Embedding方法得到各状态信息对应的特征向量。在计算不同的特征向量时,所使用的权重矩阵可能不同。
在一些示例中,可根据车辆的历史状态信息和当前状态信息,以及每个障碍物的历史状态信息和当前状态信息,获取车辆和每个障碍物分别对应的隐变量;并根据车辆和每个障碍物分别对应的隐变量,确定车辆与各障碍物的跟踪特征。其中,隐变量用于表征车辆或每个障碍物从历史状态到当前状态的状态差异。由于车辆及不同的障碍物从历史状态到当前状态的状态差异均不同,因此,隐变量能够从一定程度上表征车辆及各障碍物的跟踪信息。如上所述,车辆周围与之交互的各障碍物的数量是呈动态变化的,因此,若通过对各障碍物采用编号的方式进行跟踪,无疑费时费力且跟踪效果较差。本公开实施例通过采用隐变量,无需对各障碍物进行编号便可得知与车辆交互的各障碍物之间的时序运动信息,即跟踪信息。通过该跟踪信息辅助轨迹预测,能够提高轨迹预测的准确性。另外,跟踪特征的确定方式与上述位置特征的确定方式相似,即先针对各隐变量提取特征向量,对提取的各特征向量进行拼接,并对拼接后的特征向量进行最大池化处理,便可得到跟踪特征,此处不再赘述。
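隐变量表征"从历史状态到当前状态的状态差异"这一点，可以用一个极简草图来示意：对每个交互主体计算状态差异作为其"隐变量"，再经最大池化汇聚为跟踪特征，全程无需为障碍物编号。注意真实实现中隐变量由预测网络的编码端输出，这里直接用逐维相减代替，仅用于演示数据流，数值均为假设：

```python
def state_delta(hist, curr):
    """状态差异：当前状态减去历史状态（逐维）。
    仅为隐变量的极简示意；真实隐变量由编码端模型输出。"""
    return [c - h for h, c in zip(hist, curr)]

def max_pool(vectors):
    # 逐维最大池化，对主体集合做与顺序无关的汇聚（因此无需编号）
    return [max(dims) for dims in zip(*vectors)]

# 车辆与各障碍物的(历史, 当前)状态，这里用(x, y)坐标示意（假设数据）
agents = [
    ([0.0, 0.0], [1.0, 0.0]),   # 车辆
    ([5.0, 1.0], [5.5, 1.2]),   # 障碍物1
    ([2.0, 3.0], [2.0, 2.0]),   # 障碍物2
]
latents = [state_delta(h, c) for h, c in agents]   # 各主体的"隐变量"
tracking_feature = max_pool(latents)               # 车辆与各障碍物的跟踪特征
```

由于池化对输入顺序不敏感，障碍物集合发生增减时同一段代码仍然适用，这正是免编号跟踪的要点。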
为了整体结构的简洁,在一些示例中,可以使用预测网络中的模型来得到各个隐变量。
一般来说，上述的位置坐标信息均采用世界坐标系确定，因此本公开实施例适用于世界坐标系场景。本公开实施例还可以适用于车辆坐标系（即，以车辆自身为中心建立坐标系）。具体地，在对障碍物未来运动轨迹进行预测时，可根据车辆的历史状态信息和当前状态信息，确定车辆的运动特征。车辆的运动特征表征了车辆从历史状态到当前状态的状态差异，在车辆坐标系中对障碍物的未来运动轨迹进行预测时，可参照车辆的运动特征确定车辆坐标系中的各位置坐标信息。
通过上述的方式确定了位置特征、跟踪特征、车辆的运动特征后，可将位置特征、跟踪特征、车辆的运动特征输入到门控循环单元（Gated Recurrent Unit，GRU）以进一步提取特征，最终获得当前车辆与各障碍物共同作用下的当前交互特征。也可将GRU替换为长短期记忆模型（Long Short-Term Memory，LSTM）或其他模型来提取特征，本公开实施例对此不作限制。
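作为示意，下面用纯Python实现一个单步GRU单元，演示将位置特征、跟踪特征与车辆运动特征拼接后经门控单元融合为当前交互特征的计算方式。各维度取1、权重取固定小值，均为演示用假设；真实实现中权重由训练得到，且通常直接调用深度学习框架的GRU：

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(M, v):
    # 矩阵-向量乘
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def gru_step(x, h, Wz, Wr, Wh):
    """单步GRU：z=σ(Wz·[x;h])，r=σ(Wr·[x;h])，
    h~=tanh(Wh·[x;r⊙h])，h'=(1-z)⊙h+z⊙h~（省略偏置项）。"""
    xh = x + h
    z = [sigmoid(a) for a in matvec(Wz, xh)]            # 更新门
    r = [sigmoid(a) for a in matvec(Wr, xh)]            # 重置门
    xrh = x + [ri * hi for ri, hi in zip(r, h)]
    h_tilde = [math.tanh(a) for a in matvec(Wh, xrh)]   # 候选隐状态
    return [(1 - zi) * hi + zi * hti
            for zi, hi, hti in zip(z, h, h_tilde)]

# 输入 = [位置特征, 跟踪特征, 车辆运动特征] 拼接（每路取1维，仅作示意）
x = [0.2, -0.1, 0.4]
h0 = [0.0]                     # 初始隐状态，1维
Wz = [[0.5, 0.5, 0.5, 0.5]]    # 1×4假设权重
Wr = [[0.3, 0.3, 0.3, 0.3]]
Wh = [[1.0, 1.0, 1.0, 1.0]]
current_feat = gru_step(x, h0, Wz, Wr, Wh)  # 当前交互特征（示意）
```

把gru_step换成LSTM单元即对应正文中"将GRU替换为LSTM"的变体。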
在图2的S110中，确定个体交互特征的方式可以包括：确定与该待预测障碍物的当前状态信息对应的特征向量，作为该待预测障碍物的当前状态向量e_t；确定与全局交互特征对应的特征向量，作为全局交互向量fst_t；根据该待预测障碍物的当前状态向量e_t与全局交互向量fst_t的向量点乘，确定该待预测障碍物在共同作用中的个体交互特征q_t。即，q_t=fst_t⊙e_t，其中，q_t表示个体交互特征对应的特征向量，⊙表示向量点乘。
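q_t=fst_t⊙e_t的计算可以直接写出。按正文中⊙的用法，这里将其实现为对应元素相乘（输出仍是向量，作为全局交互特征的一部分），各向量取值均为假设：

```python
def individual_interaction(fst_t, e_t):
    """q_t = fst_t ⊙ e_t：按对应元素相乘，从全局交互向量中
    取出与该待预测障碍物相关的分量；两向量维度须一致。"""
    assert len(fst_t) == len(e_t)
    return [f * e for f, e in zip(fst_t, e_t)]

fst_t = [0.8, -0.2, 1.5, 0.0]   # 全局交互向量（假设值）
e_t   = [1.0,  0.0, 0.5, 2.0]   # 待预测障碍物当前状态向量（假设值）
q_t = individual_interaction(fst_t, e_t)
```

e_t中接近0的分量会抑制全局交互向量中与该障碍物无关的维度，这与"距离较远的交互影响较小"的动机一致。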
在图2的S112中，可采集车辆周围当前环境的实际图像，并根据该实际图像，确定全局环境特征；根据该待预测障碍物在实际图像中的位置，确定该待预测障碍物对应的局部环境特征在全局环境特征中的位置，作为参考位置；确定该全局环境特征中参考位置对应的环境特征，作为该待预测障碍物对应的局部环境特征。将确定的该待预测障碍物对应的局部环境特征输入到预先训练的轨迹预测模型，能够进一步提高轨迹预测的准确性。上述确定局部环境特征的方式具体可通过ROI Align（Region of Interest Align）技术实现。另外，若采用俯视图的方式采集实际图像，会使得环境信息的有效性更强。
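"以参考位置为中心、从全局环境特征中裁出局部环境特征"的操作可以用如下纯Python草图示意。真实实现中ROI Align使用双线性插值在浮点坐标处采样，这里用整数最近邻裁剪近似代替，特征图与坐标均为假设值：

```python
def local_env_feature(global_feat, row, col, k=1):
    """以待预测障碍物在特征图中的(row, col)为参考位置，
    裁出(2k+1)×(2k+1)的局部环境特征；越界处取最近边界值。
    （ROI Align的整数近似示意，省略了双线性插值。）"""
    h, w = len(global_feat), len(global_feat[0])
    clamp = lambda v, hi: min(max(v, 0), hi - 1)
    return [
        [global_feat[clamp(r, h)][clamp(c, w)]
         for c in range(col - k, col + k + 1)]
        for r in range(row - k, row + k + 1)
    ]

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]                      # 假设的3×3全局环境特征图
patch = local_env_feature(grid, 0, 0)   # 参考位置位于左上角
```

深度学习框架中可用现成的ROI Align算子（如torchvision.ops.roi_align）替换此近似。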
在一些示例中,还可将实际图像转换成抽象图像,去掉实际图像中一些无关的要素,比如周边的树木、房屋等,只保留关键要素,比如公路线路图、交通路线、交通信号灯等信息。简化的信息能够提高预测效率。将实际图像转换成抽象图像的方式可以包括:识别实际图像中包含的各关键要素,确定各关键要素在实际图像中所处的位置;针对每个关键要素,根据该关键要素在实际图像中所处的位置,以及与该关键要素相匹配的预设模型,生成实际图像对应的抽象图像。将所生成的抽象图像输入到预先训练的环境模型,以使环境模型根据抽象图像输出全局环境特征。
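"只保留关键要素、丢弃无关要素"的抽象图像生成步骤可以草绘如下。其中关键要素类型、编码约定与输入格式（(类型, 行, 列)三元组）均为演示用假设；真实实现中关键要素来自图像识别结果，并按与之匹配的预设模型绘制：

```python
KEY_CODES = {"lane": 1, "route": 2, "traffic_light": 3}  # 关键要素→编码（假设约定）

def to_abstract(elements, h, w):
    """识别结果→抽象图像：只保留公路线路、交通路线、交通信号灯等
    关键要素，丢弃树木、房屋等无关要素。
    elements为[(类型, 行, 列), ...]，是假设的识别输出格式。"""
    canvas = [[0] * w for _ in range(h)]
    for kind, r, c in elements:
        if kind in KEY_CODES:            # 非关键要素直接忽略
            canvas[r][c] = KEY_CODES[kind]
    return canvas

detected = [("lane", 0, 0), ("tree", 0, 1),
            ("traffic_light", 1, 2), ("house", 1, 0)]   # 假设的识别结果
abstract = to_abstract(detected, 2, 3)
```

得到的抽象图像再送入环境模型（如CNN）提取全局环境特征。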
通过上述的方式获取了待预测障碍物的个体交互特征和车辆周围的环境信息之后，可将所述个体交互特征和环境信息输入到预先训练的轨迹预测模型，以使轨迹预测模型输出待预测障碍物的未来运动轨迹。其中，预先训练的轨迹预测模型可为包含编码端和解码端的LSTM模型。可根据该待预测障碍物的历史状态信息和当前状态信息，确定该待预测障碍物从历史状态到当前状态的状态差异。将该待预测障碍物的个体交互特征、车辆周围的环境信息和该待预测障碍物从历史状态到当前状态的状态差异输入编码端，以使编码端输出该待预测障碍物对应的隐变量。将该待预测障碍物对应的隐变量、该待预测障碍物的个体交互特征、车辆周围的环境信息和该待预测障碍物从历史状态到当前状态的状态差异输入解码端，以使解码端输出该待预测障碍物的未来运动轨迹。
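编码端与解码端之间的输入/输出关系可以用如下纯Python草图示意。encode与decode的函数体均为占位逻辑（真实实现是训练好的LSTM编码端与解码端），各输入数值均为假设，示例只演示数据如何流动：

```python
def state_delta(hist, curr):
    # 待预测障碍物从历史状态到当前状态的状态差异（逐维相减）
    return [c - h for h, c in zip(hist, curr)]

def encode(q_t, env, diff):
    """编码端示意：拼接个体交互特征、环境特征、状态差异后
    压缩为1维"隐变量"（占位逻辑，真实实现为LSTM编码端）。"""
    x = q_t + env + diff
    return [sum(x) / len(x)]

def decode(z, q_t, env, diff, curr_pos, steps=3):
    """解码端示意：以隐变量、个体交互特征、环境特征、状态差异
    为条件，从当前位置逐步外推未来位置（占位逻辑）。"""
    traj, (x, y) = [], curr_pos
    for _ in range(steps):
        x, y = x + diff[0] + z[0], y + diff[1]
        traj.append((x, y))
    return traj

q_t  = [0.5, 0.5]                       # 个体交互特征（假设值）
env  = [0.1]                            # 车辆周围环境信息的特征（假设值）
hist, curr = [0.0, 0.0], [1.0, 0.5]     # 历史/当前位置（假设值）
diff = state_delta(hist, curr)
z = encode(q_t, env, diff)              # 编码端 → 隐变量
future_traj = decode(z, q_t, env, diff, curr_pos=(1.0, 0.5))
```

注意编码端与解码端共享同样的三路条件输入，解码端额外接收编码端输出的隐变量，这与正文的输入列表一一对应。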
在一些示例中,可根据车辆的历史状态信息和当前状态信息,获取车辆对应的隐变量。具体的,可根据车辆自身的历史状态信息和当前状态信息,确定车辆从历史状态到当前状态的状态差异;将车辆的个体交互特征、车辆周围的环境信息、车辆从历史状态到当前状态的状态差异输入到编码端,以使编码端输出车辆对应的隐变量;其中,车辆的个体交互特征根据车辆的当前状态信息和全局交互特征获得。
在一些示例中,可根据每个障碍物的历史状态信息和当前状态信息,获取每个障碍物分别对应的隐变量。具体的,针对每个障碍物,可根据该障碍物的历史状态信息和当前状态信息,确定该障碍物从历史状态到当前状态的状态差异;将该障碍物的个体交互特征、车辆周围的环境信息、该障碍物从历史状态到当前状态的状态差异输入到编码端,以使编码端输出该障碍物对应的隐变量;其中,该障碍物的个体交互特征根据该障碍物的当前状态信息和全局交互特征获得。
在本公开实施例中,轨迹预测模型可为LSTM模型,环境模型可为卷积神经网络(Convolutional Neural Networks,CNN)模型。轨迹预测模型和环境模型也可以采用其他模型,本公开实施例对此不作限制。
本公开实施例通过车辆及各障碍物的历史状态信息和当前状态信息确定车辆与各障碍物的当前交互特征（通过历史和当前来表征当前交互信息）。在当前交互特征的基础上，加入了车辆自身规划的未来运动轨迹作为先验知识，得到全局交互特征（通过当前和未来表征未来交互信息）。通过全局交互特征和待预测障碍物的当前状态信息，确定个体交互特征（即，全局交互特征的一部分，表征待预测障碍物周围的未来交互信息），并基于此，对待预测障碍物的未来运动轨迹进行预测。本公开实施例通过全局交互特征来表征车辆和各障碍物之间的未来交互信息，在对待预测障碍物的轨迹进行预测时，不仅考虑了车辆和各障碍物之间的交互对待预测障碍物未来运动轨迹的影响，还参考了车辆自身规划的未来运动轨迹。由于车辆自身规划的未来运动轨迹是已知的，所以在本公开中可作为一种先验知识，在一定程度上能够表征车辆和各障碍物未来的交互。通过该种方式，使得预测的未来运动轨迹更接近实际轨迹。当处于交通状况较为复杂的环境中时，也能对障碍物的未来运动轨迹进行更准确的预测。
本公开实施例提供的障碍物的轨迹预测方法能够预测出障碍物在未来如何行驶,便于车辆准确避障。该方法还能够为车辆自身规划路径提供修正参考,即,先通过车辆自身规划的未来运动轨迹作为先验知识,采用该先验知识辅助障碍物预测未来运动轨迹,再通过各障碍物的未来运动轨迹对车辆自身规划的未来运动轨迹(即,先验知识)进行修正,使得车辆自身的路径规划更准确。所述轨迹预测方法还可应用于其他领域,本公开实施例对此不作限制。
本公开提供的上述障碍物的轨迹预测方法具体可用于针对无人车的路径规划或者无人车的避障。无人车可以为无人配送车,该无人配送车可以应用于使用无人配送车进行配送的领域,如,使用无人配送车进行快递、外卖等配送的场景。具体的,在上述的场景中,可使用多个无人配送车所构成的自动驾驶车队进行配送。所述方法可以应用于例如上述无人车的自动驾驶设备中,或应用于与自动驾驶设备通信的服务器或云计算设备中。
以上为本公开实施例提供的障碍物的轨迹预测方法,基于同样的思路,本公开还提供了相应的装置、存储介质和无人驾驶设备。
图3为本公开实施例提供的一种障碍物的轨迹预测装置的结构示意图,所述装置包括:
监测模块200,用于监测车辆周围的各障碍物;
获取模块202,用于针对每个障碍物,获取该障碍物的历史状态信息和当前状态信息;
当前交互特征确定模块204,用于根据所述车辆的历史状态信息和当前状态信息,以及每个障碍物的历史状态信息和当前状态信息,确定当前所述车辆与所述一个或多个障碍物共同作用下的当前交互特征;
未来运动轨迹特征确定模块206,用于获取所述车辆自身规划的未来运动轨迹,并根据所述未来运动轨迹确定所述车辆的未来运动轨迹特征;
全局交互特征确定模块208,用于根据所确定的所述当前交互特征和所述车辆的未来运动轨迹特征,确定所述车辆与所述一个或多个障碍物共同作用下的全局交互特征;
个体交互特征确定模块210,用于针对所述一个或多个障碍物中的待预测障碍物,根据该待预测障碍物的当前状态信息和所述全局交互特征,确定该待预测障碍物在所述共同作用中的个体交互特征;
预测模块212,用于将该待预测障碍物的个体交互特征和所述车辆周围的环境信息输入预先训练的轨迹预测模型,以使所述轨迹预测模型输出该待预测障碍物的未来运动轨迹。
可选的,所述当前交互特征确定模块204,用于根据所述车辆的当前状态信息以及每个障碍物的当前状态信息,确定所述车辆与每个障碍物的位置特征;根据所述车辆的历史状态信息和当前状态信息,以及所述每个障碍物的历史状态信息和当前状态信息,获取所述车辆和所述每个障碍物分别对应的隐变量,并根据所述车辆和所述每个障碍物分别对应的隐变量,确定所述车辆与每个障碍物的跟踪特征,其中,所述隐变量用于表征所述车辆或所述每个障碍物从历史状态到当前状态的状态差异;根据所述车辆的历史状态信息和当前状态信息,确定所述车辆的运动特征;根据所述位置特征、所述跟踪特征、所述车辆的运动特征,确定当前所述车辆与所述一个或多个障碍物共同作用下的当前交互特征。
可选的,所述个体交互特征确定模块210,用于确定与该待预测障碍物的当前状态信息对应的特征向量,作为该待预测障碍物的当前状态向量;确定与所述全局交互特征对应的特征向量,作为全局交互向量;根据该待预测障碍物的当前状态向量与所述全局交互向量的向量点乘,确定该待预测障碍物在所述共同作用中的所述个体交互特征。
可选的,所述预测模块212,用于采集所述车辆周围当前环境的实际图像;根据所述实际图像,确定全局环境特征;根据该待预测障碍物在所述实际图像中的位置,确定该待预测障碍物对应的局部环境特征在所述全局环境特征中的位置,作为参考位置;确定所述全局环境特征中所述参考位置对应的环境特征,作为该待预测障碍物对应的局部环境特征;将所确定的该待预测障碍物对应的局部环境特征输入到所述预先训练的轨迹预测模型。
可选的,所述预测模块212,还用于识别所述实际图像中包含的各关键要素;确定各关键要素在所述实际图像中所处的位置;根据各关键要素在所述实际图像中所处的位置,以及与各关键要素分别匹配的预设模型,生成所述实际图像对应的抽象图像;根据所述抽象图像,确定所述全局环境特征。
可选的,所述预先训练的轨迹预测模型为包含编码端和解码端的LSTM模型。所述预测模块212,还用于根据该待预测障碍物的历史状态信息和当前状态信息,确定该待预测障碍物从历史状态到当前状态的状态差异;将该待预测障碍物的个体交互特征、所述车辆周围的环境信息和该待预测障碍物从历史状态到当前状态的状态差异输入所述编码端,以使所述编码端输出该待预测障碍物对应的隐变量;将该待预测障碍物对应的隐变量、该待预测障碍物的个体交互特征、所述车辆周围的环境信息和该待预测障碍物从历史状态到当前状态的状态差异输入所述解码端,以使所述解码端输出该待预测障碍物的未来运动轨迹。
可选的,所述当前交互特征确定模块204,还用于根据所述车辆的历史状态信息和当前状态信息,确定所述车辆从历史状态到当前状态的状态差异;将所述车辆的个体交互特征、所述车辆周围的环境信息、所述车辆从历史状态到当前状态的状态差异输入到所述编码端,以使所述编码端输出所述车辆对应的隐变量;其中,所述车辆的个体交互特征根据所述车辆的当前状态信息和所述全局交互特征获得。
可选的,所述当前交互特征确定模块204,还用于针对每个障碍物,根据该障碍物的历史状态信息和当前状态信息,确定该障碍物从历史状态到当前状态的状态差异;将该障碍物的个体交互特征、所述车辆周围的环境信息、该障碍物从历史状态到当前状态的状态差异输入到所述编码端,以使所述编码端输出该障碍物对应的隐变量;其中,该障碍物的个体交互特征根据该障碍物的当前状态信息和所述全局交互特征获得。
本公开还提供了一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序被控制器执行时,可促使所述控制器实现如上述图2所示的障碍物的轨迹预测方法。
基于图2所示的障碍物的轨迹预测方法,本公开实施例还提供了图4所示的无人驾驶设备的结构示意图。如图4,在硬件层面,该无人驾驶设备包括处理器、内部总线、网络接口、内存以及非易失性存储器,还可能包括其他业务所需要的硬件。处理器从非易失性存储器中读取对应的指令到内存中然后运行,以实现上述图2所述的障碍物的轨迹预测方法。
除了软件实现方式之外,本公开并不排除其他实现方式,比如逻辑器件抑或软硬件结合的方式等等,也就是说以下处理流程的执行主体并不限定于各个逻辑单元,也可以是硬件或逻辑器件。
在20世纪90年代,对于一个技术的改进可以很明显地区分是硬件上的改进(例如,对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。然而,随着技术的发展,当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此,不能说一个方法流程的改进就不能用硬件实体模块来实现。例如,可编程逻辑器件(Programmable Logic Device,PLD)(例如现场可编程门阵列(Field Programmable Gate Array,FPGA))就是这样一种集成电路,其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上,而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。如今,取代手工地制作集成电路芯片,这种编程也多半改用“逻辑编译器(logic compiler)”软件来实现,它与程序开发撰写时所用的软件编译器相类似,而要编译之前的原始代码也得用特定的编程语言来撰写,此称之为硬件描述语言(Hardware Description Language,HDL),而HDL也并非仅有一种,而是有许多种,如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等,目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚,只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中,就可以很容易得到实现该逻辑方法流程的硬件电路。
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC 625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。
上述实施例阐明的系统、装置、模块或单元，具体可以由计算机芯片或实体实现，或者由具有某种功能的产品来实现。一种典型的实现设备为计算机。具体的，计算机例如可以为个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。在实施本公开时可以把各单元的功能集成在同一个或多个软件和/或硬件中实现。
本领域内的技术人员应明白,本公开的实施例可提供为方法、系统、或计算机程序产品。因此,本公开可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本公开可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本公开是参照根据本公开实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图中的一个流程或多个流程和/或方框图中的一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图中的一个流程或多个流程和/或方框图中的一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图中的一个流程或多个流程和/或方框图中的一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/ 或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带、磁带、磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
本公开可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本公开,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本公开中的各个实施例均采用递进的方式描述,各个实施例之间相同或相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本公开的实施例而已,并不用于限制本公开。对于本领域技术人员来说,本公开可以有各种更改和变化。凡在本公开的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本公开的权利要求范围之内。

Claims (10)

  1. 一种障碍物的轨迹预测方法,其特征在于,包括:
    监测车辆周围的一个或多个障碍物;
    针对每个障碍物,获取该障碍物的历史状态信息和当前状态信息;
    根据所述车辆的历史状态信息和当前状态信息,以及每个障碍物的历史状态信息和当前状态信息,确定当前所述车辆与所述一个或多个障碍物共同作用下的当前交互特征;
    获取所述车辆自身规划的未来运动轨迹,并根据所述未来运动轨迹确定所述车辆的未来运动轨迹特征;
    根据所确定的所述当前交互特征和所述车辆的未来运动轨迹特征,确定所述车辆与所述一个或多个障碍物共同作用下的全局交互特征;
    针对所述一个或多个障碍物中的待预测障碍物,根据该待预测障碍物的当前状态信息和所述全局交互特征,确定该待预测障碍物在所述共同作用中的个体交互特征;
    将该待预测障碍物的个体交互特征和所述车辆周围的环境信息输入预先训练的轨迹预测模型,以使所述轨迹预测模型输出该待预测障碍物的未来运动轨迹。
  2. 如权利要求1所述的方法,其特征在于,根据所述车辆的历史状态信息和当前状态信息,以及所述每个障碍物的历史状态信息和当前状态信息,确定当前所述车辆与所述一个或多个障碍物共同作用下的当前交互特征包括:
    根据所述车辆的当前状态信息以及每个障碍物的当前状态信息,确定所述车辆与每个障碍物的位置特征;
    根据所述车辆的历史状态信息和当前状态信息,以及所述每个障碍物的历史状态信息和当前状态信息,获取所述车辆和所述每个障碍物分别对应的隐变量,并根据所述车辆和所述每个障碍物分别对应的隐变量,确定所述车辆与每个障碍物的跟踪特征,其中,所述隐变量用于表征所述车辆或所述每个障碍物从历史状态到当前状态的状态差异;
    根据所述车辆的历史状态信息和当前状态信息,确定所述车辆的运动特征;
    根据所述位置特征、所述跟踪特征、所述车辆的运动特征,确定当前所述车辆与所述一个或多个障碍物共同作用下的当前交互特征。
  3. 如权利要求1所述的方法,其特征在于,根据该待预测障碍物的当前状态信息和所述全局交互特征,确定该待预测障碍物在所述共同作用中的个体交互特征包括:
    确定与该待预测障碍物的当前状态信息对应的特征向量,作为该待预测障碍物的当前状态向量;
    确定与所述全局交互特征对应的特征向量,作为全局交互向量;
    根据该待预测障碍物的当前状态向量与所述全局交互向量的向量点乘,确定该待预测障碍物在所述共同作用中的所述个体交互特征。
  4. 如权利要求1所述的方法,其特征在于,将所述车辆周围的环境信息输入预先训练的轨迹预测模型包括:
    采集所述车辆周围当前环境的实际图像;
    根据所述实际图像,确定全局环境特征;
    根据该待预测障碍物在所述实际图像中的位置,确定该待预测障碍物对应的局部环境特征在所述全局环境特征中的位置,作为参考位置;
    确定所述全局环境特征中所述参考位置对应的环境特征,作为该待预测障碍物对应的局部环境特征;
    将所确定的该待预测障碍物对应的局部环境特征输入到所述预先训练的轨迹预测模型。
  5. 如权利要求4所述的方法,其特征在于,根据所述实际图像,确定全局环境特征包括:
    识别所述实际图像中包含的各关键要素;
    确定各关键要素在所述实际图像中所处的位置;
    根据各关键要素在所述实际图像中所处的位置,以及与各关键要素分别匹配的预设模型,生成所述实际图像对应的抽象图像;
    根据所述抽象图像,确定所述全局环境特征。
  6. 如权利要求2所述的方法,其特征在于,所述预先训练的轨迹预测模型为包含编码端和解码端的长短期记忆模型LSTM;
    将该待预测障碍物的个体交互特征和所述车辆周围的环境信息输入所述预先训练的轨迹预测模型,以使所述轨迹预测模型输出该待预测障碍物的未来运动轨迹包括:
    根据该待预测障碍物的历史状态信息和当前状态信息,确定该待预测障碍物从历史状态到当前状态的状态差异;
    将该待预测障碍物的个体交互特征、所述车辆周围环境信息和该待预测障碍物的所述状态差异输入所述编码端,以使所述编码端输出该待预测障碍物对应的隐变量;
    将该待预测障碍物对应的隐变量、该待预测障碍物的个体交互特征、所述车辆周围的环境信息和该待预测障碍物的所述状态差异输入所述解码端,以使所述解码端输出该待预测障碍物的未来运动轨迹。
  7. 如权利要求6所述的方法,其特征在于,根据所述车辆的历史状态信息和当前状态信息,获取所述车辆对应的隐变量包括:
    根据所述车辆的历史状态信息和当前状态信息,确定所述车辆从历史状态到当前状态的状态差异;
    将所述车辆的个体交互特征、所述车辆周围的环境信息、所述车辆的所述状态差异输入到所述编码端,以使所述编码端输出所述车辆对应的所述隐变量;
    其中,所述车辆的个体交互特征是根据所述车辆的当前状态信息和所述全局交互特征获得的;
    根据每个障碍物的历史状态信息和当前状态信息,获取每个障碍物分别对应的隐变量包括:
    针对每个障碍物,根据该障碍物的历史状态信息和当前状态信息,确定该障碍物从历史状态到当前状态的状态差异;
    将该障碍物的个体交互特征、所述车辆周围的环境信息、该障碍物从历史状态到当前状态的状态差异输入到所述编码端,以使所述编码端输出该障碍物对应的隐变量。
  8. 一种障碍物的轨迹预测装置,其特征在于,包括:
    监测模块,用于监测车辆周围的各障碍物;
    获取模块,用于针对每个障碍物,获取该障碍物的历史状态信息和当前状态信息;
    当前交互特征确定模块,用于根据所述车辆的历史状态信息和当前状态信息,以及每个障碍物的历史状态信息和当前状态信息,确定当前所述车辆与各障碍物共同作用下的当前交互特征;
    未来运动轨迹特征确定模块,用于获取所述车辆自身规划的未来运动轨迹,并根据所述未来运动轨迹确定所述车辆的未来运动轨迹特征;
    全局交互特征确定模块,用于根据所确定的所述当前交互特征和所述车辆的未来运动轨迹特征,确定所述车辆与各障碍物共同作用下的全局交互特征;
    个体交互特征确定模块,用于针对各障碍物中的待预测障碍物,根据该待预测障碍物的当前状态信息和所确定的所述全局交互特征,确定该待预测障碍物在所述共同作用中的个体交互特征;
    预测模块,用于将该待预测障碍物的个体交互特征和所述车辆周围的环境信息输入预先训练的轨迹预测模型,以使所述轨迹预测模型输出该待预测障碍物的未来运动轨迹。
  9. 一种计算机可读存储介质，其特征在于，所述存储介质存储有计算机程序，所述计算机程序被控制器执行时，可促使所述控制器实现如上述权利要求1-7中任一项所述的障碍物的轨迹预测方法。
  10. 一种无人驾驶设备,包括:
    处理器;以及
    用于存储可由所述处理器执行的指令的存储器,其中,所述指令在被执行时,促使所述处理器实现上述权利要求1-7中任一项所述的障碍物的轨迹预测方法。
PCT/CN2021/082310 2020-03-23 2021-03-23 一种障碍物的轨迹预测方法及装置 WO2021190484A1 (zh)




