WO2021204092A1 - Trajectory prediction method, device, equipment, and storage medium - Google Patents

Trajectory prediction method, device, equipment, and storage medium

Info

Publication number
WO2021204092A1
Authority
WO
WIPO (PCT)
Prior art keywords
moving object
trajectory
candidate
end point
information
Prior art date
Application number
PCT/CN2021/085448
Other languages
English (en)
French (fr)
Inventor
方良骥
蒋沁宏
石建萍
Original Assignee
商汤集团有限公司
本田技研工业株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 商汤集团有限公司, 本田技研工业株式会社 filed Critical 商汤集团有限公司
Priority to JP2022519830A priority Critical patent/JP7338052B2/ja
Publication of WO2021204092A1 publication Critical patent/WO2021204092A1/zh
Priority to US17/703,268 priority patent/US20220212693A1/en

Classifications

    • B60W60/0011: Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
    • B60W60/0027: Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • G01C21/3492: Special cost functions, i.e. other than distance or default speed limit of road segments, employing speed data or traffic data, e.g. real-time or historical
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • B60W2420/403: Image sensing, e.g. optical camera
    • B60W2552/05: Type of road
    • B60W2554/4029: Pedestrians
    • B60W2555/60: Traffic rules, e.g. speed limits or right of way
    • B60W2556/45: External transmission of data to or from the vehicle

Definitions

  • the embodiments of the present disclosure relate to the field of automatic driving technology, and relate to, but are not limited to, a trajectory prediction method, device, equipment, and storage medium.
  • the embodiments of the present disclosure provide a trajectory prediction method, device, equipment, and storage medium.
  • The embodiment of the present disclosure provides a trajectory prediction method, which is applied to an electronic device, and the method includes:
  • determining the position information of a reference end point of a moving object according to the position information of the moving object; determining, according to the position information of the moving object and the position information of the reference end point, a candidate trajectory set including a plurality of candidate trajectories, wherein the position information of the end points of at least two of the candidate trajectories is different from the position information of the reference end point;
  • determining the target trajectory of the moving object from the candidate trajectory set.
  • the location information of the moving object includes: time-series location information of the moving object, or the historical trajectory of the moving object.
  • the candidate trajectory of the moving object can be predicted based on the historical trajectory and time-series position information of the moving object.
  • the reference end point includes points other than a preset restriction type, where the preset restriction type includes at least one of the following: road edge points, obstacles, and pedestrians.
  • The determining the location information of the reference end point of the moving object according to the location information of the moving object includes: acquiring the environment information of the moving object according to the location information of the moving object, where the environment information includes at least one of the following: road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, and other moving object information; and determining the position information of the reference end point of the moving object according to the environment information. In this way, by combining the environment information around the moving object with the position information of the moving object, the position information of the reference end point of the moving object can be accurately predicted.
  • The acquiring environmental information of the mobile object according to the position information of the mobile object includes: determining the environmental information according to the image information collected by the mobile object; and/or determining the environmental information according to communication information that characterizes the current environment and is received by the mobile object. In this way, by analyzing the communication information and image information of the moving object, points included in the preset restriction type can be excluded from the reference end point, thereby obtaining the reference end point of the moving object.
  • The determining the location information of the reference end point of the moving object according to the location information of the moving object includes: determining at least one reference route of the moving object according to the location information of the moving object; and determining the location information of the reference end point according to the reference route. In this way, the accuracy of the predicted reference end point can be improved.
  • The determining the location information of the reference end point according to the reference route includes: determining a drivable area of the reference route; and determining the position information of the reference end point of the moving object in the drivable area. In this way, the effectiveness of the drivable area on each reference route is improved.
  • The determining the location information of the reference end point of the moving object according to the location information of the moving object includes: determining the intersection information of the road where the moving object is located according to the location information of the moving object; and in response to the intersection information indicating that there are at least two intersections, determining the position information of multiple reference end points of the moving object, wherein the reference end points of different intersections are different. In this way, missing reference end points can be reduced, thereby improving the accuracy of the determined target trajectory.
  • The determining the target trajectory of the moving object according to the candidate trajectory set includes: determining the confidence of each candidate trajectory in the candidate trajectory set; and determining the target trajectory of the moving object from the candidate trajectory set according to the driving information of the moving object and the confidence. In this way, an optimal trajectory is selected from the multiple candidate trajectories as the target trajectory for the moving object to travel, so as to more accurately estimate the future motion trajectory of the moving object.
  • Before determining the target trajectory of the moving object from the candidate trajectory set according to the driving information of the moving object and the confidence, the method further includes: determining a trajectory parameter correction value of at least one candidate trajectory in the candidate trajectory set; adjusting the candidate trajectories in the candidate trajectory set according to the trajectory parameter correction value to obtain an updated candidate trajectory set; and determining the target trajectory of the moving object from the updated candidate trajectory set according to the driving information of the moving object and the confidence. In this way, the candidate trajectories are adjusted according to the trajectory parameter correction value to improve the rationality of the obtained target trajectory.
  • The determining the target trajectory of the moving object from the updated candidate trajectory set according to the driving information of the moving object and the confidence includes: determining the drivable area of the moving object according to the environmental information of the moving object and/or the control information of the moving object; and determining the target trajectory of the moving object from the updated candidate trajectory set according to the drivable area and the confidence. In this way, the purpose of screening candidate trajectories can be achieved.
  • The determining the drivable area of the mobile object according to the environmental information of the mobile object and/or the control information of the mobile object includes: determining the predicted drivable area of the moving object according to the environmental information of the moving object; and adjusting the predicted drivable area according to the control information of the moving object to obtain the drivable area. In this way, the predicted drivable area is narrowed by the control information of the moving object, and a more accurate drivable area is obtained.
  • The determining the target trajectory of the moving object from the updated candidate trajectory set according to the drivable area and the confidence includes: determining the candidate trajectories in the updated candidate trajectory set that are included in the drivable area to obtain a set of target trajectories to be determined; and determining the trajectory with the highest confidence, or a trajectory whose confidence is greater than a preset confidence threshold, in the set of target trajectories to be determined as the target trajectory. In this way, the candidate trajectory with the greatest confidence in the to-be-determined target trajectory set is used as the target trajectory, which sufficiently improves the accuracy of the predicted target trajectory of the vehicle.
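  • The selection step just described can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: it assumes each candidate trajectory is a sequence of (x, y) points with an associated confidence score, and that a caller-supplied predicate in_drivable_area reports whether a point lies inside the drivable area; all names are hypothetical.

    from typing import Callable, List, Sequence, Tuple

    Point = Tuple[float, float]

    def select_target_trajectory(
        candidates: List[Sequence[Point]],
        confidences: List[float],
        in_drivable_area: Callable[[Point], bool],
        confidence_threshold: float = 0.0,
    ) -> Sequence[Point]:
        """Keep candidates lying fully inside the drivable area, then pick the most confident one."""
        pending = [
            (traj, conf)
            for traj, conf in zip(candidates, confidences)
            if all(in_drivable_area(p) for p in traj) and conf > confidence_threshold
        ]
        if not pending:
            raise ValueError("no candidate trajectory lies inside the drivable area")
        # The highest-confidence trajectory in the to-be-determined set becomes the target trajectory.
        return max(pending, key=lambda item: item[1])[0]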
  • The determining a candidate trajectory set including a plurality of candidate trajectories according to the position information of the moving object and the position information of the reference end point includes: determining M estimated end points in a preset area including the reference end point; and correspondingly generating M×N candidate trajectories according to the position information of the moving object, the M estimated end points, and N preset distances to obtain the candidate trajectory set, wherein the preset distance is used to indicate the distance from the midpoint of the line between the last sampling point in the position information of the moving object and the reference end point to the candidate trajectory, and M and N are both integers greater than 0. In this way, multiple candidate trajectories can be fitted through multiple pre-estimation points and estimated end points.
  • The determining M estimated end points in the preset area including the reference end point includes: determining the preset area of the reference end point according to the width of the road where the reference end point is located; dividing the preset area of the reference end point into M grids of predetermined size; and using the centers of the M grids as the M estimated end points. In this way, using the centers of the M grids as the M estimated end points can improve the accuracy of predicting the possible end points of the candidate trajectories.
  • The step of correspondingly generating M×N candidate trajectories according to the position information of the moving object, the M estimated end points, and N preset distances to obtain the candidate trajectory set includes: determining the midpoint between the last sampling point in the position information of the moving object and the reference end point; determining N pre-estimation points according to the N preset distances and the midpoint; generating M×N candidate trajectories according to the N pre-estimation points and the M estimated end points; and screening the M×N candidate trajectories according to the environmental information to obtain the candidate trajectory set. In this way, by setting constraint conditions, the trajectories that do not meet the constraint conditions among the M×N candidate trajectories are eliminated, and a more accurate candidate trajectory set is obtained.
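  • A minimal sketch of this generation step is given below, under assumptions not stated in the patent: quadratic Bezier curves are used as the curve family, the pre-estimation points are taken along the normal of the start-to-end segment at the preset distances, and all names (generate_candidates, history, est_end_points, preset_distances) are hypothetical.

    import numpy as np

    def generate_candidates(history, est_end_points, preset_distances, n_samples=20):
        """Sketch of the M x N generation: for each estimated end point and each preset
        distance, fit one curve from the last observed point to that end point."""
        start = np.asarray(history[-1], dtype=float)       # last sampling point of the history
        candidates = []
        for end in est_end_points:                         # M estimated end points
            end = np.asarray(end, dtype=float)
            mid = 0.5 * (start + end)                      # midpoint of the start-end segment
            normal = np.array([-(end - start)[1], (end - start)[0]])
            normal /= np.linalg.norm(normal) + 1e-9        # unit normal to the segment
            for d in preset_distances:                     # N preset distances
                pre_point = mid + d * normal               # pre-estimation point at distance d
                t = np.linspace(0.0, 1.0, n_samples)[:, None]
                # Quadratic Bezier through start and end, passing through the pre-estimation point at t = 0.5.
                control = 2.0 * pre_point - 0.5 * (start + end)
                curve = (1 - t) ** 2 * start + 2 * (1 - t) * t * control + t ** 2 * end
                candidates.append(curve)
        return candidates                                  # M x N candidate trajectories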
  • The determining the position information of the reference end point of the moving object according to the position information of the moving object includes: predicting the candidate end point of the moving object according to the position information of the moving object through a neural network; and determining the position information of the reference end point of the moving object according to the candidate end point. In this way, using a trained neural network to predict the reference end point of a moving object can not only improve the accuracy of the prediction but also speed up the prediction.
  • The predicting the candidate end point of the moving object based on the location information of the moving object through a neural network includes: inputting the location information of the moving object into a first neural network to predict a first candidate end point of the moving object; and the determining the position information of the reference end point of the moving object according to the candidate end point includes: determining the position information of the reference end point of the moving object according to the first candidate end point and the environment information of the moving object. In this way, a predicted first candidate end point that overlaps with a pedestrian or a sidewalk, or exceeds the edge of the road, is adjusted to obtain the position information of the reference end point with higher accuracy.
  • The predicting the candidate end point of the mobile object based on the position information of the mobile object through a neural network includes: inputting the position information and environment information of the mobile object into a second neural network to predict a second candidate end point of the mobile object; and the determining the location information of the reference end point of the moving object according to the candidate end point includes: determining the location information of the reference end point of the moving object according to the second candidate end point and the environment information.
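  • As a purely illustrative sketch of such an end-point network (not the architecture disclosed in the patent), the position history can be flattened into a fixed-length vector, concatenated with an environment feature vector, and mapped to a two-dimensional candidate end point; the layer sizes and the names EndPointPredictor, history_len, and env_dim are assumptions.

    import torch
    import torch.nn as nn

    class EndPointPredictor(nn.Module):
        """Illustrative second-network variant: position history plus environment features
        are concatenated and mapped to a 2-D candidate end point."""
        def __init__(self, history_len: int = 10, env_dim: int = 16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(history_len * 2 + env_dim, 64),
                nn.ReLU(),
                nn.Linear(64, 64),
                nn.ReLU(),
                nn.Linear(64, 2),          # (x, y) of the candidate end point
            )

        def forward(self, history: torch.Tensor, env: torch.Tensor) -> torch.Tensor:
            x = torch.cat([history.flatten(1), env], dim=1)
            return self.mlp(x)

    # Usage: a batch of 4 objects, 10 past (x, y) samples each, 16 environment features.
    net = EndPointPredictor()
    end_points = net(torch.randn(4, 10, 2), torch.randn(4, 16))   # shape (4, 2)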
  • The neural network training method includes: inputting the position information of the moving object, and/or the position information of the moving object and the road image collected by the moving object, into the neural network to obtain a first predicted end point; determining, according to the ground-truth trajectory of the moving object, a first prediction loss of the neural network with respect to the first predicted end point; and adjusting the network parameters of the neural network according to the first prediction loss to train the neural network.
  • The first prediction loss is used to adjust parameters such as the weights of the neural network, so that the prediction result of the adjusted neural network is more accurate.
  • The training method of the neural network includes: inputting the location information of the moving object and the map information corresponding to the location information into the neural network to obtain a second predicted end point; determining, according to the ground-truth trajectory of the moving object, a second prediction loss of the neural network with respect to the second predicted end point; determining the deviation between the second predicted end point and a preset constraint condition; adjusting the second prediction loss according to the deviation to obtain a third prediction loss; and adjusting the network parameters of the neural network according to the third prediction loss to train the neural network. In this way, the accuracy of the target trajectory output by the neural network is higher.
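  • The following sketch illustrates one plausible form of this constraint-adjusted loss; it is an assumption-laden example, not the patent's formula. It assumes the ground-truth end point is the last point of the ground-truth trajectory and that the deviation from the preset constraint condition is available as a non-negative scalar per sample; constrained_endpoint_loss and penalty_weight are hypothetical names.

    import torch
    import torch.nn.functional as F

    def constrained_endpoint_loss(pred_end, gt_trajectory, constraint_deviation, penalty_weight=1.0):
        """Second prediction loss plus a deviation term gives the third prediction loss.

        pred_end:             (B, 2) predicted end points
        gt_trajectory:        (B, T, 2) ground-truth trajectories; the last point is the true end point
        constraint_deviation: (B,) non-negative deviation from the preset constraint
                              (0 when the constraint is satisfied)
        """
        gt_end = gt_trajectory[:, -1, :]                    # ground-truth end point
        second_loss = F.mse_loss(pred_end, gt_end)          # loss w.r.t. the second predicted end point
        third_loss = second_loss + penalty_weight * constraint_deviation.mean()
        return third_loss

    # The third prediction loss is then backpropagated to adjust the network parameters, e.g.:
    #   loss = constrained_endpoint_loss(net(inputs), gt, deviation)
    #   loss.backward(); optimizer.step()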
  • An embodiment of the present disclosure provides a trajectory prediction device, the device includes: a reference end point prediction module configured to determine the position information of the reference end point of the moving object according to the position information of the moving object; the candidate trajectory determination module is configured to The position information of the moving object and the position information of the reference end point determine a candidate trajectory set including a plurality of candidate trajectories; wherein the position information of the end points of at least two of the candidate trajectories is different from the position information of the reference end point; The target trajectory determining module is configured to determine the target trajectory of the moving object from the set of candidate trajectories.
  • an embodiment of the present disclosure provides a computer storage medium having computer-executable instructions stored thereon, and the computer-executable instructions can implement the above-mentioned method steps after being executed.
  • The embodiments of the present disclosure provide a computer device, the computer device includes a memory and a processor, the memory stores computer-executable instructions, and the processor can implement the above-mentioned method steps when running the computer-executable instructions stored in the memory.
  • The embodiments of the present disclosure provide a computer program, including computer-readable code; when the computer-readable code is run in an electronic device, the processor in the electronic device is configured to implement the trajectory prediction method described in any one of the above items.
  • the embodiments of the present disclosure provide a method, device, equipment, and storage medium for trajectory prediction.
  • The reference end point of the moving object is predicted according to the position information of the current position of the moving object; then, according to the reference end point and the historical trajectory, the candidate trajectory set consisting of multiple candidate trajectories of the moving object is determined; finally, the target trajectory of the moving object is determined from the candidate trajectory set. In this way, by considering the position information of the moving object, the reference end point of the moving object is predicted, the multiple candidate trajectories that the moving object may travel are inferred, and an optimal trajectory is selected from the multiple candidate trajectories as the target trajectory for the moving object to travel, so as to more accurately estimate the future motion trajectory of the moving object.
  • FIG. 1A is a schematic diagram of a system architecture to which a trajectory prediction method according to an embodiment of the present disclosure can be applied;
  • FIG. 1B is a schematic diagram of an implementation process of a trajectory prediction method according to an embodiment of the disclosure.
  • FIG. 1C is a schematic diagram of an implementation process of a trajectory prediction method according to an embodiment of the present disclosure;
  • FIG. 2A is a schematic diagram of another implementation process of a trajectory prediction method according to an embodiment of the present disclosure;
  • FIG. 2B is a schematic diagram of the implementation process of a neural network training method according to an embodiment of the disclosure;
  • FIG. 3A is a schematic diagram of the implementation structure of a candidate trajectory network according to an embodiment of the disclosure;
  • FIG. 3B is a schematic diagram of the implementation structure of a candidate trajectory network according to an embodiment of the disclosure;
  • FIG. 4A is a schematic structural diagram of candidate trajectory generation in an embodiment of the disclosure;
  • FIG. 4B is a schematic diagram of a process of generating candidate trajectories on multiple reference routes according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram of the structural composition of a trajectory prediction device according to an embodiment of the disclosure.
  • FIG. 6 is a schematic diagram of the composition structure of a computer device according to an embodiment of the disclosure.
  • This embodiment proposes a method for trajectory prediction to be applied to computer equipment.
  • the computer equipment may include moving objects or non-moving objects.
  • The functions implemented by this method can be implemented by a processor in the computer equipment invoking program code, and the program code can be stored in a computer storage medium. It can be seen that the computer device at least includes a processor and a storage medium.
  • FIG. 1A is a schematic diagram of a system architecture to which the trajectory prediction method of the embodiment of the present disclosure can be applied; as shown in FIG. 1A, the system architecture includes: a vehicle terminal 131, a network 132 and a trajectory prediction terminal 133.
  • The vehicle terminal 131 and the trajectory prediction terminal 133 can establish a communication connection through the network 132, and the vehicle terminal 131 reports location information to the trajectory prediction terminal 133 through the network 132 (or the trajectory prediction terminal 133 automatically obtains the location information of the vehicle terminal 131).
  • The trajectory prediction terminal 133 determines the position information of the reference end point of the vehicle; then, multiple candidate trajectories are predicted through the position information of the vehicle and the position information of the reference end point; finally, the trajectory prediction terminal 133 selects the target trajectory of the vehicle from the multiple candidate trajectories.
  • the vehicle terminal 131 may include an in-vehicle image acquisition device, and the trajectory prediction terminal 133 may include an in-vehicle vision processing device or a remote server with visual information processing capabilities.
  • the network 132 may adopt a wired connection or a wireless connection.
  • In the case that the trajectory prediction terminal 133 is a vehicle-mounted visual processing device, the vehicle terminal 131 can communicate with the vehicle-mounted visual processing device through a wired connection, such as data communication through a bus; in the case that the trajectory prediction terminal 133 is a remote server, the vehicle terminal can interact with the remote server through a wireless network.
  • the vehicle terminal 131 may be a vehicle-mounted visual processing device with a vehicle-mounted image acquisition module, and is specifically implemented as a vehicle-mounted host with a camera.
  • the trajectory prediction method of the embodiment of the present disclosure may be executed by the vehicle terminal 131, and the foregoing system architecture may not include a network and a trajectory prediction terminal.
  • FIG. 1B is a schematic diagram of the implementation process of the trajectory prediction method according to the embodiment of the present disclosure; the method is described below with reference to FIG. 1B:
  • Step S101 Determine the position information of the reference end point of the moving object according to the position information of the moving object.
  • The moving objects include: motor vehicles with various functions, vehicles with various numbers of wheels, robots, aircraft, guide devices for the blind, smart home equipment, smart toys, and the like. A vehicle is taken as an example for illustration below.
  • the reference end point includes points outside the preset restriction type, where the preset restriction type includes at least one of the following: road edge points, obstacles, and pedestrians. That is, the reference destination does not include points on the edge of the road, points on the road where there are obstacles, points on the road where there are pedestrians, and so on.
  • the location information of the moving object includes: time sequence location information of the moving object, or the historical trajectory of the moving object. Determining the reference end point can be achieved in the following ways.
  • For example, the reference end point is predicted from the location information of the moving object; or, a road coded image is used as the network input and combined with the position information of the moving object to predict the reference end point; then, the predicted reference end point is constrained, for example by setting a specific area, so that only points that fall in the specific area can be determined as the reference end point.
  • the method of determining the reference end point in step S101 may be any method, as long as it is a method determined based on the position information of the moving object.
  • a method of determining an end point by location information and a machine learning model such as feature learning or reinforcement learning can be used.
  • the neural network can input at least the position information of the moving object and output the end point or the trajectory including the end point.
  • the end point can be determined by using the location information and environmental information around the moving object as the input of the neural network.
  • the position information of the moving object can be used as the input of the neural network, and the output of the neural network and the environmental information around the moving object can be used to determine the end point.
  • the location information and environmental information around the moving object can be used as the input of the neural network, and the output of the neural network and the environmental information around the moving object can be used to determine the reference end point.
  • The trajectory of the moving object can be determined as the output of the neural network, the determined candidate trajectory can be adjusted based on the environmental information around the moving object so that the candidate trajectory does not overlap with pedestrians or sidewalks, and the reference end point is then determined from the optimized candidate trajectory.
  • a method using position information and a kinematic model of the moving object can be adopted.
  • the position information, the motion model of the moving object, and the environment information around the moving object can be used to determine the reference end point.
  • The step S101 can also be implemented in the following two ways. One is: first, the position information of the vehicle (for example, the historical trajectory) is sampled to obtain a set of sampling points; then, feature extraction is performed on the sampling set using a preset neural network; finally, the extracted features are input into the fully connected layer of the preset neural network to obtain the reference end point.
  • The second is: by sampling the location information of the vehicle and combining it with the current running speed of the vehicle, the reference end point of the vehicle within a preset time period can be predicted.
  • For example, the location information may be the vehicle's trajectory in a preset time period ending at the current time, for example, the trajectory within the last 3 seconds; then, the historical trajectory within the 3 seconds is sampled with a step length of 0.3 seconds; finally, the obtained sampling points are used as prior information to predict the reference end point of the moving object.
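  • A minimal sketch of this sampling step is shown below; it assumes the historical trajectory is available as timestamped (t, x, y) records and uses linear interpolation, which the patent does not specify, and the function name sample_history is hypothetical.

    import numpy as np

    def sample_history(times, xs, ys, window=3.0, step=0.3):
        """Resample the last `window` seconds of a trajectory at a fixed `step`.

        times, xs, ys: 1-D arrays of timestamps and coordinates, oldest first.
        Returns an (n, 2) array of sampled (x, y) points used as prior information.
        """
        t_end = times[-1]                                  # the current time ends the window
        sample_times = np.arange(t_end - window, t_end + 1e-9, step)
        sampled_x = np.interp(sample_times, times, xs)     # linear interpolation between records
        sampled_y = np.interp(sample_times, times, ys)
        return np.stack([sampled_x, sampled_y], axis=1)

    # Example: 3 s of history sampled every 0.3 s gives 11 prior points.
    t = np.linspace(0.0, 3.0, 31)
    points = sample_history(t, np.cos(t), np.sin(t))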
  • Step S102 Determine a candidate trajectory set including multiple candidate trajectories according to the position information of the moving object and the position information of the reference end point.
  • The position information of the end points of at least two of the candidate trajectories is different from the position information of the reference end point. That is to say, the candidate trajectory set includes some candidate trajectories (for example, one candidate trajectory) whose end-point position information is the same as the position information of the reference end point, and also includes some candidate trajectories whose end-point position information is different from the position information of the reference end point. In this way, determining multiple candidate trajectories within the tolerance range not only makes the determined multiple candidate trajectories reasonable, but also enriches the diversity of candidate trajectories; the target trajectory can then be filtered out from the rich candidate trajectories, improving the accuracy of the predicted target trajectory.
  • The above-mentioned step S102 can be implemented by the following process: first, M estimated end points are determined in a preset area including the reference end point, where the M estimated end points include the reference end point; then, according to the historical running trajectory, the M estimated end points, and N preset distances, M×N candidate trajectories are correspondingly generated to obtain the candidate trajectory set; wherein the preset distance is used to indicate the distance from the midpoint of the line between the last sampling point in the historical running trajectory and the end point to the candidate trajectory, and M and N are both integers greater than 0.
  • the candidate trajectory set is a curve set containing multiple candidate trajectories.
  • the trajectory parameters of the candidate trajectory include: the coordinates of the estimated end point and the distance between the midpoint of the candidate trajectory and the line between the last sampling point of the historical trajectory and the estimated end point.
  • The correction value of the trajectory parameter includes the correction values of the coordinates of the estimated end point and of the distance, and the curve shape of the candidate trajectory is adjusted based on the correction value to make the adjusted candidate trajectory more reasonable.
  • Step S103 Determine the target trajectory of the moving object from the set of candidate trajectories.
  • a target trajectory is selected from a plurality of candidate trajectories according to the confidence level and driving information of each candidate trajectory, so that the moving object travels along the target trajectory.
  • In this way, the problem in the related art that the output is discrete coordinate points corresponding to the vehicle's future time series can be effectively solved; using discrete coordinate points to represent the future trajectory of the vehicle makes it difficult to reflect the future driving trend of the vehicle, which has little effect in practical applications.
  • By predicting the reference end point of the moving object, the multiple candidate trajectories that the moving object may travel are inferred, and an optimal trajectory is selected from the multiple candidate trajectories as the target trajectory of the moving object, thereby more accurately estimating the future trajectory of the moving object.
  • the location information of the reference end point of the moving object can be predicted by combining the environment information around the moving object and the location information of the moving object, which can be achieved through the following steps:
  • the environment information of the mobile object is acquired according to the position information of the mobile object.
  • For example, the environmental information around the historical trajectory is obtained, such as road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, or other moving object information around the historical trajectory. Road information includes at least: current road conditions (for example, congestion conditions), road width, and road intersection information (for example, whether it is an intersection); obstacle information includes: whether there are roadblocks or other obstacles on the current road; pedestrian information includes whether there are pedestrians on the road and the location of pedestrians; traffic light information includes at least: the number of traffic lights installed on the road, whether the traffic lights are working normally, and the type and duration of the light currently on; traffic rule information includes at least: whether the current road is right-hand or left-hand drive, whether it is one-way or two-way, the types of vehicles that can pass on the road, and so on.
  • The obtaining the environment information of the mobile object according to the location information of the mobile object can be implemented in the following manners:
  • Manner 1 Determine the environmental information according to the image information collected by the moving object.
  • For example, images around the historical trajectory of the moving object are first collected through the camera configured on the moving object (for example, by image acquisition of the environment around the moving object) to obtain image information, and then the environment around the moving object is obtained by analyzing the image content.
  • In this way, the road information, obstacle information, pedestrian information, traffic light information, and other information around the moving object are learned, and the position information of the possible reference end points of the moving object can be predicted through a comprehensive analysis of this information.
  • Then, points included in the preset restriction type are excluded from the possible reference end points, so as to obtain the reference end point of the moving object.
  • Manner 2 Determine the environment information according to the communication information that characterizes the current environment received by the mobile object.
  • A mobile object uses a communication device to receive communication information that characterizes the current environment sent by other devices, and obtains environmental information by analyzing the communication information; wherein the communication information includes at least the environmental parameters of the location of the mobile device, for example, road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, or other moving object information.
  • the location information of the reference end point of the moving object is determined according to the environment information.
  • the position information of the reference end point of the moving object can be determined through the following steps:
  • the first step is to determine the intersection information of the road where the moving object is located according to the position information of the moving object.
  • For example, the intersection information of the road ahead along which the moving object continues to run according to the historical trajectory is determined, where the intersection information includes: the number of intersections, the branches of each intersection, and so on.
  • In a case where the intersection information indicates that there are at least two intersections, the position information of multiple reference end points of the moving object is determined.
  • the reference end point on the road corresponding to each intersection is determined respectively, that is, the position information of the possible reference end point is predicted on the road corresponding to each intersection, And the reference end points of different intersections are different.
  • For example, the road where the moving object is located is at a crossroads. First, the three branches of the crossroads other than the direction opposite to the moving direction of the moving object are determined, and then the position information of the reference end points is predicted on the roads corresponding to these three branches. In this way, multiple reference end points are predicted, and the target end point with the greatest confidence is then selected from the multiple reference end points, so that missed reference end points are reduced and the accuracy of the determined target trajectory is improved.
  • the step S101 may be implemented through the following steps:
  • At least one reference route of the moving object is determined according to the position information of the moving object.
  • the current road conditions included in the location information of the moving object and whether it is at an intersection are input into the neural network, and multiple reference routes are predicted. For example, if the location information indicates that the moving object is on a straight one-way road, then there is one reference route, which is a route on the one-way road in the moving direction of the moving object; if the location information indicates that the moving object is at a T-shaped intersection, then there are three reference routes. Each is the route on each road in the T-junction along the moving direction of the moving object. If the location information indicates that the moving object is at an intersection, then there are four reference routes, which are the routes on each road in the intersection along the moving direction of the moving object. In this way, combined with the position information of the moving object, a number of reference routes that the moving object may travel may be predicted by comprehensive consideration.
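  • As a purely illustrative sketch of the rule described above (not code from the patent), the road type inferred from the location information can be mapped to the headings of the reference routes; the function name and the heading convention are assumptions.

    def reference_route_headings(road_type: str):
        """Headings of the reference routes for the road types described above.

        Headings are relative to the moving direction: 0 = straight ahead,
        +90 = left turn, -90 = right turn, 180 = opposite direction (degrees).
        """
        table = {
            "one_way_straight": [0],           # one reference route
            "t_junction": [0, 90, -90],        # three reference routes
            "crossroads": [0, 90, -90, 180],   # four reference routes, as stated above
        }
        return table[road_type]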
  • the second step is to determine the location information of the reference destination according to the reference route.
  • a most likely future driving route of the moving object is determined, and on the reference route, the position information of the reference end point of the moving object is determined.
  • For example, the roadblock information and road edges of the reference route are determined, that is, the obstacles on each reference route, such as pedestrians, faulty vehicles, or roadblocks, are determined.
  • the drivable area of the reference route is determined according to the roadblock information and the road edge of the reference route.
  • the drivable area on the reference route is delineated.
  • For example, the area within the road edge of the reference route and without roadblocks on the road is regarded as the drivable area, which improves the effectiveness of the drivable area on each reference route.
  • the position information of the reference end point of the moving object in the drivable area is determined.
  • it may be based on the historical trajectory of the moving object to predict the reference end point of the moving object in the drivable area of the reference route.
  • the reference end point of the reference road in the drivable area is predicted based on the historical trajectory of the moving object.
  • the above provides a way to predict the end point of the running trajectory.
  • After the network predicts the end point of the trajectory, candidate trajectories are generated based on the end point predicted in the first step. Each candidate trajectory corresponds to an end point, and the end point of a candidate trajectory cannot go beyond the road or lie in places with obstacles (such as pedestrians); in this way, the effectiveness of the predicted reference end point is improved.
  • a set of candidate trajectories on the reference route is determined.
  • In this way, the candidate trajectory set on each reference route is obtained, and the target trajectory of the moving object is determined from the candidate trajectory sets on the at least one reference route.
  • the candidate trajectory that finally meets the constraint conditions is determined as the most likely driving trajectory of the moving object, that is, the target driving trajectory.
  • the embodiment of the present disclosure provides a trajectory prediction method, which is applied to a moving object, such as a vehicle, as an example.
  • The method shown in FIG. 1C is described below:
  • Step S111 Predict the reference end point of the moving object according to the historical trajectory of the moving object.
  • Step S112 Determine M estimated end points within a preset area that includes the reference end point.
  • M estimated end points are determined within a preset area of the reference route that includes the reference end point.
  • the preset area is the area around the reference end point, for example, a square with a side length of 100m with the reference end point as the center, and then the square is divided into a plurality of square grids according to a step length of 5m.
  • the center of the grid is the estimated end point.
  • For example, the area of the road surface within the road edge of the reference route that contains the reference end point is taken as the preset area. In a specific example, the road width is 4 meters, and the area including the reference end point, 4 meters in width and 100 meters in length, is taken as the preset area; then, the preset area of the reference end point is divided into M grids of predetermined size, and the centers of the M grids are used as the M estimated end points, for example, grids of the same size with a grid size of 10 cm. In this way, by taking the centers of the M grids as the M estimated end points, that is, the possible end points of the candidate trajectories, the possible end points of the candidate trajectories on each reference road are obtained.
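  • A minimal sketch of this grid discretization, using the numbers mentioned above (a 4 m by 100 m road segment around the reference end point and 10 cm cells), is shown below; it assumes an axis-aligned area for simplicity, and the function name grid_end_points is hypothetical.

    import numpy as np

    def grid_end_points(ref_end, width=4.0, length=100.0, cell=0.10):
        """Divide the preset area around `ref_end` into square cells and return the
        cell centres as the M estimated end points (axis-aligned sketch)."""
        x0, y0 = ref_end
        xs = np.arange(x0 - length / 2 + cell / 2, x0 + length / 2, cell)   # along the road
        ys = np.arange(y0 - width / 2 + cell / 2, y0 + width / 2, cell)     # across the road
        gx, gy = np.meshgrid(xs, ys)
        return np.stack([gx.ravel(), gy.ravel()], axis=1)                   # shape (M, 2)

    # With a 4 m x 100 m area and 10 cm cells, M = 40 * 1000 = 40000 grid centres.
    centres = grid_end_points((0.0, 0.0))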
  • Step S113 According to the position information of the moving object, the M estimated end points and N preset distances, M×N candidate trajectories are correspondingly generated to obtain the candidate trajectory set.
  • The preset distance is used to indicate the distance from the midpoint of the line between the last sampling point in the position information of the moving object and the reference end point to the candidate trajectory, where M and N are both integers greater than zero.
  • For example, firstly, the midpoint between the last sampling point in the position information of the moving object and the reference end point is determined; secondly, N pre-estimation points are determined based on the N preset distances and the midpoint. A pre-estimation point is a point on the candidate trajectory: because the preset distance is the distance from the midpoint of the line between the last sampling point in the position information of the moving object (for example, the historical trajectory) and the reference end point to the candidate trajectory, the N preset distances correspond to N pre-estimation points.
  • Then, according to the N pre-estimation points and the M estimated end points, M×N candidate trajectories are generated; that is, based on the N pre-estimation points and one estimated end point, N candidate trajectories can be obtained by fitting, and based on the N pre-estimation points and the M estimated end points, M×N candidate trajectories can be fitted.
  • Finally, the M×N candidate trajectories are screened according to the environmental information to obtain the candidate trajectory set. The environmental information can be obtained from the image; for example, if an obstacle is detected in the image, a candidate trajectory cannot pass through the obstacle. Road information can also be obtained from the image and used when setting the candidate trajectories.
  • When generating candidate trajectories, surrounding environment information is considered, constraint conditions are set, and the trajectories that do not meet the constraint conditions among the M×N candidate trajectories are eliminated to obtain the candidate trajectory set; for example, candidate trajectories that pass through obstacles are eliminated.
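  • A minimal sketch of this screening step is given below; it assumes the obstacles detected in the image have already been converted to circular exclusion zones in the same coordinate frame as the trajectories, which is an assumption, and the function name screen_candidates is hypothetical.

    import numpy as np

    def screen_candidates(candidates, obstacles):
        """Eliminate trajectories that violate the constraint of not passing through obstacles.

        candidates: list of (n, 2) arrays of trajectory points.
        obstacles:  list of (centre_x, centre_y, radius) circles derived from the environment.
        """
        kept = []
        for traj in candidates:
            hits = any(
                np.any(np.hypot(traj[:, 0] - cx, traj[:, 1] - cy) < r)
                for cx, cy, r in obstacles
            )
            if not hits:
                kept.append(traj)      # the candidate satisfies the constraint conditions
        return kept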
  • The above steps S112 and S113 give a way to "determine a candidate trajectory set consisting of multiple candidate trajectories of the moving object". In this way, multiple estimated end points, i.e. possible end points of the candidate trajectories, are determined around the reference end point, and multiple candidate trajectories are then obtained by fitting based on the estimated end points and the preset distances. Using a curve representation to predict the target trajectory of the vehicle can reflect the trend of the trajectory, is more robust to noise, and has strong scalability.
  • Step S114 Determine the trajectory parameter correction value of at least one candidate trajectory in the candidate trajectory set.
  • In step S114, the trajectory parameter correction value of each candidate trajectory can be output based on a trained neural network; the neural network trained by the training method mentioned below can be used, but the method is not limited thereto.
  • the trajectory parameters may include parameters used to describe the trajectory curve.
  • The trajectory parameters may include, but are not limited to, the coordinates of the end points of the trajectory curve, and/or the distance between the midpoint of the trajectory curve and the line connecting the two end points of the trajectory curve, and so on.
  • the candidate trajectory is adjusted according to the trajectory parameter correction value to improve the rationality of the obtained target trajectory.
  • The correction value may include, but is not limited to: the adjustment value of the coordinates of the end point of the trajectory curve, and/or the adjustment value of the distance between the midpoint of the trajectory curve and the line connecting the two end points of the trajectory curve.
  • the trajectory parameter correction value may be determined by a neural network trained in an embodiment of the present disclosure, or may be determined by a neural network trained in other ways.
  • Step S115 Adjust the candidate trajectories in the candidate trajectory set according to the trajectory parameter correction value to obtain an updated candidate trajectory set.
  • each candidate in the set of candidate trajectories is modified to obtain multiple modified candidate trajectories, that is, an updated set of candidate trajectories.
  • the candidate trajectory is corrected based on the correction value output by the trained neural network, thereby improving the accuracy of the candidate trajectory in the updated candidate set.
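  • The following sketch illustrates one way such correction values could be applied to a single candidate, assuming the candidate is parameterised by its end-point coordinates and by the offset of its midpoint from the start-end chord (as in the generation sketch earlier), and that the network outputs a delta for each; the names and the quadratic-curve re-fit are assumptions, not the patent's formulation.

    import numpy as np

    def apply_correction(start, end, offset, d_end, d_offset, n_samples=20):
        """Re-fit one candidate trajectory after correcting its parameters.

        start:    last observed point of the moving object
        end:      current estimated end point of the candidate
        offset:   current distance of the curve midpoint from the start-end chord
        d_end:    (dx, dy) correction to the end-point coordinates
        d_offset: correction to the midpoint offset
        """
        start = np.asarray(start, dtype=float)
        new_end = np.asarray(end, dtype=float) + np.asarray(d_end, dtype=float)
        new_offset = offset + d_offset
        chord_mid = 0.5 * (start + new_end)
        direction = new_end - start
        normal = np.array([-direction[1], direction[0]])
        normal /= np.linalg.norm(normal) + 1e-9
        pre_point = chord_mid + new_offset * normal        # corrected pre-estimation point
        t = np.linspace(0.0, 1.0, n_samples)[:, None]
        control = 2.0 * pre_point - 0.5 * (start + new_end)
        return (1 - t) ** 2 * start + 2 * (1 - t) * t * control + t ** 2 * new_end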
  • Step S116 Determine the target trajectory of the moving object from the candidate trajectory set according to the driving information of the moving object and the confidence level.
  • For example, the target trajectory is filtered from the updated candidate trajectory set, and the driving information of the moving object at least includes road surface information of the moving object and/or control information of the moving object.
  • For example, the road surface information includes: the width of the road surface, the edge of the road surface, the center line on the road surface, and so on; the control information of the moving object includes: driving direction, driving speed, the state of the vehicle lights (for example, the state of the turn signals), and so on.
  • The predicted drivable area of the moving object is determined; the drivable area is shown as the drivable area 46 of the vehicle in FIG. 3A.
  • The road surface information includes at least whether the road is one-way, the width of the road surface, and the intersection situation of the road surface. For example, if the road surface information indicates that the road section is one-way and is not an intersection, the predicted maximum drivable area is the area in front of the vehicle that covers the entire road, that is, the area is one-way; if the road surface information indicates that the road section is an intersection, the predicted drivable area is the area around the vehicle that covers the entire road, that is, the area includes the three directions of the intersection (turn left, go straight, and turn right).
  • For example, if the road surface information indicates that the road section is an intersection, the preset drivable area includes the three directions of the intersection (turn left, go straight, and turn right); if the control information indicates that the vehicle intends to turn left, the predicted coverage area can be reduced from covering the three directions (turn left, go straight, and turn right) to covering only the left-turn direction, so that the coverage of the drivable area can be made more accurate, and the final target trajectory of the vehicle can be determined more accurately.
  • the second step is to determine the target trajectory of the moving object from the updated set of candidate trajectories according to the drivable area and the confidence level.
  • For example, the candidate trajectories included in the drivable area are first determined from the updated candidate trajectory set to obtain the to-be-determined target trajectory set; then, a trajectory whose confidence in the to-be-determined target trajectory set is greater than a preset confidence threshold is determined as the target trajectory; for example, the candidate trajectory with the greatest confidence in the to-be-determined target trajectory set is used as the target trajectory, which fully improves the accuracy of the predicted target trajectory of the vehicle.
  • In this way, the candidate trajectory set is first predicted from the historical trajectory, and the drivable area to which the candidate trajectory should belong is then further narrowed according to the control information and the road surface information, so as to achieve the purpose of screening candidate trajectories.
  • step S116 can be implemented through the following steps:
  • Step S161 Determine the confidence level of the candidate trajectory in the candidate trajectory set.
  • the confidence level is used to indicate the probability that the candidate trajectory is the target trajectory.
  • the confidence level may be determined by a neural network trained in an embodiment of the present disclosure, or may be determined by a neural network trained in other ways.
  • Step S162 Determine the target trajectory of the moving object from the candidate trajectory set according to the driving information of the moving object and the confidence level.
  • One of, or a combination of, the road surface information of the moving object and the control information of the moving object is used as prior information to modify the candidate trajectories, so that the final target trajectory is more reasonable.
  • The control information of the moving object may include but is not limited to at least one of the following: the running state of the engine, steering wheel steering information, or speed control information (such as deceleration, acceleration, or braking).
  • the step S162 can be implemented through the following two steps:
  • the first step is to determine the travelable area (Freespace) of the mobile object according to the environmental information of the mobile object and/or the control information of the mobile object.
  • the environmental information of the moving object may be road surface information
  • Determining the drivable area of the moving object includes the following methods: Method 1: determine the drivable area of the moving object according to the road surface information of the road where the moving object is located; Method 2: determine the drivable area of the moving object according to the control information of the moving object; Method 3: determine the drivable area of the moving object according to the road surface information of the road where the moving object is located and the control information of the moving object.
  • Here, the road surface information refers to the information of the road surface on which the vehicle is currently running, and the control information refers to the state of the vehicle lights at the time corresponding to the collection of the historical trajectory of the vehicle.
  • For example, if the vehicle lights indicate a right turn, the control information is a right turn, so it is determined that the vehicle's drivable area is the road area corresponding to the right turn.
  • The drivable area may be understood as an area in which a moving object can travel, for example, a barrier-free and permitted road area. Based on the environmental information of the moving object, the predicted drivable area of the moving object is determined; an example of the drivable area is shown in FIG. 3A as the drivable area 46 of the vehicle.
  • The road surface information may include, but is not limited to, at least one of the following: whether the road section is the same lane, the width of the road, and whether the road section is an intersection. For example, if the road surface information indicates that the road section is the same lane and not an intersection, the maximum predicted drivable area is the area in front of the vehicle covering the entire road, that is, the area is one-way; if the road surface information indicates that the road section is an intersection, the maximum predicted drivable area is the area around the vehicle covering the entire road, that is, the area contains the three directions of the intersection (turn left, go straight, and turn right).
  • Then, according to the control information of the moving object, the predicted drivable area is adjusted to obtain the drivable area. For example, the road surface information indicates that the road section is an intersection and the predicted drivable area covers the three directions of the intersection (turn left, go straight, and turn right), but the control information indicates that the vehicle is going to turn left; the predicted coverage area can then be reduced from covering the three directions to covering only the left-turn direction. In this way, the coverage of the drivable area becomes more precise, so that the final target trajectory of the vehicle can be determined more accurately.
  • the second step is to determine the target trajectory of the moving object from the updated set of candidate trajectories according to the drivable area and the confidence level.
  • According to the drivable area, the candidate trajectories in the updated candidate trajectory set are filtered to obtain the target trajectory: the trajectory whose confidence is greater than a preset confidence threshold is determined as the target trajectory; for example, the candidate trajectory with the greatest confidence in the to-be-determined target trajectory set is used as the target trajectory, which substantially improves the accuracy of the predicted target trajectory of the vehicle.
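  • For illustration only, the selection logic described above can be sketched as follows; the function and variable names (such as in_area_mask) are assumptions for illustration and are not identifiers from the present disclosure:

```python
def select_target_trajectory(candidates, confidences, in_area_mask, conf_threshold=None):
    """candidates: list of trajectories; confidences: list of floats; in_area_mask:
    list of bools, True when the candidate lies entirely in the drivable area."""
    kept = [i for i, ok in enumerate(in_area_mask) if ok] or list(range(len(candidates)))
    if conf_threshold is not None:
        above = [i for i in kept if confidences[i] > conf_threshold]
        kept = above or kept
    best = max(kept, key=lambda i: confidences[i])   # highest-confidence trajectory wins
    return candidates[best]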
  • Taking the current time as the end time, the trajectory of the vehicle within a preset period (for example, the preceding 3 seconds) is obtained as the historical trajectory; the indication of the vehicle lights is then used as prior information to predict the target trajectory of the vehicle in a preset future period, for example, the trajectory of the vehicle in the next 3 seconds. In this way, a highly accurate future driving trajectory can be provided for autonomous vehicles.
  • That is, the drivable area of the vehicle is narrowed by using the control information of the vehicle, and the candidate trajectory with the highest confidence among the candidate trajectories contained in the drivable area is used as the target trajectory of the vehicle, thereby making the prediction result more reliable and improving the safety of practical applications.
  • the embodiment of the present disclosure provides a trajectory prediction method.
  • A trained neural network may be used to predict the reference end point of the moving object in step S101, as shown in FIG. 2A, which illustrates another implementation flow of the trajectory prediction method of the present disclosure.
  • Step S201 Predict the candidate end point of the moving object based on the location information of the moving object through a neural network.
  • the neural network is a trained neural network, which can be obtained by training in the following multiple ways:
  • Manner 1 First, input the position information of the moving object, and/or the position information of the moving object and the road image collected by the moving object into the neural network to obtain the first predicted end point.
  • the position information of the moving object is used as the input of the neural network to predict the first predicted end point; or the position information of the moving object and the road image collected by the moving object are used as the input of the neural network to predict the first predicted end point.
  • the first prediction loss of the neural network with respect to the first prediction end point is determined.
  • That is, the position information of the moving object, and/or the position information of the moving object together with the road image collected by the moving object, is input into the neural network to obtain multiple candidate trajectories; a rough confidence is then estimated for each candidate and, combined with the true value trajectory, the accuracy of each trajectory in the candidate trajectory set is determined. This accuracy is fed back to the neural network so that the neural network can adjust network parameters, such as weight parameters, to improve the accuracy of the network's classification.
  • For example, if 100 candidate trajectories are obtained, the neural network performs operations such as convolution and deconvolution to obtain the confidences of these 100 candidate trajectories. Since the parameters of the neural network are initialized randomly during the training phase, the roughly estimated confidences of the 100 candidate trajectories are also random. To improve the accuracy of the candidate trajectories predicted by the neural network, the network needs to be told which of the 100 candidate trajectories are right and which are wrong. Based on this, a comparison function is used to compare the 100 candidate trajectories with the true value trajectory, and the comparison function outputs 100 comparison values taking the value 0 or 1. These 100 comparison values are then fed back, so that the neural network uses a loss function to supervise the candidate trajectories: the confidence of a candidate trajectory with a comparison value of 1 is increased, and the confidence of a candidate trajectory with a comparison value of 0 is reduced. In this way, the confidence of each candidate trajectory is obtained, that is, the classification result of the candidate trajectories is obtained.
  • the trajectory prediction loss corresponding to the classification result is used to adjust the weight parameters of the neural network.
  • the network parameters of the neural network are adjusted to train the neural network.
  • the weight parameter is the weight of the neuron in the neural network.
  • the first prediction loss is a cross-entropy loss of a first-type candidate trajectory sample (for example, a positive sample) and the second-type candidate trajectory sample (for example, a negative sample).
  • the prediction loss is used to adjust parameters such as the weight of the neural network, so that the adjusted neural network classification result is more accurate.
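  • For illustration only, one way such classification-style supervision of manner 1 could be written is sketched below; the assumption that the network returns a tuple of candidate trajectories and confidence logits, the compare_fn helper, and all tensor shapes are illustrative and not the disclosed implementation:

```python
import torch
import torch.nn.functional as F

def classification_step(net, optimizer, history, road_image, gt_trajectory, compare_fn):
    """One training step for manner 1: the network is assumed to return K candidate
    trajectories and one confidence logit per candidate; compare_fn returns 1 when a
    candidate is close enough to the ground truth and 0 otherwise."""
    candidates, logits = net(history, road_image)            # assumed shapes: (K, T, 2), (K,)
    with torch.no_grad():
        labels = torch.tensor([float(compare_fn(c, gt_trajectory)) for c in candidates],
                              device=logits.device)
    loss = F.binary_cross_entropy_with_logits(logits, labels)  # cross-entropy supervision
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()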
  • Manner 2 First, input the location information of the moving object and the map information corresponding to the location information into the neural network to obtain a second predicted end point.
  • the map information includes at least the geographic location of the current road, road width, road edge, and roadblock information.
  • the second prediction loss of the neural network with respect to the second prediction end point is determined.
  • the true value trajectory is compared with the second predicted end point to determine the second predicted loss of the neural network with respect to the second predicted end point.
  • the deviation between the second predicted end point and the preset constraint condition is determined.
  • The preset constraint conditions include the area in which the predicted end point can exist on the road, for example, the area of the road excluding road edge points, obstacles, and pedestrians; for example, the deviation between the second predicted end point and the area in which the predicted end point can exist is determined.
  • the second prediction loss of the second prediction endpoint is adjusted to obtain the third prediction loss.
  • When the deviation is relatively large, it means that the second predicted end point deviates seriously from the area in which the predicted end point can exist, and the second prediction loss is adjusted accordingly so as to adjust the network parameters of the neural network.
  • the network parameters of the neural network are adjusted to train the neural network.
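  • For illustration only, a minimal sketch of how the deviation from the constraint region could adjust the end-point loss in manner 2 is given below; the additive form of the penalty is an assumption, since the disclosure only states that the second prediction loss is adjusted according to the deviation:

```python
import torch

def endpoint_loss_with_constraint(pred_endpoint, gt_endpoint, dist_outside):
    """pred_endpoint, gt_endpoint: (2,) tensors; dist_outside: distance (>= 0) from the
    predicted end point to the nearest point of the allowed region (0 when the
    prediction already lies inside the region)."""
    second_loss = torch.norm(pred_endpoint - gt_endpoint)   # loss w.r.t. the ground truth
    third_loss = second_loss + dist_outside                 # loss adjusted by the deviation
    return third_loss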
  • The above manner 1 and manner 2 describe the training process of the neural network. Based on the position information of the moving object and the prediction loss, multiple iterations are performed so that the trajectory prediction loss of the candidate trajectories output by the trained neural network meets the convergence condition, making the target trajectory output by the neural network more accurate.
  • Step S202 Determine the position information of the reference end point of the moving object according to the candidate end point.
  • the position information of the reference end point is determined according to the candidate end point output by the neural network, or the position information of the reference end point is determined by combining the candidate end point output by the neural network and the environmental information.
  • step S201 and step S202 can be implemented in two ways:
  • Manner 1 First, input the position information of the moving object into a first neural network to predict the first candidate end point of the moving object.
  • Then, the position information of the reference end point of the moving object is determined according to the first candidate end point and the environmental information of the moving object. For example, the first candidate end point is combined with environmental information such as road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, and information of other moving objects for a comprehensive analysis, so as to predict the reference end point. A first candidate end point that overlaps with a pedestrian or a sidewalk, or that exceeds the edge of the road, is adjusted to obtain position information of the reference end point with higher accuracy.
  • Manner 2 First, input the location information and environment information of the moving object into a second neural network to predict the second candidate end point of the moving object.
  • For example, the historical trajectory of the moving object, road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, and information of other moving objects are used as the input of the second neural network to predict the second candidate end point of the moving object. Then, the position information of the reference end point of the moving object is determined according to the second candidate end point and the environmental information, so that the predicted second candidate end point lies in the drivable area determined based on the environmental information.
  • In this way, the neural network is trained using the true value trajectory, the candidate trajectory set, and the prediction loss, so that the trained neural network can output a target trajectory that is closer to the true value trajectory; it can therefore be better applied to predicting the future target trajectory of a moving object, and the accuracy of the predicted target trajectory is improved.
  • FIG. 2B is a schematic diagram of the implementation process of the neural network training method of the embodiment of the present disclosure. As shown in FIG. 2B, the following description will be made in conjunction with FIG. 2B:
  • Step S211 Determine the reference end point of the moving object according to the acquired position information of the moving object.
  • the reference end point of the moving object is determined.
  • Step S212 Determine M estimated end points in the preset area including the reference end points.
  • M estimated end points are determined within a preset area of the reference route that includes the reference end point. First, according to the width of each reference route, the preset area of the reference end point of each reference route is determined; then, the preset area of the reference end point of each reference route is divided into M grids of equal size, and the centers of the M grids are used as the M estimated end points.
  • Step S213 According to the historical trajectory, the M estimated end points and N preset distances, correspondingly generate M ⁇ N candidate trajectories to obtain the candidate trajectory set.
  • The preset distance is used to indicate the distance from the midpoint of the line between the last sampling point in the historical trajectory and the reference end point to the candidate trajectory; M and N are both integers greater than 0.
  • Specifically, the midpoint between the last sampling point in the historical trajectory and the reference end point is first determined; then, N pre-estimated points are determined according to the N preset distances and the midpoint, where each pre-estimated point is a point on a candidate trajectory. Since the preset distance is the distance from this midpoint to the candidate trajectory, the N preset distances correspond to N pre-estimated points. Based on the N pre-estimated points, N candidate trajectories can be obtained by fitting; and then, based on the N pre-estimated points and the M estimated end points, M×N candidate trajectories can be obtained by fitting.
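  • For illustration only, one way to realize such a fit is sketched below; the chord-frame construction, the use of the previous history point as a fourth constraint, and all names are assumptions, since the disclosure only specifies that cubic curves are fitted from the last sampling point, a point offset by the preset distance, and an estimated end point:

```python
import numpy as np

def cubic_candidates(history, est_endpoints, deltas, n_samples=30):
    """history: (T, 2) observed positions (T >= 2); est_endpoints: (M, 2) grid-centre
    end points; deltas: (N,) signed offsets of the curve from the chord midpoint.
    Returns a list of M*N candidate trajectories, each an (n_samples, 2) array."""
    p0, p_prev = history[-1], history[-2]
    candidates = []
    for ep in est_endpoints:
        chord = ep - p0
        length = np.linalg.norm(chord)
        if length < 1e-6:
            continue
        t_hat = chord / length                      # unit vector along the chord
        n_hat = np.array([-t_hat[1], t_hat[0]])     # unit normal to the chord
        rel = p_prev - p0
        x_prev, y_prev = rel @ t_hat, rel @ n_hat   # previous point in the chord frame
        for delta in deltas:
            # four constraints: previous point, start, bent midpoint, end point
            xs = np.array([x_prev, 0.0, 0.5 * length, length])
            ys = np.array([y_prev, 0.0, delta, 0.0])
            coeffs = np.polyfit(xs, ys, 3)          # exact cubic through the four points
            x_dense = np.linspace(0.0, length, n_samples)
            y_dense = np.polyval(coeffs, x_dense)
            candidates.append(p0 + np.outer(x_dense, t_hat) + np.outer(y_dense, n_hat))
    return candidates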
  • Step S214 Determine the average distance between each candidate trajectory in the candidate trajectory set and the true value trajectory.
  • Specifically, the distance between each sampling point on the candidate trajectory and the corresponding sampling point on the true value trajectory is first determined, and then the obtained distances are averaged.
  • Step S215 Determine the candidate trajectory whose average distance is less than the preset distance threshold as the first-type candidate trajectory sample.
  • the candidate trajectory whose average distance is less than the preset distance threshold indicates that the gap between the candidate trajectory and the true value trajectory is small.
  • the first type of candidate trajectory samples can also be understood as the output value of the comparison function.
  • Step S216 Determine at least a part of candidate trajectories in the candidate trajectory set except for the first-type candidate trajectory samples as second-type candidate trajectory samples.
  • first, all or part of the candidate trajectories in the candidate trajectory set except for the first-type candidate trajectory samples are determined to be the second-type candidate trajectory samples.
  • the number of candidate trajectory samples of the second type is determined according to the ratio of 3:1 between the candidate trajectory samples of the second type and the candidate trajectory samples of the first type.
  • The first-type candidate trajectory samples are closer to the true value trajectory than the second-type candidate trajectory samples; from this perspective, the first-type candidate trajectory samples can be understood as being more trustworthy than the second-type candidate trajectory samples.
  • The ratio of second-type candidate trajectory samples to first-type candidate trajectory samples is set to 3:1, which prevents an excessive number of second-type candidate trajectory samples from dominating the trajectory prediction loss corresponding to the classification result, which would otherwise lead to unsatisfactory training results for the neural network.
  • Steps S214 to S216 give a way of comparing the true value trajectory of the moving object with the candidate trajectories to determine the classification result of the candidate trajectories, in which determining the first-type candidate trajectory samples and the second-type candidate trajectory samples completes the process of classifying the candidate trajectories.
  • both the true trajectory and the candidate trajectory are input to the comparison function. If the similarity between the candidate trajectory and the true trajectory is greater than the preset similarity threshold, the comparison function outputs 1; otherwise, the comparison function outputs 0. In this way, the accuracy of the classification is further improved.
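  • For illustration only, the labelling and sampling rule of steps S214 to S216 can be sketched as follows; the function names and the assumption that all candidates are resampled to the same length as the true value trajectory are illustrative:

```python
import numpy as np

def label_and_sample(candidates, gt, dist_threshold=2.0, neg_per_pos=3, rng=None):
    """candidates: list of (T, 2) arrays resampled to the same length as gt (T, 2).
    Candidates whose average point-wise distance to the ground truth is below the
    threshold become positive samples; negatives are uniformly sub-sampled to keep
    the negative:positive ratio at roughly 3:1."""
    if rng is None:
        rng = np.random.default_rng()
    ad = np.array([np.linalg.norm(c - gt, axis=1).mean() for c in candidates])
    pos_idx = np.flatnonzero(ad < dist_threshold)
    neg_pool = np.flatnonzero(ad >= dist_threshold)
    n_neg = min(len(neg_pool), neg_per_pos * max(len(pos_idx), 1))
    neg_idx = rng.choice(neg_pool, size=n_neg, replace=False) if n_neg else neg_pool[:0]
    return pos_idx, neg_idx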
  • Step S217 Determine the cross entropy loss of the first type candidate trajectory sample and the second type candidate trajectory sample, where the cross entropy loss is the trajectory prediction loss.
  • Step S218: The trajectory prediction loss corresponding to the classification result is used to adjust the network parameters in the neural network to train the neural network.
  • a larger number of data set trajectories are used to train the neural network. Since the data set contains complex urban scenes, the data is collected from the perspective of an autonomous vehicle, which is closer to practical applications.
  • the neural network trained on the data set is suitable for trajectory prediction in various scenarios, so that the trained neural network predicts the target trajectory with higher accuracy.
  • In some embodiments, after step S213, the method further includes:
  • Step S231 using the neural network to determine the trajectory parameter adjustment value of the candidate trajectory.
  • the adjustment value may be a prediction deviation between the candidate trajectory predicted by the neural network and the true value trajectory.
  • Step S232 Determine the deviation between the candidate trajectory and the true value trajectory.
  • the deviation may be the true difference between the end point coordinates of the true value trajectory and the end point coordinates of the candidate trajectory.
  • Step S233 Determine the adjusted prediction loss according to the deviation and the adjustment value.
  • Step S234 Use the adjusted prediction loss to adjust the weight parameters of the preset neural network, so that the adjusted prediction loss output by the preset neural network meets the convergence condition.
  • For example, the adjusted prediction loss is a Euclidean distance loss, and the weight parameters of the neural network are adjusted based on the Euclidean distance loss, so that the gap between the candidate trajectory and the true value trajectory becomes smaller.
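  • For illustration only, a hedged sketch of such a Euclidean refinement loss, assuming the network regresses corrections to the end-point coordinates and the bending parameter δ of each positive candidate:

```python
import torch

def refinement_loss(pred_offsets, candidate_params, gt_params):
    """pred_offsets: (K, 3) corrections predicted by the network for each positive
    candidate's parameters (x_end, y_end, delta); candidate_params / gt_params:
    (K, 3) parameters of the candidates and of the ground-truth trajectory."""
    target_offsets = gt_params - candidate_params          # true deviation (step S232)
    return torch.norm(pred_offsets - target_offsets, dim=1).mean()   # Euclidean loss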
  • a knowledge candidate trajectory network which integrates prior knowledge into vehicle trajectory prediction.
  • The vehicle trajectory is modeled as a continuous curve parameterized by the end point and the distance parameter δ.
  • The vehicle trajectory model is robust to noise and provides a more flexible way to integrate prior knowledge into the trajectory prediction; vehicle trajectory prediction is then formulated as a task of candidate trajectory generation and refinement.
  • The classification module selects the best candidate trajectory, and the refinement module performs trajectory regression (predicting the end point) to generate the final trajectory prediction. In this way, motion and intention can be reflected more naturally, robustness to noise is stronger, and prior information can be more flexibly integrated into the learning pipeline.
  • the embodiments of the present disclosure provide a large-scale vehicle trajectory data set and new evaluation criteria in order to evaluate the proposed method and better promote vehicle prediction research in autonomous driving.
  • This new data set contains millions of vehicle trajectories in complex urban driving scenes, and each vehicle has richer information, such as vehicle control information and/or road structure information for at least some vehicles.
  • experiments of different time lengths are performed and the fitting error of the trajectory of length T is calculated.
  • the time T is set to 6 seconds.
  • the fitted trajectory obtained after cubic curve fitting is performed on the predicted point is the candidate trajectory.
  • the embodiment of the present disclosure uses two control points, namely the end point and the preset distance ⁇ , and the sampling points on the historical trajectory to represent the curve.
  • FIG. 3A is a schematic diagram of the network structure of the candidate trajectory network according to an embodiment of the present disclosure. As shown in FIG. 3A, the position information p_in of the vehicle 41, the control information l_in, the orientation information d_in of the vehicle, and the road surface information (for example, lane lines, road width, traffic restrictions, and the like) are acquired; this information is the detection result of the automatic driving system and, except for the road information, is information of the vehicle to be predicted over the historical time period.
  • the road information is the map information around the vehicle to be predicted at the current moment.
  • the basic feature 42 is generated by the basic feature encoding module, and based on these basic features, the future end point is predicted, and the reference end point is obtained.
  • A set of cubic fitted curves 43 is obtained as candidate trajectories; then, the road surface information and the lamp states 420 of other vehicles on the road are used as constraints on the generated candidate trajectories, obtaining the candidate trajectories 44 (that is, a candidate trajectory set including a plurality of candidate trajectories).
  • the candidate trajectories 44 are classified to obtain a classification result 45, where the first type of candidate trajectory samples and the second type of candidate trajectory samples are used to generate candidate trajectory features using convolutional layers.
  • Then, the classification module determines the drivable area 46 of the vehicle according to the basic features and the candidate trajectory features.
  • FIG. 3B is a schematic diagram of the implementation structure of the candidate trajectory network of the embodiment of the disclosure. As shown in FIG. 3B, the whole process is divided into two stages. In the first stage 81, in the basic feature encoding module 808, the historical trajectory P_obs 801 and the surrounding road information r_Tobs 803 are input into the encoding network CNN 802, and a rough predicted end point 82 is output.
  • In the candidate trajectory correction module 811 of the second stage 84, the candidate trajectories 83 are input into the classification network (CNN-ED) 85, classification 86 and correction 87 are performed, the trajectory 88 with the maximum confidence is output, and the predicted positions 814 of the vehicle are obtained; based on these predicted positions, the final running trajectory can be generated.
  • In addition, the drivable area is delineated, and candidate trajectories outside the drivable area are eliminated. It can be seen from FIG. 3B that the actual future positions 815 of the vehicle over the following period are basically consistent with the predicted positions 814.
  • In this way, the prediction method based on deep learning can have greater interpretability and flexibility.
  • the second stage of the knowledge candidate trajectory network can simplify the prediction problem by selecting the most reasonable trajectory.
  • the basic feature encoding module is designed as an encoder-decoder network.
  • The network takes (p, l, d, r) as input in the time interval [0, T_obs], where p represents position, l represents control information, d represents the heading of the vehicle, and r represents local road information. The attributes (l, d) of the vehicle are obtained through a model based on deep neural networks (DNN). Here, p_t = (x_t, y_t) is the coordinate of the vehicle; l_t = (bl_t, lt_t, rt_t) represents the brake light, left turn light, and right turn light, which are binary values; and d_t = (dx_t, dy_t) is a unit vector.
  • the encoder and decoder are composed of several convolution and deconvolution layers, respectively.
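  • For illustration only, a toy encoder-decoder of this kind could look like the following sketch; the channel counts, layer sizes, and the pooled end-point head are assumptions and not the disclosed architecture:

```python
import torch
import torch.nn as nn

class BasicFeatureEncoder(nn.Module):
    """Toy encoder-decoder over the observed sequence: per time step the input
    concatenates position (2), light states (3), heading (2), and a small local
    road descriptor (assumed here to be 8 values), i.e. 15 channels in total."""
    def __init__(self, in_ch=15, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, hidden, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.endpoint_head = nn.Linear(hidden, 2)        # rough (x, y) end point

    def forward(self, x):                                # x: (B, 15, T_obs)
        feats = self.decoder(self.encoder(x))            # (B, hidden, ~T_obs)
        return self.endpoint_head(feats.mean(dim=2))     # (B, 2)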
  • In the embodiment of the present disclosure, the basic features are first used to predict a rough end point to reduce the search space of candidate trajectories; then, two steps are taken to generate the candidate trajectories.
  • In the embodiment of the present disclosure, the candidate trajectories may in fact be a set of cubic fitted curves.
  • The preset distance δ is the distance between the candidate trajectory and the midpoint of the line connecting the last input point and the end point, and it is used to control the degree of curvature of the curve. In FIG. 4A, point 51 represents the last sampling point on the historical trajectory; point 52 represents the reference end point predicted based on the historical trajectory; points 53 and 54 respectively represent estimated end points, namely the centers of the grids into which the preset area of point 52 is divided; and δ is the distance between the candidate trajectory and the midpoint of the line between point 51 and point 52, whose value is set in advance (for example, within (-2 m, 2 m)). In this way, based on the value of δ, the estimated end points, and the last sampling point on the historical trajectory, multiple candidate trajectories with different degrees of curvature can be determined (that is, a candidate trajectory set including multiple candidate trajectories).
  • FIG. 4B is a schematic diagram of the process of generating candidate trajectories on multiple reference routes according to an embodiment of the present disclosure.
  • However, the multi-modality of such a formulation is weak. Since the road imposes strict constraints on vehicles, multi-modal candidate trajectory generation uses road information to generate multiple end points. It can be seen from FIG. 4B that the current position of the vehicle is at an intersection. According to the basic road information (for example, lane line 901, the reference line on the road in FIG. 4B, the running direction, and so on) and the historical trajectory 902 of the vehicle, a set of reference routes 91 is determined (located on each road at the intersection, for example, reference lines 904, 905, and 906); these reference routes represent the center lane lines that the vehicle may reach. Therefore, formula (2) can be extended to generate multiple candidate trajectory sets for different reference routes.
  • where f(·) represents a cubic polynomial function, p'_ep ∈ p_ep, and δ ∈ {-2, -1, 0, 1, 2}.
  • a binary class label indicating a good trajectory or not, is assigned to each candidate trajectory.
  • the average distance between the points on the uniformly sampled true-value trajectory and the candidate trajectory is defined as the criterion of the candidate trajectory, as shown in formula (3):
  • where N is the number of sampling points, and the two compared points are the i-th sampling point of the true value trajectory and the i-th sampling point of the candidate trajectory, respectively.
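  • Formula (3) itself is not reproduced in this text; under the usual notation, with $g_i$ and $c_i$ denoting the i-th uniformly sampled points of the true value trajectory and the candidate trajectory (symbols assumed here for illustration), a plausible reconstruction is:

$$\mathrm{AD} = \frac{1}{N}\sum_{i=1}^{N}\left\lVert g_i - c_i \right\rVert_2$$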
  • the candidate trajectory whose AD value is lower than the preset threshold is determined as a positive sample.
  • the preset threshold is 2m
  • the candidate trajectory whose average distance between the candidate trajectory and the true value trajectory is less than 2m is determined as a positive sample.
  • A positive sample indicates that the gap between the candidate trajectory and the true value trajectory is small, that is, the candidate trajectory is close to the true value trajectory.
  • the remaining candidate trajectories are potential negative samples.
  • the embodiment of the present disclosure adopts a uniform sampling method to keep the ratio between the negative samples and the positive samples at 3:1.
  • the embodiment of the present disclosure adopts parameterization of 2 coordinates and 1 variable, as shown in formula (4):
  • where t_x, t_y, and t_δ are the supervision information.
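  • Formula (4) is likewise not reproduced here. One plausible reading, assuming the refinement regresses offsets of the ground-truth end point coordinates and bending parameter relative to those of the candidate (an assumption, not confirmed by the text), is:

$$t_x = x_{gt} - x_{ep},\qquad t_y = y_{gt} - y_{ep},\qquad t_{\delta} = \delta_{gt} - \delta$$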
  • the embodiment of the present disclosure defines the multi-task loss function minimization as shown in formula (5):
  • L cls represents the loss function of the two types of samples.
  • The cross-entropy loss of the two types of samples is used as L_cls;
  • L_ref represents the loss function for refining the parameters of positive trajectories, and the embodiment of the present disclosure uses the Euclidean loss as L_ref. Due to the multi-modal characteristics of the trajectory, the embodiments of the present disclosure use the positive samples and a portion of randomly sampled negative samples to calculate the refinement loss, and a preset ratio parameter is used to control the proportion of negative samples that are sampled.
  • the future trajectory of a vehicle is not only affected by history, but also restricted by rules such as road structure and control information. Combining these rules can make a more reliable prediction of the future trajectory of the target.
  • the knowledge candidate trajectory network of the embodiments of the present disclosure can effectively solve these problems and obtain a very reliable predicted trajectory.
  • The embodiments of the present disclosure combine historical running trajectories and high-definition maps to determine a polygonal area, bounded by lane lines, in which the vehicle can travel in the future, that is, the drivable area.
  • the basic rule for determining the drivable area is that the vehicle can only travel on lanes in the same direction.
  • When the destination lane can be determined, the polygonal area is the destination lane; otherwise, the drivable area consists of all possible lanes.
  • the embodiments of the present disclosure propose two methods for implementing road constraints, namely a method that does not ignore candidate trajectories outside the drivable area and a method that ignores candidate trajectories outside the drivable area.
  • the method of not ignoring candidate trajectories outside the drivable area takes the drivable area as an input function and implicitly supervises the model to learn such rules.
  • the embodiments of the present disclosure propose to ignore candidate trajectories outside the drivable area to clearly restrict the candidate trajectories in inference. For ignoring candidate trajectories outside the drivable area, the candidate trajectories outside the drivable area are ignored during generation. In addition, the embodiment of the present disclosure constrains the candidate trajectory by attenuating the classification score of the candidate trajectory outside the drivable area during the test, as shown in formula (6):
  • where r represents the probability that the points of the candidate trajectory fall outside the drivable area, and the attenuation factor controls how strongly the classification score of such a trajectory is decayed.
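  • Formula (6) itself is not reproduced in this text; one plausible form, assuming a hypothetical attenuation factor $\gamma \in (0, 1)$ (symbol assumed for illustration), is:

$$\tilde{s} = s \cdot \gamma^{\,r}$$

  • where s is the original classification score of the candidate trajectory and r is the probability that its points fall outside the drivable area.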
  • Control information is an explicit signal indicating the intention of the vehicle. Similar to road constraints, the control information restricts the drivable area to a certain direction, so it can be used to further narrow the drivable area generated by road constraints. For a vehicle at an intersection, the drivable area is fully open in four directions.
  • the embodiment of the present disclosure uses the hint of the turn signal to select a unique road as a drivable mask, so that the drivable mask can be reduced. For vehicles in the lane, the embodiment of the present disclosure may also choose to attenuate the score of the corresponding candidate trajectory during the test, as shown in Equation 6.
  • the embodiments of the present disclosure redefine the trajectory of the vehicle, thereby enabling reliable vehicle motion prediction; this well reflects the trend and intention of vehicle motion, and is robust to noise; and a large amount of information-rich data sets are collected, Experiments on the data set have proved the effectiveness of the method proposed in the embodiments of the present disclosure. At the same time, more standardized rules, such as traffic lights, can be easily extended to the solutions of the embodiments of the present disclosure.
  • The technical solutions of the embodiments of the present disclosure, in essence or in the part contributing to the prior art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for A computer device (which may be a terminal, a server, etc.) executes all or part of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read only memory (Read Only Memory, ROM), a magnetic disk, an optical disk, or other media that can store program codes. In this way, the embodiments of the present disclosure are not limited to any specific combination of hardware and software.
  • FIG. 5 is a schematic diagram of the structure composition of the trajectory prediction device of the embodiment of the disclosure. As shown in FIG. 5, the device 500 includes:
  • the reference end point prediction module 501 is configured to determine the position information of the reference end point of the moving object according to the position information of the moving object;
  • Any method may be adopted, as long as the reference end point is determined based on the position information of the moving object.
  • a method of determining an end point using location information and a machine learning model such as feature learning or reinforcement learning can be adopted.
  • the machine learning model can input at least the position information of the moving object and output the end point or the trajectory including the end point.
  • the reference end point prediction module 501 can determine the end point by using the location information and environment information around the moving object as the input of the machine learning model.
  • the reference end point prediction module 501 may use the position information of the moving object as the input of the machine learning model, and use the output of the machine learning model and the environmental information around the moving object to determine the end point.
  • the reference end point prediction module 501 may use the location information and environment information around the moving object as the input of the machine learning model, and use the output of the machine learning model and the environment information around the moving object to determine the end point. For example, the reference destination prediction module 501 may determine the trajectory of the moving object as the output of the machine learning model, adjust the determined trajectory based on the environmental information around the moving object, so that the trajectory does not overlap with pedestrians or sidewalks, and determine the adjusted trajectory End point contained in.
  • Alternatively, a method using the position information and a kinematic model of the moving object can be adopted.
  • the reference end point prediction module 501 can use the position information, the motion model of the moving object, and the environment information around the moving object to determine the end point.
  • the candidate trajectory determination module 502 is configured to determine a candidate trajectory set including a plurality of candidate trajectories according to the position information of the moving object and the position information of the reference end point; wherein, the position information of the end points of at least two of the candidate trajectories is different from the position information of the reference end point;
  • the target trajectory determining module 503 is configured to determine the target trajectory of the moving object from the set of candidate trajectories.
  • the position information of the moving object includes: time series position information of the moving object, or the historical trajectory of the moving object.
  • the reference end point includes points other than a preset restriction type, where the preset restriction type includes at least one of the following: road edge points, obstacles, and pedestrians.
  • the reference end point prediction module 501 includes:
  • the environmental information acquisition submodule is configured to acquire environmental information of the mobile object according to the position information of the mobile object, the environmental information including at least one of the following: road information, obstacle information, pedestrian information, traffic light information, traffic Identification information, traffic regulation information, and other moving object information.
  • the first reference end point prediction sub-module is configured to determine the position information of the reference end point of the moving object according to the environmental information.
  • the environmental information acquiring submodule is further configured to:
  • the environment information is determined according to the communication information that characterizes the current environment received by the mobile object.
  • the reference end point prediction module 501 includes:
  • the reference route determination submodule is configured to determine at least one reference route of the moving object according to the position information of the moving object;
  • the second reference destination prediction sub-module is configured to determine the location information of the reference destination according to the reference route.
  • the second reference endpoint prediction sub-module includes:
  • the drivable area determining unit is configured to determine the drivable area of the reference route
  • the reference end point predicting unit is configured to determine the position information of the reference end point of the moving object in the drivable area according to the position information of the moving object.
  • the reference end point prediction module 501 includes:
  • intersection determination submodule configured to determine intersection information of the road where the mobile object is located according to the position information of the mobile object
  • the multi-reference end point determination sub-module is configured to determine the position information of multiple reference end points of the moving object in response to the intersection information indicating that there are at least two intersections; wherein the reference end points of different intersections are different.
  • the target trajectory determining module 503 includes:
  • a confidence determination submodule configured to determine the confidence of the candidate trajectory in the candidate trajectory set
  • the target trajectory determination submodule is configured to determine the target trajectory of the moving object from the set of candidate trajectories according to the driving information of the moving object and the confidence level.
  • the device further includes:
  • a correction value determining module configured to determine a trajectory parameter correction value of at least one candidate trajectory in the candidate trajectory set
  • a trajectory adjustment module configured to adjust candidate trajectories in the candidate trajectory set according to the trajectory parameter correction value to update the candidate trajectory set;
  • the updated target trajectory determining module is configured to determine the target trajectory of the moving object from the updated set of candidate trajectories according to the driving information of the moving object and the confidence level.
  • the updated target trajectory determination module includes:
  • the drivable area determination sub-module is configured to determine the drivable area of the mobile object according to the environmental information of the mobile object and/or the control information of the mobile object;
  • the updated target trajectory determination submodule is configured to determine the target trajectory of the moving object from the updated set of candidate trajectories according to the drivable area and the confidence level.
  • the drivable area determination sub-module includes:
  • a predicted drivable area determining unit configured to determine the predicted drivable area of the mobile object according to the environmental information of the mobile object
  • the predicted drivable area adjustment unit is configured to adjust the predicted drivable area according to the control information of the moving object to obtain the drivable area.
  • the updated target trajectory determination sub-module includes:
  • a target trajectory set determining unit configured to determine candidate trajectories included in the drivable area in the updated candidate trajectory set to obtain the to-be-determined target trajectory set;
  • the target trajectory screening unit is configured to determine the trajectory with the greatest confidence in the set of target trajectories to be determined or the trajectory with the confidence greater than a preset confidence threshold as the target trajectory.
  • the candidate trajectory determining module 502 includes:
  • the estimated end point determination sub-module is configured to determine M estimated end points in a preset area containing the reference end point
  • the candidate trajectory generation sub-module is configured to generate M ⁇ N candidate trajectories corresponding to the position information of the moving object, the M estimated end points, and N preset distances to obtain the candidate trajectory set; wherein, the The preset distance is used to indicate the distance from the midpoint of the line between the last sampling point and the reference end point in the position information of the moving object to the candidate trajectory; wherein, M and N are both integers greater than 0.
  • the estimated end point determination sub-module includes:
  • the preset area determining unit is configured to determine the preset area of the reference end point according to the width of the road where the reference end point is located;
  • the grid dividing unit is configured to divide the preset area of the reference end point into M grids of the same size, and use the centers of the M grids as the M estimated end points.
  • the candidate trajectory generation sub-module includes:
  • a midpoint determining unit configured to determine the midpoint of the line between the last sampling point in the position information of the moving object and the reference end point;
  • a pre-estimation point determining unit configured to determine N pre-estimation points according to the N preset distances and the midpoint;
  • M ⁇ N candidate trajectory generating units configured to generate M ⁇ N candidate trajectories according to the N pre-estimated points and the M estimated end points;
  • the candidate trajectory screening unit is configured to screen the M ⁇ N candidate trajectories according to the environmental information to obtain the candidate trajectory set.
  • the reference end point prediction module 501 includes:
  • a candidate end point prediction sub-module configured to predict the candidate end point of the moving object according to the position information of the moving object through a neural network
  • the reference end point determination submodule is configured to determine the position information of the reference end point of the moving object according to the candidate end point.
  • the candidate end point prediction sub-module is further configured to input the position information of the moving object into the first neural network to predict the first candidate end point of the moving object;
  • the reference end point determination submodule is further configured to determine the position information of the reference end point of the moving object according to the first candidate end point and the environment information of the moving object.
  • the candidate endpoint prediction submodule is further configured to input the location information and environment information of the moving object into the second neural network to predict the second candidate endpoint of the moving object;
  • the reference end point determination submodule is further configured to determine the position information of the reference end point of the moving object according to the second candidate end point and the environment information.
  • the device further includes: a network training module configured to train the neural network;
  • the network training module includes:
  • the first network input sub-module is configured to input the position information of the moving object, and/or the position information of the moving object and the road image collected by the moving object into the neural network to obtain the first predicted end point;
  • the first prediction loss determining submodule is configured to determine the first prediction loss of the neural network with respect to the first prediction end point according to the true value trajectory of the moving object;
  • the first network parameter adjustment submodule is configured to adjust the network parameters of the neural network according to the first prediction loss to train the neural network.
  • the network training module includes:
  • the second network input submodule is configured to input the location information of the moving object and the map information corresponding to the location information into the neural network to obtain a second predicted end point;
  • a second prediction loss determining sub-module configured to determine the second prediction loss of the neural network with respect to the second prediction end point according to the true value trajectory of the moving object
  • a deviation determining sub-module configured to determine the deviation between the second predicted end point and a preset constraint condition
  • the second prediction loss adjustment submodule is configured to adjust the second prediction loss of the second prediction endpoint according to the deviation to obtain a third prediction loss
  • the second network parameter adjustment sub-module is configured to adjust the network parameters of the neural network according to the third prediction loss to train the neural network.
  • an embodiment of the present disclosure further provides a computer program product.
  • the computer program product includes computer-executable instructions. After the computer-executable instructions are executed, the steps in the trajectory prediction method provided by the embodiments of the present disclosure can be implemented.
  • an embodiment of the present disclosure further provides a computer storage medium having computer-executable instructions stored thereon, and when the computer-executable instructions are executed by a processor, the trajectory prediction method provided by the above-mentioned embodiment is implemented. step.
  • FIG. 6 is a schematic diagram of the structure of the computer device according to an embodiment of the disclosure.
  • the device 600 includes: a processor 601, at least one communication bus, a communication interface 602, at least one external communication interface, and a memory 603.
  • the communication interface 602 is configured to realize connection and communication between these components.
  • the communication interface 602 may include a display screen, and the external communication interface may include a standard wired interface and a wireless interface.
  • the processor 601 is configured to execute the image processing program in the memory to implement the steps of the method for predicting the target trajectory provided in the foregoing embodiment.
  • The above descriptions of the trajectory prediction device, computer device, and storage medium embodiments are similar to the description of the method embodiments above; for technical details not disclosed in the device embodiments, please refer to the description of the method embodiments of the present disclosure for understanding.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation, such as: multiple units or components can be combined, or It can be integrated into another system, or some features can be ignored or not implemented.
  • the coupling, or direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed on multiple network units; Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the embodiments of the present disclosure can be all integrated into one processing unit, or each unit can be individually used as a unit, or two or more units can be integrated into one unit;
  • the above units may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
  • the foregoing program can be stored in a computer readable storage medium.
  • When the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes various media that can store program codes, such as a removable storage device, a read only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk.
  • If the above-mentioned integrated unit of the present disclosure is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions for A computer device (which may be a personal computer, a server, or a network device, etc.) executes all or part of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: removable storage devices, ROMs, magnetic disks, or optical disks and other media that can store program codes.
  • the embodiments of the present disclosure provide a trajectory prediction method, device, equipment, and storage medium, wherein the position information of the reference end point of the moving object is determined according to the position information of the moving object; according to the position information of the moving object and the Determine the candidate trajectory set including multiple candidate trajectories with reference to the position information of the end point; wherein the position information of the end point of each candidate trajectory is different from the position information of the reference end point; determine the moving object from the set of candidate trajectories Target trajectory. In this way, the future motion trajectory of the moving object can be estimated more accurately.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure provide a trajectory prediction method, apparatus, device, and storage medium, in which the position information of a reference end point of a moving object is determined according to the position information of the moving object; a candidate trajectory set including multiple candidate trajectories is determined according to the position information of the moving object and the position information of the reference end point, where the position information of the end point of each candidate trajectory is different from the position information of the reference end point; and a target trajectory of the moving object is determined from the candidate trajectory set. In this way, by considering the position information of the moving object, the reference end point of the moving object is predicted, multiple candidate trajectories along which the moving object may travel are inferred, and an optimal trajectory is selected from the multiple candidate trajectories as the target trajectory along which the moving object travels, so that the future motion trajectory of the moving object is estimated more accurately.

Description

Trajectory prediction method, apparatus, device, and storage medium
Cross-reference to related applications
The present disclosure is based on, and claims priority to, Chinese patent application No. 202010279772.9 filed on April 10, 2020, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present disclosure relate to the technical field of autonomous driving, and relate to, but are not limited to, a trajectory prediction method, apparatus, device, and storage medium.
Background
With the development of information technology, autonomous driving has become a major focus. To realize autonomous driving, trajectory prediction for autonomous vehicles is indispensable, and the accuracy of the predicted trajectory determines the driving safety of the autonomous vehicle.
Summary
In view of this, embodiments of the present disclosure provide a trajectory prediction method, apparatus, device, and storage medium.
The technical solutions of the embodiments of the present disclosure are implemented as follows:
An embodiment of the present disclosure provides a trajectory prediction method applied to an electronic device, the method including:
determining position information of a reference end point of a moving object according to position information of the moving object;
determining, according to the position information of the moving object and the position information of the reference end point, a candidate trajectory set including multiple candidate trajectories, where the position information of the end points of at least two of the candidate trajectories is different from the position information of the reference end point; and
determining a target trajectory of the moving object from the candidate trajectory set.
In some embodiments, the position information of the moving object includes time-series position information of the moving object or a historical trajectory of the moving object. In this way, candidate trajectories of the moving object can be predicted from its historical trajectory and time-series position information.
In some embodiments, the reference end point includes points other than a preset restriction type, where the preset restriction type includes at least one of the following: road edge points, obstacles, and pedestrians. Treating road edge points, obstacles, pedestrians, and the like as restriction types improves the reasonableness of the finally predicted target trajectory.
In some embodiments, determining the position information of the reference end point of the moving object according to the position information of the moving object includes: acquiring environmental information of the moving object according to the position information of the moving object, the environmental information including at least one of the following: road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, and information of other moving objects; and determining the position information of the reference end point of the moving object according to the environmental information. In this way, the position information of the reference end point can be accurately predicted by combining the environmental information around the moving object with the position information of the moving object.
In some embodiments, acquiring the environmental information of the moving object according to the position information of the moving object includes: determining the environmental information according to image information collected by the moving object; and/or determining the environmental information according to communication information received by the moving object that characterizes the current environment. By analyzing the communication information and the image information of the moving object, points belonging to the preset restriction types can be excluded from the reference end point, so that the reference end point of the moving object is obtained.
In some embodiments, determining the position information of the reference end point of the moving object according to the position information of the moving object includes: determining at least one reference route of the moving object according to the position information of the moving object; and determining the position information of the reference end point according to the reference route. This improves the accuracy of the predicted reference end point.
In some embodiments, determining the position information of the reference end point according to the reference route includes: determining a drivable area of the reference route; and determining, according to the position information of the moving object, position information of the reference end point of the moving object in the drivable area. This improves the validity of the drivable area on each reference route.
In some embodiments, determining the position information of the reference end point of the moving object according to the position information of the moving object includes: determining intersection information of the road where the moving object is located according to the position information of the moving object; and in response to the intersection information indicating that there are at least two intersections, determining position information of multiple reference end points of the moving object, where the reference end points of different intersections are different. This reduces missed reference end points and thereby improves the accuracy of the determined target trajectory.
In some embodiments, determining the target trajectory of the moving object according to the candidate trajectory set includes: determining confidence levels of the candidate trajectories in the candidate trajectory set; and determining the target trajectory of the moving object from the candidate trajectory set according to driving information of the moving object and the confidence levels. In this way, an optimal trajectory is selected from the multiple candidate trajectories as the target trajectory along which the moving object travels, so that the future motion trajectory of the moving object is estimated more accurately.
In some embodiments, before determining the target trajectory of the moving object from the candidate trajectory set according to the driving information of the moving object and the confidence levels, the method further includes: determining a trajectory parameter correction value of at least one candidate trajectory in the candidate trajectory set; adjusting the candidate trajectories in the candidate trajectory set according to the trajectory parameter correction value to obtain an updated candidate trajectory set; and determining the target trajectory of the moving object from the updated candidate trajectory set according to the driving information of the moving object and the confidence levels. Adjusting the candidate trajectories according to the trajectory parameter correction value improves the reasonableness of the obtained target trajectory.
In some embodiments, determining the target trajectory of the moving object from the updated candidate trajectory set according to the driving information of the moving object and the confidence levels includes: determining a drivable area of the moving object according to environmental information of the moving object and/or control information of the moving object; and determining the target trajectory of the moving object from the updated candidate trajectory set according to the drivable area and the confidence levels. In this way, the purpose of screening candidate trajectories is achieved.
In some embodiments, determining the drivable area of the moving object according to the environmental information of the moving object and/or the control information of the moving object includes: determining a predicted drivable area of the moving object according to the environmental information of the moving object; and adjusting the predicted drivable area according to the control information of the moving object to obtain the drivable area. Narrowing the predicted drivable area with the control information of the moving object yields a more precise drivable area.
In some embodiments, determining the target trajectory of the moving object from the updated candidate trajectory set according to the drivable area and the confidence levels includes: determining the candidate trajectories in the updated candidate trajectory set that are contained in the drivable area to obtain a to-be-determined target trajectory set; and determining, as the target trajectory, the trajectory with the greatest confidence in the to-be-determined target trajectory set or a trajectory whose confidence is greater than a preset confidence threshold. Taking the candidate trajectory with the greatest confidence in this set as the target trajectory substantially improves the accuracy of the predicted target trajectory of the vehicle.
In some embodiments, determining the candidate trajectory set including multiple candidate trajectories according to the position information of the moving object and the position information of the reference end point includes: determining M estimated end points in a preset area containing the reference end point; and correspondingly generating M×N candidate trajectories according to the position information of the moving object, the M estimated end points, and N preset distances, to obtain the candidate trajectory set, where the preset distance is used to indicate the distance from the midpoint of the line between the last sampling point in the position information of the moving object and the reference end point to the candidate trajectory, and M and N are both integers greater than 0. In this way, multiple candidate trajectories can be obtained by fitting through multiple pre-estimated points and estimated end points.
In some embodiments, determining the M estimated end points in the preset area containing the reference end point includes: determining the preset area of the reference end point according to the width of the road where the reference end point is located; and dividing the preset area of the reference end point into M grids of a predetermined size, with the centers of the M grids serving as the M estimated end points. Using the grid centers as the M estimated end points improves the accuracy of predicting the possible end points of the candidate trajectories.
In some embodiments, correspondingly generating the M×N candidate trajectories according to the position information of the moving object, the M estimated end points, and the N preset distances to obtain the candidate trajectory set includes: determining the midpoint of the line between the last sampling point in the position information of the moving object and the reference end point; determining N pre-estimated points according to the N preset distances and the midpoint; generating M×N candidate trajectories according to the N pre-estimated points and the M estimated end points; and screening the M×N candidate trajectories according to the environmental information to obtain the candidate trajectory set. By setting constraint conditions and eliminating the trajectories among the M×N candidate trajectories that do not satisfy the constraint conditions, a more precise candidate trajectory set is obtained.
In some embodiments, determining the position information of the reference end point of the moving object according to the position information of the moving object includes: predicting candidate end points of the moving object through a neural network according to the position information of the moving object; and determining the position information of the reference end point of the moving object according to the candidate end points. Using a trained neural network to predict the reference end point of the moving object both improves prediction accuracy and speeds up prediction.
In some embodiments, predicting the candidate end point of the moving object through the neural network according to the position information of the moving object includes: inputting the position information of the moving object into a first neural network to predict a first candidate end point of the moving object; and determining the position information of the reference end point of the moving object according to the candidate end point includes: determining the position information of the reference end point of the moving object according to the first candidate end point and the environmental information of the moving object. In this way, a predicted first candidate end point that overlaps with a pedestrian or a sidewalk or that exceeds the road edge is adjusted, so that position information of the reference end point with higher accuracy is obtained.
In some embodiments, predicting the candidate end point of the moving object through the neural network according to the position information of the moving object includes: inputting the position information and environmental information of the moving object into a second neural network to predict a second candidate end point of the moving object; and determining the position information of the reference end point of the moving object according to the candidate end point includes: determining the position information of the reference end point of the moving object according to the second candidate end point and the environmental information. This makes the prediction result more reliable and improves the safety of practical applications.
In some embodiments, the training method of the neural network includes: inputting the position information of the moving object, and/or the position information of the moving object and a road image collected by the moving object, into the neural network to obtain a first predicted end point; determining, according to the true value trajectory of the moving object, a first prediction loss of the neural network with respect to the first predicted end point; and adjusting the network parameters of the neural network according to the first prediction loss to train the neural network. Using the first prediction loss to adjust parameters such as the weights of the neural network makes the classification result of the adjusted neural network more accurate.
In some embodiments, the training method of the neural network includes: inputting the position information of the moving object and map information corresponding to the position information into the neural network to obtain a second predicted end point; determining, according to the true value trajectory of the moving object, a second prediction loss of the neural network with respect to the second predicted end point; determining a deviation between the second predicted end point and a preset constraint condition; adjusting the second prediction loss of the second predicted end point according to the deviation to obtain a third prediction loss; and adjusting the network parameters of the neural network according to the third prediction loss to train the neural network. This makes the target trajectory output by the neural network more accurate.
An embodiment of the present disclosure provides a trajectory prediction apparatus, including: a reference end point prediction module configured to determine the position information of the reference end point of a moving object according to the position information of the moving object; a candidate trajectory determination module configured to determine, according to the position information of the moving object and the position information of the reference end point, a candidate trajectory set including multiple candidate trajectories, where the position information of the end points of at least two of the candidate trajectories is different from the position information of the reference end point; and a target trajectory determination module configured to determine the target trajectory of the moving object from the candidate trajectory set.
Correspondingly, an embodiment of the present disclosure provides a computer storage medium having computer-executable instructions stored thereon; after the computer-executable instructions are executed, the above-described method steps can be implemented.
An embodiment of the present disclosure provides a computer device including a memory and a processor, where computer-executable instructions are stored on the memory, and the above-described method steps can be implemented when the processor runs the computer-executable instructions on the memory.
An embodiment of the present disclosure provides a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions configured to implement any one of the trajectory prediction methods described above.
Embodiments of the present disclosure provide a trajectory prediction method, apparatus, device, and storage medium, in which, first, the reference end point of a moving object is predicted according to the position information of the current position of the moving object; then, a candidate trajectory set composed of multiple candidate trajectories of the moving object is determined according to the reference end point and the historical trajectory; finally, the target trajectory of the moving object is determined from the candidate trajectory set. In this way, by considering the position information of the moving object, the reference end point of the moving object is predicted, multiple candidate trajectories along which the moving object may travel are inferred, and an optimal trajectory is selected from the multiple candidate trajectories as the target trajectory along which the moving object travels, so that the future motion trajectory of the moving object is estimated more accurately.
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开实施例的实施例,并与说明书一起用于说明本公开实施例的技术方案。
图1A为可以应用本公开实施例的轨迹预测方法的一种系统架构示意图;
图1B为本公开实施例轨迹预测方法的实现流程示意图;
图1C为本公开实施例轨迹预测方法的实现流程示意图;
图2A为本公开实施例轨迹预测方法的另一实现流程示意图;
图2B为本公开实施例神经网络训练方法的实现流程示意图；
图3A为本公开实施例候选轨迹网络的实现结构示意图;
图3B为本公开实施例候选轨迹网络的实现结构示意图;
图4A为本公开实施例候选轨迹生成的结构示意图;
图4B为本公开实施例在多条参考路线上生成候选轨迹的流程示意图;
图5为本公开实施例轨迹预测装置结构组成示意图;
图6为本公开实施例计算机设备的组成结构示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚，下面将结合本公开实施例中的附图，对本公开实施例的具体技术方案做进一步详细描述。以下实施例用于说明本公开，但不用来限制本公开的范围。
本实施例提出一种轨迹预测方法应用于计算机设备,所述计算机设备可包括移动对象或不移动对象,该方法所实现的功能可以通过计算机设备中的处理器调用程序代码来实现,当然程序代码可以保存在计算机存储介质中,可见,该计算机设备至少包括处理器和存储介质。
图1A为可以应用本公开实施例的轨迹预测方法的一种系统架构示意图；如图1A所示，该系统架构中包括：车辆终端131、网络132和轨迹预测终端133。为支撑一个示例性应用，车辆终端131和轨迹预测终端133可以通过网络132建立通信连接，车辆终端131通过网络132向轨迹预测终端133上报位置信息（或者，轨迹预测终端133自动获取车辆终端131的位置信息），轨迹预测终端133响应于接收到的位置信息，确定该车辆的参考终点的位置信息；然后，通过车辆的位置信息和参考终点的位置信息，预测出多条候选轨迹；最后，轨迹预测终端133从这多条候选轨迹中，选择出车辆的目标轨迹。
作为示例，车辆终端131可以包括车载图像采集设备，轨迹预测终端133可以包括具有视觉信息处理能力的车载视觉处理设备或远程服务器。网络132可以采用有线连接或无线连接方式。其中，在轨迹预测终端133为车载视觉处理设备的情况下，车辆终端131可以通过有线连接的方式与车载视觉处理设备通信连接，例如通过总线进行数据通信；在轨迹预测终端133为远程服务器的情况下，车辆终端131可以通过无线网络与远程服务器进行数据交互。
或者,在一些场景中,车辆终端131可以是带有车载图像采集模组的车载视觉处理设备,具体实现为带有摄像头的车载主机。这时,本公开实施例的轨迹预测方法可以由车辆终端131执行,上述系统架构可以不包含网络和轨迹预测终端。
图1B为本公开实施例轨迹预测方法的实现流程示意图,如图1B所示,结合如图1B所示方法进行说明:
步骤S101,根据移动对象的位置信息,确定移动对象的参考终点的位置信息。
在一些可能的实现方式中，所述移动对象包括：各种各样功能的车辆、各种轮数的车辆、机器人、飞行器、导盲器、智能家居设备或智能玩具等。下面不妨以车辆为例进行说明。所述参考终点包括预设限制类型之外的点，其中，所述预设限制类型至少包括以下之一：道路边缘点、障碍物、行人。也就是说，参考终点不包括道路边缘的点、道路中存在障碍物的点、道路中有行人的点等。移动对象的位置信息，包括：所述移动对象的时序位置信息，或，所述移动对象的历史轨迹。确定参考终点可以通过以下多种方式实现，比如，首先，不使用道路编码后的图像作为网络输入，通过移动对象的位置信息预测参考终点；或者，使用道路编码后的图像作为网络输入，结合移动对象的位置信息，预测出参考终点；然后，通过对预测的参考终点进行约束，比如，事先设定一个特定区域，仅落在该特定区域内的点，才确定为参考终点。在步骤S101中确定参考终点的方法，可以采用任何方法，只要是基于移动对象的位置信息确定的方法即可。
例如,可以采用位置信息确定终点的方法和诸如特征学习或强化学习的机器学习模型。神经网络可以至少输入移动对象的位置信息并输出终点或包括终点的轨迹。在该方法中,可以通过使用移动对象周围的位置信息和环境信息作为神经网络的输入来确定终点。或者,可以使用移动对象的位置信息作为神经网络的输入,并且使用神经网络的输出和移动对象周围的环境信息来确定终点。此外,可以使用移动对象周围的位置信息和环境信息作为神经网络的输入,并使用神经网络的输出和移动对象周围的环境信息来确定参考终点。例如,可以将移动对象的轨迹确定为神经网络的输出,基于移动对象周围的环境信息来调整确定的候选轨迹,以使该候选轨迹不与行人或人行道重叠,确定优化后的候选轨迹中包含的参考终点。
作为确定参考终点的另一种方法,可以采用使用位置信息和移动对象的运动学模型的方法。在该方法中,可以使用位置信息,移动对象的运动模型以及移动对象周围的环境信息来确定参考终点。
在一个具体例子中,所述步骤S101还可以通过以下两种方式实现,一是:首先,对所述车辆的位置信息(比如,历史轨迹)进行采样,得到采样点集合;然后,采用所述预设神经网络对所述采样集合进行特征提取;最后,将提取到的特征输入所述预设神经网络的全连接层,得到所述参考终点。二是:对车辆的位置信息进行采样,结合车辆的当前运行速度,即可预测出车辆在预设时间段内的参考终点。在另一个具体例子中,位置信息可以是以当前时间为时间终点的预设时段内车辆的运行轨迹,比如,以当前时间为终点,3秒内的运行轨迹;然后,以0.3秒为步长,对该3秒内的历史轨迹进行采样,最后,以得到的采样点为先验信息,预测移动对象的参考终点。
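作为上述"以固定步长对历史轨迹进行采样、将采样点作为先验信息"这一过程的一个示意性Python草图（其中的函数名、时长3秒与步长0.3秒等数值均沿用上文示例，并非本公开的具体实现）：

    import numpy as np

    def sample_history(timestamps, positions, horizon=3.0, step=0.3):
        # 从以当前时刻为终点的历史轨迹中，按固定步长取采样点，作为预测参考终点的先验信息
        # timestamps: 形状(T,)的递增时间戳序列，最后一个元素为当前时刻
        # positions:  形状(T, 2)的平面坐标序列
        t_now = timestamps[-1]
        sample_times = np.arange(t_now - horizon, t_now + 1e-9, step)
        xs = np.interp(sample_times, timestamps, positions[:, 0])
        ys = np.interp(sample_times, timestamps, positions[:, 1])
        return np.stack([xs, ys], axis=1)

    # 用法示例：对一条匀速直行的5秒轨迹，取最近3秒、步长0.3秒的采样点
    ts = np.linspace(0.0, 5.0, 51)
    traj = np.stack([ts * 2.0, np.zeros_like(ts)], axis=1)
    p_in = sample_history(ts, traj)
    print(p_in.shape)   # (11, 2)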
步骤S102,根据所述移动对象的位置信息和所述参考终点的位置信息,确定包括多条候选轨迹的候选轨迹集合。
在一些可能的实现方式中,至少二条所述候选轨迹的终点的位置信息与所述参考终点的位置信息不同。也就是说,候选轨迹集合中包括一些(比如,一条候选轨迹)终点的位置信息与参考终点的位置信息相同的候选轨迹,还包括一 些终点的位置信息与参考终点的位置信息不同的候选轨迹;这样,在容差范围内确定多条候选轨迹,既可以使得确定的多条候选轨迹是合理的,而且还丰富了候选轨迹的多样性,进而能够从丰富的候选轨迹中筛选出目标轨迹,提高了预测的目标轨迹的准确率。上述步骤S102可以通过以下过程实现:首先,在包含所述参考终点的预设区域内,确定M个估计终点;所述M个预估计终点中包含所述参考终点。然后,根据所述历史运行轨迹、M个估计终点和N个预设距离,对应生成M×N个候选轨迹,得到所述候选轨迹集合;其中,所述预设距离用于表明历史运行轨迹中最后一个采样点与所述终点连线的中点到所述候选轨迹的距离;M和N均为大于0的整数。所述候选轨迹集合为包含多条候选轨迹的曲线集合。所述候选轨迹的轨迹参数包括:估计终点的坐标和候选轨迹的中点距离历史轨迹最后一个采样点和该估计终点之间连线的距离。轨迹参数的修正值为估计终点的坐标和距离的修正值,基于修正值调整候选轨迹的曲线形状,以使调整后的候选轨迹更趋于合理。
步骤S103,从所述候选轨迹集合中确定所述移动对象的目标轨迹。
在一些可能的实现方式中,按照每一候选轨迹的置信度和行驶信息,从多个候选轨迹中选出目标轨迹,以使移动对象按照该目标轨迹行驶。
在本公开实施例中，能够有效解决相关技术中仅输出车辆未来时间序列对应的离散坐标点、使用离散坐标点表示车辆未来轨迹难以反映车辆未来行驶趋势、从而在实际应用中作用较小的问题。通过预测移动对象的参考终点，推测出该移动对象可能行驶的多条候选轨迹，并从该多条候选轨迹中选择出一条最优的轨迹作为移动对象行驶的目标轨迹，从而更加准确的估计移动对象的未来运动轨迹。
在一些实施例中,可以通过移动对象周围的环境信息,结合该移动对象的位置信息,预测移动对象的参考终点的位置信息,可以通过以下步骤实现:
首先,根据所述移动对象的位置信息获取所述移动对象的环境信息。
比如,根据移动对象的历史轨迹,获取该历史轨迹周围的环境信息。比如,获取该历史轨迹周围的道路信息、障碍物信息、行人信息、交通灯信息、交通标识信息、交通规则信息或其他移动对象信息等;其中,道路信息至少包括:当前路况(比如,拥堵情况)、路宽和道路的路口信息(比如,是否为交叉路口)等;障碍物信息包括:当前道路上是否设有路障,或者其他障碍物等;行人信息包括路上是否有行人以及行人的位置;交通灯信息至少包括:道路上设置的交通灯的数量以及交通灯是否正常工作等;交通标识信息包括:交通灯当前亮的灯的类别和时长等;交通规则信息至少包括:当前道路是靠右行驶还是靠左行驶、是单行道还是双行道、道路可通过的车辆类型等。在一些可能的实现方式中,根据所述移动对象的位置信息获取所述移动对象的环境信息,可以通过以下两种方式实现:
方式一:根据所述移动对象采集的图像信息,确定所述环境信息。
比如,首先通过移动对象上配置的摄像头,采集移动对象的历史轨迹周围的图像(比如,对移动对象周围的环境进行图像采集),得到图像信息,通过分析图像内容,得到该移动对象周围的环境信息。比如,对移动对象进行图像采集之后,得知移动对象的道路信息、障碍物信息、行人信息、交通灯信息等,通过综合分析这些信息,预测移动对象的可能的参考终点的位置信息,并在可能的参考终点中排除包括在预设限制类型中的点,从而得到该移动对象的参考终点。
方式二:根据所述移动对象接收到表征当前环境的通信信息,确定所述环境信息。
比如,移动对象采用通信设备接收其他设备发送的表征当前环境的通信信息,通过分析该通信信息,得到环境信息;其中,通信信息中至少包括该移动设备所处位置的环境参数;比如,道路信息、障碍物信息、行人信息、交通灯信息、交通标识信息、交通规则信息或其他移动对象信息等。
然后,根据所述环境信息确定所述移动对象的参考终点的位置信息。
在一些实施例中,在移动对象处于交叉路口的情况下,可以通过以下步骤确定移动对象的参考终点的位置信息:
第一步,根据所述移动对象的位置信息,确定所述移动对象所处道路的路口信息。
比如,根据移动对象的历史轨迹,判断移动对象沿着该历史轨迹继续运行的前方道路的路口信息,其中路口信息包括:路口数量、路口的交叉情况等。
第二步,响应于所述路口信息表示存在至少二个路口,确定所述移动对象的多个参考终点的位置信息。
这里,在移动对象所处道路的路口为多个路口的情况下,分别确定每一个路口对应的道路上的参考终点,即在每一个路口对应的道路上均预测可能的参考终点的位置信息,而且不同路口的参考终点不同。比如,移动对象所处道路的路口为十字路口,首先,确定十字路口中除移动对象的运行方向的反方向之外的三个路口,然后,分别预测三个路 口对应道路上的参考终点的位置信息。这样,预测得到多个参考终点,然后从这多个参考终点中选出置信度最大的目标终点,减少遗漏参考终点,从而提高了确定的目标轨迹的准确度。
在一些实施例中,为了提高预测的参考终点的准确度,所述步骤S101可以通过以下步骤实现:
第一步,根据所述移动对象的位置信息,确定所述移动对象的至少一条参考路线。
这里,将移动对象的位置信息中包括的当前的路况以及是否处于路口等信息输入神经网络中,预测出多个参考路线。比如,如果位置信息表明移动对象处于直行的单行道路上,那么参考路线有一个,为移动对象的移动方向上的单行道路上的一条路线;如果位置信息表明移动对象处于丁字路口,那么参考路线有三个,分别为沿着移动对象的移动方向的丁字路口中每一条道路上的路线。如果位置信息表明移动对象处于十字路口,那么参考路线有四个,分别为沿着移动对象的移动方向的十字路口中每一条道路上的路线。这样,结合移动对象的位置信息,综合考虑预测出移动对象可能行驶的多条参考路线。
第二步,根据所述参考路线,确定所述参考终点的位置信息。
在一些可能的实现方式中,从这多个参考路线中,确定出一条移动对象最有可能的未来行驶路线,并在该参考路线上,确定出移动对象的参考终点的位置信息。
在一些实施例中,首先,确定所述参考路线的可行驶区域(Freespace)。
比如,确定参考路线的路障信息和道路边缘。这里,确定每一条参考路线上的障碍物,比如,行人、故障车辆或者路障设施等。
然后,根据所述参考路线的路障信息和道路边缘,确定所述参考路线的可行驶区域。
这里,通过考虑每一条参考路线上的路障信息和道路边缘,划定该参考路线上的可行驶区域,比如,将参考路线的道路边缘以内,且路面上无路障的区域作为可行驶区域,这样提高了每一条参考路线上的可行驶区域的有效性。
然后，根据所述移动对象的位置信息，确定所述移动对象在所述可行驶区域中的参考终点的位置信息。这里，可以是根据所述移动对象的历史轨迹，预测所述移动对象在所述参考路线的可行驶区域中的参考终点。比如，在已经确定参考路线的可行驶区域的条件下，依据移动对象的历史轨迹，预测该参考路线上位于可行驶区域内的参考终点。
上述提供了一种预测运行轨迹的终点的方式，在该方式中，网络预测出轨迹的终点后，根据第一步的终点生成候选轨迹，每个候选轨迹对应一个终点，并且候选轨迹的终点不能超出道路，也不能落在有障碍物（如行人）的地方，从而提高了预测的参考终点的有效性。
最后,根据所述参考路线上的参考终点和所述移动对象的位置信息,确定所述参考路线上的候选轨迹集合。
这里，对于每一条参考路线，通过在参考终点的附近预测多个可能存在终点的网格，结合历史轨迹中的最后一个采样点和多个特定的预设距离，确定出多个可能为运行轨迹上的预估计点，通过连接预估计点和多个参考终点，即得到该条参考路线上的候选轨迹集合；这样，得到每一条参考路线上的候选轨迹集合，并且从所述至少一条参考路线上的候选轨迹集合中，确定所述移动对象的目标轨迹。
如此，从多条参考路线上的候选轨迹集合中，通过多次迭代，将最终满足约束条件的候选轨迹，确定为移动对象的最有可能的行驶轨迹，即目标行驶轨迹。
本公开实施例提供一种轨迹预测方法,以应用于移动对象,比如,应用于车辆为例进行说明,图1C为本公开实施例轨迹预测方法的实现流程示意图,如图1C所示,结合如图1C所示方法进行说明:
步骤S111,根据移动对象的历史轨迹,预测所述移动对象的参考终点。
步骤S112，在包含参考终点的预设区域内，确定M个估计终点。
在一些可能的实现方式中,对于每一条参考路线,都在该参考路线的包含参考终点的预设区域内,确定出M个估计终点。所述预设区域为所述参考终点的周围区域,比如,以参考终点为中心,边长为100m的正方形,然后按照步长为5m,将该正方形划分为多个正方形网格,每一个网格的中心即为估计终点。首先,根据所述参考终点所处道路的宽度,确定所述参考终点的预设区域,比如,将参考路线的道路边缘以内的路面中包含参考终点的区域,作为预设区域;在一个具体例子中,道路宽度为4米,将包含参考终点的宽度为4米,长度为100米的区域作为预设区域;然后,将所述参考终点的预设区域划分为M个预定尺寸的网格,将M个网格的中心作为所述M个估计终点。比如,采用同等尺寸的网格,将网格的尺寸设置为10厘米。这样,将M个网格的中心作为所述M个估计终点,即候选轨迹的可能的终点,就得到了每一条参考路上的候选轨迹的可能的终点。
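下面给出按道路宽度确定预设区域、划分网格并取网格中心作为估计终点的一个示意性Python草图（坐标系的组织方式与函数名为假设，区域尺寸与网格边长沿用上文示例）：

    import numpy as np

    def grid_endpoints(ref_end, road_width=4.0, region_len=100.0, cell=0.1, heading=(1.0, 0.0)):
        # 在包含参考终点的预设区域内划分网格，返回M个网格中心作为估计终点
        # ref_end: 参考终点坐标 (x, y)
        # road_width/region_len: 预设区域的宽度与沿道路方向的长度，沿用上文示例取4米×100米
        # cell: 网格边长，沿用上文示例取10厘米
        # heading: 道路方向的单位向量（假设已知）
        h = np.asarray(heading, dtype=float)
        h /= np.linalg.norm(h)
        n = np.array([-h[1], h[0]])        # 道路法向
        us = np.arange(-region_len / 2 + cell / 2, region_len / 2, cell)   # 沿道路方向
        vs = np.arange(-road_width / 2 + cell / 2, road_width / 2, cell)   # 垂直道路方向
        centers = [np.asarray(ref_end) + u * h + v * n for u in us for v in vs]
        return np.stack(centers)           # 形状 (M, 2)

    ends = grid_endpoints((50.0, 0.0))
    print(ends.shape)   # 示例取值下共 1000 × 40 = 40000 个估计终点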
步骤S113,根据所述移动对象的位置信息、所述M个估计终点和N个预设距离,对应生成M×N个候选轨迹,得到所述候选轨迹集合。
在一些可能的实现方式中,所述预设距离用于表明所述移动对象的位置信息中最后一个采样点与所述参考终点连线的中点到所述候选轨迹的距离;其中,M和N均为大于0的整数。在一个具体例子中,首先,确定所述移动对象的位置信息中的最后一个采样点与所述参考终点连线的中点;其次,根据所述N个预设距离和所述中点,确定N个预估计点;该预估点为候选轨迹上的点,由于预设距离为移动对象的位置信息(比如,历史轨迹)中最后一个采样点与所述参考终点连线的中点到所述候选轨迹的距离,那么中点和预设距离确定之后,即可确定以该中点为垂足,距离该中点满足预设距离的预估计点;从而,N个预设距离即对应N个预估计点。再次,根据N个预估计点和M个估计终点中的每一估计终点,生成M×N个候选轨迹;即,基于N个预估计点和一个估计终点,可拟合得到N个候选轨迹,那么基于N个预估计点和M个估计终点,可拟合得到M×N个候选轨迹。最后,根据所述环境信息,对所述M×N个候选轨迹进行筛选,得到所述候选轨迹集合。其中,这些环境信息可以从图像中获取。比如,从图像中检测到障碍物,那候选轨迹就不能穿过障碍物。还可以在图像中获取道路信息,也可以在设置候选轨迹时使用。在生成候选轨迹的时候考虑周围的环境信息,设定约束条件,将M×N个候选轨迹中不满足约束条件的轨迹剔除,得到候选轨迹集合;比如,将穿过障碍物的候选轨迹剔除。
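结合上述过程，以下给出一个"由中点与N个预设距离确定预估计点、与M个估计终点拟合三次曲线并按环境信息筛选"的示意性Python草图（三次曲线的拟合方式与障碍物判断函数均为举例假设）：

    import numpy as np

    def fit_cubic(points):
        # 对一组平面点拟合三次多项式 y = ax^3 + bx^2 + cx + d
        pts = np.asarray(points, dtype=float)
        return np.poly1d(np.polyfit(pts[:, 0], pts[:, 1], 3))

    def make_candidates(history, ref_end, est_ends, offsets, blocked=lambda xy: False):
        # history:  (T, 2) 历史轨迹采样点，最后一行为最后一个采样点
        # ref_end:  参考终点 (x, y)；est_ends: (M, 2) 估计终点
        # offsets:  长度为N的预设距离列表，表示中点到候选轨迹的带符号距离
        # blocked:  环境约束函数，给定坐标返回该点是否被障碍物占据（此处默认恒为False）
        last = np.asarray(history[-1], dtype=float)
        ref = np.asarray(ref_end, dtype=float)
        mid = (last + ref) / 2.0
        d = ref - last
        normal = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-9)   # 连线的法向
        candidates = []
        for end in est_ends:                    # M 个估计终点
            for gamma in offsets:               # N 个预设距离
                ctrl = mid + gamma * normal     # 预估计点：以中点为垂足、距离为gamma
                pts = np.vstack([history, ctrl, end])
                curve = fit_cubic(pts)
                xs = np.linspace(last[0], end[0], 30)
                traj = np.stack([xs, curve(xs)], axis=1)
                if not any(blocked(p) for p in traj):   # 剔除穿过障碍物的轨迹
                    candidates.append(traj)
        return candidates

    history = np.array([[0.0, 0.0], [2.0, 0.1], [4.0, 0.1], [6.0, 0.2]])
    est_ends = np.array([[30.0, 0.0], [30.0, 2.0], [30.0, -2.0]])
    cands = make_candidates(history, (30.0, 0.0), est_ends, offsets=[-2, -1, 0, 1, 2])
    print(len(cands))   # 至多 3 × 5 = 15 条候选轨迹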
上述步骤S112和步骤S113给出了一种实现“确定由所述移动对象的多条候选轨迹组成的候选轨迹集合”的方式,在该方式中,通过在参考终点的周围确定多个可能是候选轨迹终点的估计终点,然后基于估计终点和预设距离,拟合得到多条选轨迹,这样采用曲线的表示方式来预测车辆的目标轨迹既可以反映轨迹的趋势,而且对噪声较为鲁棒,同时可扩展性较强。
步骤S114,确定所述候选轨迹集合中至少一候选轨迹的轨迹参数修正值。
在一些可能的实现方式中,所述步骤S114可以基于训练好的神经网络来输出每一候选轨迹的轨迹参数修正值,还可以采用但不限于下文提及的训练方法训练的神经网络来输出所述参数修正值。所述轨迹参数可以包括用于描述轨迹曲线的参数,例如轨迹参数可以包括但不限于:用于描述轨迹曲线的端点的坐标,和/或,轨迹曲线的中点与轨迹曲线两个端点的连线之间的距离等等。根据所述轨迹参数修正值对所述候选轨迹进行调整,以提升得到的目标轨迹的合理性,比如,修正值可包括但不限于:轨迹曲线的端点的坐标的调整值,和/或,轨迹曲线的中点与轨迹曲线两个端点的连线之间的距离的调整值。所述轨迹参数修正值可以是由本公开实施例训练得到的神经网络确定的,还可以是由以其他方式训练得到的神经网络确定的。
步骤S115,根据所述轨迹参数修正值对所述候选轨迹集合中的候选轨迹进行调整,得到更新的候选轨迹集合。
在一些可能的实现方式中,基于轨迹曲线的端点的坐标的调整值以及轨迹曲线的中点与轨迹曲线两个端点的连线之间的距离的调整值,对候选轨迹集合中的每一个候选轨迹进行修正,得到多个修正的候选轨迹,即更新的候选轨迹集合。如此,基于训练的神经网络输出的修正值对候选轨迹进行修正,从而提高了更新的候选集合中的候选轨迹的准确度。
步骤S116,根据所述移动对象的行驶信息和所述置信度,从所述候选轨迹集合中确定所述移动对象的所述目标轨迹。
在一些可能的实现方式中，从更新的候选轨迹集合中筛选得到所述目标轨迹。所述移动对象的行驶信息至少包括所述移动对象的路面信息和/或所述移动对象的控制信息；比如，路面信息包括：路面宽度、路面边缘和路面上的中心线等。移动对象的控制信息包括：行驶方向、行驶速度和车灯状态（比如，转向灯的状态）等。这里，首先，根据所述路面信息，确定所述移动对象的预测可行驶区域；所述可行驶区域如图3A所示的车辆的可行驶区域46。其中，所述路面信息至少包括：路面是否为同向车道、路面的宽度和路面的交叉口等；比如，路面信息表明该路段是同向车道且不是交叉口，那么预测可行驶区域最大为该车辆前方覆盖整个路面的区域，即该区域是单向的；如果路面信息表明该路段为十字路口，那么预测可行驶区域最大为该车辆四周覆盖整个路面的区域，即该区域是包含十字路口的三个方向（左转、直行和右转）的。
其次,确定所述更新的候选轨迹集合中未包含于所述预测可行驶区域中的待调整的候选轨迹;比如,确定候选轨迹中未包含在可行驶区域46中的候选轨迹。
再次，降低所述待调整的候选轨迹的置信度，得到调整的候选轨迹集合；与此同时，要增大包含在可行驶区域中的候选轨迹的置信度，这样能够更加明确的表明哪些候选轨迹最贴近最终的目标轨迹。
最后,根据所述移动对象的控制信息,对所述预测可行驶区域进行调整,得到所述可行驶区域;根据所述移动对象的控制信息对所述预测可行驶区域进行缩小,得到更加精准的可行驶区域。比如,路面信息表明该路段为十字路口,预设可行驶区域是包含十字路口的三个方向(左转、直行和右转)的,但是控制信息表明车辆要左转,那么预测可行驶的覆盖面积可以从覆盖三个方向(左转、直行和右转)缩小为只覆盖左转的方向,这样进一步的精确可行驶区域的覆盖面积,从而能够更加准确的确定出最终的车辆的目标轨迹。
第二步,根据所述可行驶区域和所述置信度,从所述更新的候选轨迹集合中确定所述移动对象的目标轨迹。
首先,确定所述更新的候选轨迹集合中包含于所述可行驶区域的候选轨迹,得到所述待确定目标轨迹集合;然后,将所述待确定目标轨迹集合中置信度大于预设置信度阈值的轨迹,确定为所述目标轨迹;比如,将该确定目标轨迹集合中置信度最大的候选轨迹,作为目标轨迹,这样充分提高了预测的车辆的目标轨迹的准确度。这样,将历史轨迹先预测出候选轨迹集合,然后根据控制信息和路面进行进一步的缩小候选轨迹应该归属的可行驶区域,从而达到筛选候选轨迹的目的。
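上述"先按可行驶区域筛选、再取置信度最大的候选轨迹"的选取过程，可用如下示意性Python草图表达（可行驶区域的表示方式、函数名与示例数值均为假设）：

    import numpy as np

    def select_target(candidates, confidences, in_drivable_area):
        # candidates:       候选轨迹列表，每条为 (T, 2) 数组
        # confidences:      与候选轨迹一一对应的置信度
        # in_drivable_area: 函数，判断一个坐标点是否位于可行驶区域内
        kept = [(traj, c) for traj, c in zip(candidates, confidences)
                if all(in_drivable_area(p) for p in traj)]   # 待确定目标轨迹集合
        if not kept:
            return None
        # 也可改为保留所有置信度大于预设阈值的轨迹；此处取置信度最大的一条作为目标轨迹
        return max(kept, key=lambda tc: tc[1])[0]

    # 用法示例：把可行驶区域近似为 y ∈ [-2, 2] 的同向车道
    drivable = lambda p: -2.0 <= p[1] <= 2.0
    cands = [np.array([[0.0, 0.0], [10.0, 0.5]]), np.array([[0.0, 0.0], [10.0, 5.0]])]
    target = select_target(cands, [0.4, 0.9], drivable)   # 第二条越出可行驶区域，故返回第一条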
在一些实施例中,步骤S116可以通过以下步骤实现:
步骤S161,确定所述候选轨迹集合中的候选轨迹的置信度。
在一些可能的实现方式中,所述置信度用于表明候选轨迹是目标轨迹的概率。所述置信度可以是由本公开实施例训练得到的神经网络确定的,还可以是由以其他方式训练得到的神经网络确定的。
步骤S162,根据所述移动对象的行驶信息和所述置信度,从所述候选轨迹集合中确定移动对象的所述目标轨迹。
在一些可能的实现方式中,将移动对象的路面信息和移动对象的控制信息中的一种或者二者的结合作为先验信息,对候选轨迹进行修正,使得最终得到的目标轨迹更加具有合理性。所述移动对象的控制信息可包括但不限于以下至少之一,发动机的运行状态、方向盘转向信息或速度控制信息(如减速、加速或刹车)。
所述步骤S162可以通过以下两个步骤实现:
第一步,根据所述移动对象的环境信息和/或所述移动对象的控制信息,确定所述移动对象的可行驶区域(Freespace)。
这里,移动对象的环境信息可以是路面信息,确定所述移动对象的可行驶区域包括以下多种方式:方式一:根据移动对象所在道路的路面信息确定所述移动对象的可行驶区域;方式二:根据移动对象的控制信息,确定所述移动对象的可行驶区域;方式三:根据所述移动对象所在道路的路面信息和/或所述移动对象的控制信息,确定所述移动对象的可行驶区域。所述路面信息是指车辆当前运行时间所处的路面的信息,所述控制信息是在采集车辆的历史轨迹对应的时刻的车灯情况。比如,在采集历史轨迹的时刻车灯显示右转,那么控制信息即为右转,从而确定车辆的可行驶区域为右转所对应的路面区域。所述可行驶区域可以理解为供移动对象行驶的区域,比如,无障碍且允许通行的路面区域。
首先,根据所述路面信息,确定所述移动对象的预测可行驶区域。
所述可行驶区域如图3A所示,车辆的可行驶区域46。其中,所述路面信息可以但不限于包括以下至少之一:路面是否为同向车道、路面的宽度和路面的交叉口等;比如,路面信息表明该路段是同向车道且不是交叉口,那么预测可行驶区域最大为该车辆前方覆盖整个路面的区域,即该区域是单向的;如果路面信息表明该路段为十字路口,那么预测可行驶区域最大为该车辆四周覆盖整个路面的区域,即该区域是包含十字路口的三个方向(左转、直行和右转)。
其次,确定所述更新的候选轨迹集合中未包含于所述预测可行驶区域中的待调整的候选轨迹;比如,确定候选轨迹中未包含在可行驶区域46中的候选轨迹。
再次，降低所述待调整的候选轨迹的置信度，得到调整的候选轨迹集合；与此同时，要增大包含在可行驶区域中的候选轨迹的置信度，这样能够更加明确的表明哪些候选轨迹最贴近最终的目标轨迹。
最后，根据所述移动对象的控制信息，对所述预测可行驶区域进行调整，得到所述可行驶区域；比如，路面信息表明该路段为十字路口，预测可行驶区域是包含十字路口的三个方向（左转、直行和右转）的，但是控制信息表明车辆要左转，那么预测可行驶区域的覆盖面积可以从覆盖三个方向（左转、直行和右转）缩小为只覆盖左转的方向，这样进一步精确可行驶区域的覆盖面积，从而能够更加准确的确定出最终的车辆的目标轨迹。
第二步,根据所述可行驶区域和所述置信度,从所述更新的候选轨迹集合中确定所述移动对象的目标轨迹。
在一些可能的实现方式中,对所述更新的候选轨迹集合中的候选轨迹进行筛选,得到所述目标轨迹。首先,确定所述更新的候选轨迹集合中包含于所述可行驶区域的候选轨迹,得到所述待确定目标轨迹集合;然后,将所述待确定 目标轨迹集合中置信度大于预设置信度阈值的轨迹,确定为所述目标轨迹;比如,将该确定目标轨迹集合中置信度最大的候选轨迹,作为目标轨迹,这样充分提高了预测的车辆的目标轨迹的准确度。在一个具体例子中,以当前时间为时间终点,获取预设时段内车辆的运行轨迹作为历史轨迹,比如,3秒内的运行轨迹;然后,以这3秒内的历史轨迹和这3秒内车灯的朝向,作为先验信息,预测车辆在未来预设时段内的目标运行轨迹,比如,预测未来3秒内的运行轨迹;以此,为自动行驶的车辆提供准确度较高的未来行驶轨迹。
在本公开实施例中,通过采用车辆的控制信息对车辆的可行驶区域进行缩小,而且将包含在可行驶区域中的候选轨迹中置信度最大的候选轨迹,作为车辆的目标轨迹,从而使得预测结果更可靠,提高了实际应用的安全性。
本公开实施例提供一种轨迹预测方法，在该方法中，步骤S101可采用训练好的神经网络对移动对象的参考终点进行预测，如图2A所示，图2A为本公开实施例轨迹预测方法的另一实现流程示意图，结合图2A进行以下说明：
步骤S201,通过神经网络根据所述移动对象的位置信息预测所述移动对象的候选终点。
在一些可能的实现方式中,该神经网络为训练好的神经网络,可以采用以下多种方式训练得到:
方式一:首先,将所述移动对象的位置信息,和/或,所述移动对象的位置信息和移动对象采集的道路图像输入神经网络中,得到第一预测终点。
比如,将移动对象的位置信息作为神经网络的输入,预测出第一预测终点;或者,将移动对象的位置信息和移动对象采集的道路图像作为神经网络的输入,以预测第一预测终点。
其次,根据所述移动对象的真值轨迹,确定所述神经网络关于第一预测终点的第一预测损失。
在一些可能的实现方式中:将所述移动对象的位置信息,和/或,所述移动对象的位置信息和移动对象采集的道路图像输入神经网络中,得到多条候选轨迹,然后估计每一候选的粗略置信度,然后,结合真值轨迹确定候选轨迹集合中每一轨迹的准确度,将这一准确度反馈给神经网络,以使神经网络调整如权值参数等网络参数,从而提升神经网络分类的准确度。比如,得到100个候选轨迹,首先使用神经网络进行卷积反卷积等操作,得到这100个候选轨迹的置信度;由于在训练阶段,神经网络的参数是随机初始化的,这样就导致这100个候选轨迹的粗略估计的置信度也是随机的,那么如果想要提升神经网络预测的候选轨迹正确度,就需要告诉神经网络100个候选轨迹哪些是对的哪些是错的。基于此,采用对比函数,将100个候选轨迹与真值轨迹进行比较,如果候选轨迹与真值轨迹的相似度大于预设相似度阈值输出1,否则的输出0,这样对比函数将输出100个比对值(0,1)值;接下来,将这100个比对值输入神经网络,以使神经网络中采用损失函数对候选轨迹进行监督,从而对于对比值为1的候选轨迹增大该候选轨迹的置信度,对于对比值为0的候选轨迹减小该候选轨迹的置信度;如此,得到每一所述候选轨迹置信度,即得到所述候选轨迹的分类结果。最终,采用所述分类结果对应的轨迹预测损失,对所述神经网络的权值参数进行调整。
最后,根据所述第一预测损失,对所述神经网络的网络参数进行调整,以训练所述神经网络。
比如,所述权值参数为神经网络中神经元权重等。所述第一预测损失为第一类候选轨迹样本(比如,正样本)和所述第二类候选轨迹样本(比如,负样本)的交叉熵损失。采用该预测损失对神经网络的权重等参数进行调整,从而使得调整后的神经网络分类结果更加准确。
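作为上述方式一中"比对函数产生0/1标签、再以交叉熵损失调整网络权值"流程的一个示意性PyTorch草图（相似度度量、阈值与打分网络结构均为举例假设，并非本公开的具体网络）：

    import torch
    import torch.nn as nn

    def compare_labels(candidates, gt, sim_threshold=0.7):
        # 比对函数：候选轨迹与真值轨迹的相似度大于阈值输出1，否则输出0
        # candidates: (K, T, 2) 候选轨迹；gt: (T, 2) 真值轨迹
        # 此处用平均点距的负指数作为相似度，仅为示意
        dist = torch.linalg.norm(candidates - gt.unsqueeze(0), dim=-1).mean(dim=-1)
        sim = torch.exp(-dist)
        return (sim > sim_threshold).float()

    # 一个只含全连接层的打分网络，输入展平后的候选轨迹，输出其置信度(logit)
    scorer = nn.Sequential(nn.Linear(2 * 10, 64), nn.ReLU(), nn.Linear(64, 1))

    candidates = torch.randn(100, 10, 2)        # 100条候选轨迹，每条10个点
    gt = torch.zeros(10, 2)                     # 真值轨迹
    labels = compare_labels(candidates, gt)     # 100个比对值（0或1）

    logits = scorer(candidates.flatten(1)).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)  # 第一预测损失
    loss.backward()                             # 根据损失调整网络的权值参数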
方式二:首先,将所述移动对象的位置信息和所述位置信息对应的地图信息输入所述神经网络中,得到第二预测终点。
在一些实施例中,地图信息至少包括当前道路的地理位置、路面宽度、道路边缘和路障信息等。
其次,根据所述移动对象的真值轨迹,确定所述神经网络关于所述第二预测终点的第二预测损失。
比如,将真值轨迹和第二预测终点作比较,确定出神经网络关于所述第二预测终点的第二预测损失。
再次,确定所述第二预测终点与预设的约束条件之间的偏差。
在一些实施例中,预设的约束条件包括道路中能够存在预测终点的区域,比如,道路中除去道路边缘点、障碍物和行人等之外的区域;比如,确定第二预测终点与能够存在预测终点的区域之间的偏差。
再次,根据所述偏差,对所述第二预测终点的第二预测损失进行调整,得到第三预测损失。
比如,该偏差比较大时,说明第二预测终点严重偏离能够存在预测终点的区域,将第二预测损失适当调大,以调整神经网络的网络参数。
最后,根据所述第三预测损失,对所述神经网络的网络参数进行调整,以训练所述神经网络。
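下面以一个示意性Python草图说明方式二中"按第二预测终点与约束条件之间的偏差调整损失、得到第三预测损失"的思路（偏差的度量方式与放大系数均为举例假设）：

    import torch

    def constrained_endpoint_loss(pred_end, gt_end, dist_to_valid_region, weight=0.5):
        # pred_end: (B, 2) 第二预测终点；gt_end: (B, 2) 真值轨迹的终点
        # dist_to_valid_region: (B,) 预测终点到"允许存在终点的区域"的距离，位于区域内时为0
        second_loss = torch.linalg.norm(pred_end - gt_end, dim=-1)        # 第二预测损失
        third_loss = second_loss * (1.0 + weight * dist_to_valid_region)  # 偏差越大，损失放大越多
        return third_loss.mean()

    pred = torch.tensor([[3.0, 1.0]], requires_grad=True)
    loss = constrained_endpoint_loss(pred, torch.tensor([[2.0, 0.0]]), torch.tensor([1.5]))
    loss.backward()   # 根据第三预测损失调整网络参数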
上述方式一和方式二是对神经网络的训练过程，基于移动对象的位置信息和预测损失，进行多次迭代，以使训练后的神经网络输出的候选轨迹的轨迹预测损失满足收敛条件，从而使得该神经网络输出的目标轨迹准确度更高。
步骤S202,根据所述候选终点确定所述移动对象的参考终点的位置信息。
在一些实施例中,根据神经网络输出的候选终点,确定出参考终点的位置信息,或者,将神经网络输出的候选终点和环境信息相结合,确定参考终点的位置信息。
在一些实施例中,步骤S201和步骤S202可以通过两种方式实现:
方式一:首先,将所述移动对象的位置信息输入第一神经网络以预测所述移动对象的第一候选终点。
比如,将移动对象的时序位置信息,或者,历史轨迹输入第一神经网络中,预测出移动对象的第一候选终点。
然后,根据所述第一候选终点和所述移动对象的环境信息,确定所述移动对象的参考终点的位置信息。
比如,将第一候选终点与移动对象的道路信息、障碍物信息、行人信息、交通灯信息、交通标识信息、交通规则信息和其他移动对象信息等环境信息,相结合,进行综合分析,将预测到的与行人或人行道重叠或者超出道路边缘的第一候选终点进行调整,以得到准确率较高的参考终点的位置信息。
方式二:首先,将所述移动对象的位置信息和环境信息输入第二神经网络以预测所述移动对象的第二候选终点。
比如,将移动对象的历史轨迹、道路信息、障碍物信息、行人信息、交通灯信息、交通标识信息、交通规则信息和其他移动对象信息等作为第二神经网络的输入,预测出移动对象的第二候选终点。
然后,根据所述第二候选终点和所述环境信息,确定所述移动对象的所述参考终点的位置信息。
比如,结合环境信息判断预测出的第二候选终点是否在可行驶区域内,在一个具体例子中,判断第二候选终点是否位于道路上的存在障碍物或者有行人的地方。也就是说,基于移动对象周围的环境信息来调整确定的第二候选终点,以使该第二候选终点不与行人或人行道重叠,确定调整后的轨迹中包含的运行终点。在本公开实施例中,采用真值轨迹、候选轨迹集合和预测损失对神经网络进行训练,使得训练好的神经网络能够输出更加贴近真值轨迹的目标轨迹,从而能够更好的应用于对移动对象的未来目标轨迹的预测,并提高了预测的目标轨迹的准确度。
本公开实施例提供一种轨迹预测的训练方法，图2B为本公开实施例神经网络训练方法的实现流程示意图，如图2B所示，结合图2B进行以下说明：
步骤S211,根据获取的移动对象的位置信息,确定所述移动对象的参考终点。
比如,根据移动对象的历史轨迹,确定移动对象的参考终点。
步骤S212,在包含参考终点的预设区域内,确定M个估计终点。
在一些可能的实现方式中,对于每一条参考路线,都在该参考路线的包含参考终点的预设区域内,确定出M个估计终点。首先,根据每一所述参考路线的宽度,确定每一所述参考路线的参考终点的预设区域;然后,将每一所述参考路线的参考终点的预设区域划分为M个同等尺寸的网格,将M个网格的中心作为所述M个估计终点。
步骤S213,根据所述历史轨迹、所述M个估计终点和N个预设距离,对应生成M×N个候选轨迹,得到所述候选轨迹集合。
在一些可能的实现方式中,所述预设距离用于表明历史轨迹中最后一个采样点与所述参考终点连线的中点到所述候选轨迹的距离;其中,M和N均为大于0的整数。在一个具体例子中,首先,确定所述历史轨迹中的最后一个采样点与所述参考终点连线的中点;其次,根据所述N个预设距离和所述中点,确定N个预估计点;该预估点为候选轨迹上的点,由于预设距离为历史轨迹中最后一个采样点与所述参考终点连线的中点到所述候选轨迹的距离,那么中点和预设距离确定之后,即可确定以该中点为垂足,距离该重点满足预设距离的预估计点;从而,N个预设距离即对应N个预估计点。再次,根据所述N个预估计点和所述M个估计终点中的每一估计终点,生成M×N个候选轨迹;最后,根据所述位置信息中的环境信息,对所述M×N个候选轨迹进行筛选,得到所述候选轨迹集合。即,基于N个预估计点和一个估计终点,可拟合得到N个候选轨迹,那么基于N个预估计点和M个估计终点,可拟合得到M×N个候选轨迹。
步骤S214,确定所述候选轨迹集合中每一候选轨迹与所述真值轨迹之间的平均距离。
在一些可能的实现方式中，首先确定每一候选轨迹与真值轨迹之间的距离，然后对得到的多个距离求平均。
步骤S215,将所述平均距离小于预设距离阈值的候选轨迹,确定为第一类候选轨迹样本。
在一些可能的实现方式中,平均距离小于预设距离阈值的候选轨迹表明,该候选轨迹与真值轨迹的差距较小,第一类候选轨迹样本还可以理解为是对比函数中输出值为1的候选轨迹。
步骤S216,将所述候选轨迹集合中除所述第一类候选轨迹样本之外的至少部分候选轨迹,确定为第二类候选轨迹样本。
在一些可能的实现方式中，首先，确定候选轨迹集合中除所述第一类候选轨迹样本之外的全部或部分候选轨迹，为第二类候选轨迹样本。例如，按照第二类候选轨迹样本与第一类候选轨迹样本3:1的比例，确定第二类候选轨迹样本的数量。第一类候选轨迹样本相对于第二类候选轨迹样本而言，更为接近真值轨迹，从某种角度也可以理解为，第一类候选轨迹样本相对于第二类候选轨迹样本更为可信。这样，将第二类候选轨迹样本与第一类候选轨迹样本的比例取为3:1，能够减少因第二类候选轨迹样本数量过多而对分类结果对应的轨迹预测损失产生主导作用、进而导致神经网络训练结果不理想的情况。
上述步骤S214至步骤S216给出了一种实现将所述移动对象的真值轨迹和所述候选轨迹进行比对,以确定候选轨迹分类结果的方式,在该方式中通过确定第一类候选轨迹样本和第二类候选轨迹样本,完成了对候选轨迹进行分类的过程。在一些可能的实现方式中,将真值轨迹和候选轨迹均输入到比对函数中,如果候选轨迹与真值轨迹的相似度大于预设相似度阈值,对比函数输出1,否则,对比函数输出0。这样,进一步提升了分类的准确度。
步骤S217,确定所述第一类候选轨迹样本和所述第二类候选轨迹样本的交叉熵损失,所述交叉熵损失为所述轨迹预测损失。
步骤S218，采用所述分类结果对应的轨迹预测损失，对所述神经网络中的网络参数进行调整，以训练所述神经网络。
在本公开实施例中,采用数量较大的数据集轨迹对神经网络进行训练,由于该数据集包含场景复杂的城市场景,以自动驾驶车辆视角对数据进行采集,更贴近实际应用,所以基于该数据集训练的神经网络适用于各种场景的轨迹预测,从而使得训练好的神经网络预测的目标轨迹准确度更高。
在其他实施例中,在步骤S213之后,所述方法还包括:
步骤S231,采用所述神经网络,确定所述候选轨迹的轨迹参数调整值。
在一些可能的实现方式中,所述调整值可以是神经网络预测的候选轨迹与真值轨迹之间的预测偏差。
步骤S232,确定所述候选轨迹与真值轨迹之间的偏差。
在一些可能的实现方式中,偏差可以是真值轨迹的终点坐标和所述候选轨迹的终点坐标之间的真实差值。
步骤S233,根据所述偏差和调整值,确定调整的预测损失。
步骤S234,采用所述调整的损失,对所述预设的神经网络的权值参数进行调整,以使调整后的所述预设的神经网络输出的预测损失满足收敛条件。
在一些可能的实现方式中,所述调整损失为欧式距离损失,基于该欧式距离损失,对神经网络的权值参数进行调整,从而使得候选轨迹与真值轨迹的差距更小。
在本公开实施例中提供了一种知识候选轨迹网络,将先验知识整合到车辆轨迹预测中。首先,将车辆轨迹建模为由终点和距离参数r参数化的连续曲线,车辆轨迹模型对噪声具有鲁棒性,并提供了一种更灵活的方式将先验知识整合到轨迹中预测;然后,将车辆轨迹预测制定为候选轨迹生成和细化任务。将各种观察编码到网络中,生成基本特征。基于这些特征,生成一组初始候选轨迹,并在诸如路面信息和控制信息的先验信息的指导下执行候选轨迹约束,通过设置两个附加模块(分类模块和细化模块,其中,分类模块选择最佳候选轨迹,细化模块进行轨迹回归和预测终点),生成最终轨迹预测。如此,能够更自然地反映运动和意图,且抗噪声性能更强大,能够更灵活地将先验信息结合到学习渠道中。
同时,本公开实施例为了评估所提出的方法并更好地促进自动驾驶中的车辆预测研究,提供了大规模的车辆轨迹数据集以及新的评估标准。这个新的数据集在复杂的城市驾驶场景中包含数百万个车辆轨迹,每个车辆的信息更加丰富,例如车辆的控制信息和/或至少部分车辆的道路结构信息等。
本公开实施例进行不同时间长度的实验并计算长度为T的轨迹的拟合误差。例如，时间T设置为6秒。在本公开实施例中，对预测的点进行三次曲线拟合之后得到的拟合轨迹，即候选轨迹。
具有准确性和复杂性平衡的三次拟合曲线,如公式1所示:
y = ax^3 + bx^2 + cx + d       (1);
在公式(1)中,与大于2米每秒(m/s)的速度相比,总拟合误差0.29米(m)是可忽略的。由于曲线对参数敏感且难以优化,本公开实施例使用两个控制点,即终点和预设距离γ,以及历史轨迹上的采样点来表示曲线。
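公式(1)的三次拟合可以直接用最小二乘实现；以下Python草图示意如何对一条轨迹做三次曲线拟合并计算平均拟合误差（示例轨迹与函数名均为假设，仅用于说明拟合误差相对于行驶距离可忽略这一点）：

    import numpy as np

    def cubic_fit_error(traj):
        # 对一条轨迹做三次曲线拟合 y = ax^3 + bx^2 + cx + d，返回平均拟合误差（米）
        x, y = traj[:, 0], traj[:, 1]
        coeffs = np.polyfit(x, y, 3)
        return float(np.mean(np.abs(y - np.polyval(coeffs, x))))

    # 用法示例：对一条6秒、10Hz采样、近似变道的S形轨迹计算拟合误差
    t = np.linspace(0.0, 6.0, 61)
    traj = np.stack([t * 8.0, 1.5 / (1.0 + np.exp(-2.0 * (t - 3.0)))], axis=1)
    print(cubic_fit_error(traj))   # 拟合误差远小于整段轨迹的行驶距离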
图3A为本公开实施例候选轨迹网络的实现结构示意图，如图3A所示，获取车辆41的位置信息p_in、控制信息l_in、车辆朝向信息d_in以及路面信息（比如，限行、道路宽度和平时的堵车情况等），这些信息都是自动驾驶系统的检测结果，除道路信息外都是待预测车辆的历史时段信息。道路信息是当前时刻待预测车辆周围的地图信息。通过基本特征编码模块生成基本特征42，基于这些基本特征，预测未来的终点，得到参考终点。通过遍历可能的终点和γ，得到一组作为候选轨迹的三次拟合曲线43；然后，将路面信息和路面上其他车辆的车灯状态420，作为约束条件，以约束所生成的候选轨迹，得到候选轨迹44（即包括多条候选轨迹的候选轨迹集合）。接下来，对候选轨迹44进行分类，得到分类结果45（其中包括第一类候选轨迹样本和第二类候选轨迹样本），并采用卷积层生成候选轨迹特征。之后，分类模块根据基本特征和候选轨迹特征，确定车辆的可行驶区域46。通过创建一组可能的候选轨迹，本公开实施例通过知识候选轨迹网络选择回归更容易学习的合理轨迹，而且，先验知识可以更灵活和明确，并使轨迹更可靠。
图3B为本公开实施例候选轨迹网络的实现结构示意图，如图3B所示，整个过程分为两个阶段，在第一阶段81中，在基本特征编码模块808中，将历史轨迹P_obs 801和周围道路信息r_Tobs 803输入编码网络CNN 802，输出粗略预测的终点82，从图3B可以看出，通过输入道路周围的道路信息结合道路中线813，当车辆处于十字路口时，预测出多条参考路线，针对每一条参考路线，可以预测出在该十字路口的每一条道路上的可能的参考终点；通过考虑道路信息中的路障信息或者行人信息，以及道路的宽度等信息，在终点回归模块809中对输出的终点进行约束，得到回归的终点812，这样将对粗略的终点进行回归以减少搜索空间，然后在候选轨迹生成模块810中生成每一条参考路线上的候选轨迹（比如，得到包含多条候选轨迹的候选轨迹集合）。比如，图3B中参考路线804上的候选轨迹83。在第二阶段84的候选轨迹修正模块811中，将候选轨迹83输入用于分类的网络（CNN-ED）85中，进行分类86和修正87，输出最大置信度88，得到车辆的预测位置814，基于这些预测位置即可以生成最终的运行轨迹。在对候选轨迹进行修正的过程中，结合道路的路障信息和路宽，划定可行驶区域，将位于可行驶区域之外的候选轨迹剔除，即图3B所示的虚线的候选轨迹是位于可行驶区域之外的候选轨迹，将对其进行剔除。从图3B可以看出，车辆在未来几分钟行驶的实际的未来位置815与预测位置814基本吻合，这样通过在第一阶段完成预测轨迹的生成过程，基于深度学习的预测方法可以更具解释性和灵活性。给定生成的预测轨迹，知识候选轨迹网络的第二阶段通过选择最合理的轨迹，能够简化预测问题。此外，通过检查两级输出，可以方便地调试和解释可能的错误预测。
本公开实施例将基本特征编码模块设计为编码器-解码器网络，网络在时间间隔[0, T_obs]中取(p, l, d, r)作为输入，其中p表示位置，l表示控制信息，d表示车的朝向，r表示本地道路信息。车辆的(l, d)属性是通过基于深度神经网络（Deep Neural Networks，DNN）的模型获得的。对于每个时间标记t，p_t=(x_t, y_t)是车辆的坐标，l_t=(bl_t, tl_t, rl_t)分别表示制动灯、左转灯和右转灯，且分别为二进制值，d_t=(dx_t, dy_t)是一个单位向量。道路信息由许多语义元素表示，例如车道线、人行横道等，并且与车辆的位置有关。本公开实施例将道路信息分离为二进制掩码r=M作为输入，其中，M_ij=1表示位置(i, j)是可行驶的。因此本公开实施例有四个输入特征，分别标记为p_in={p_1, p_2, ..., p_n}、l_in={l_1, l_2, ..., l_n}、d_in={d_1, d_2, ..., d_n}和r_in=r_n，其中n是观察长度，即观察历史轨迹的时间长度。
采用三个编码器块首先提取不同输入的特征,然后将提取到的特征连接起来并插入解码器块,获得最终的基本特征。编码器和解码器分别由几个卷积和反卷积层组成。
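基本特征编码模块可以理解为若干卷积编码器与反卷积解码器的组合；以下给出一个结构上的示意性PyTorch草图（这里假设各输入已栅格化为同尺寸的图像通道，通道数、层数与类名均为假设，并非本公开的具体网络结构）：

    import torch
    import torch.nn as nn

    class BasicFeatureEncoder(nn.Module):
        # 示意性的编码器-解码器：三个编码器分支分别处理不同输入，特征拼接后经反卷积解码
        def __init__(self):
            super().__init__()
            def enc(c_in):   # 一个编码器块：两层卷积并逐步下采样
                return nn.Sequential(
                    nn.Conv2d(c_in, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            self.enc_traj, self.enc_ctrl, self.enc_road = enc(2), enc(3), enc(1)
            self.decoder = nn.Sequential(      # 解码器块：反卷积恢复分辨率
                nn.ConvTranspose2d(96, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())

        def forward(self, traj_img, ctrl_img, road_mask):
            feats = torch.cat([self.enc_traj(traj_img),
                               self.enc_ctrl(ctrl_img),
                               self.enc_road(road_mask)], dim=1)
            return self.decoder(feats)         # 最终的基本特征图

    net = BasicFeatureEncoder()
    base_feat = net(torch.zeros(1, 2, 64, 64), torch.zeros(1, 3, 64, 64), torch.zeros(1, 1, 64, 64))
    print(base_feat.shape)   # torch.Size([1, 16, 64, 64])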
在本公开实施例中，首先，采用基本特征，预测粗略终点以减少候选轨迹搜索空间。然后，采取两个步骤来生成候选轨迹。本公开实施例通过在预测的终点周围绘制网格来遍历可能的终点，即估计终点，表示为：p_ep={(x_e+step·i, y_e+step·j)}，i, j∈[-grid, grid]；其中，p_ep是可能的终点集，即估计终点集合，p_e=(x_e, y_e)是预测的参考终点的坐标，step和grid分别表示步长和总遍历数。
基于输入点p_in和估计终点(x_pe, y_pe)，本公开实施例实际上可以拟合一组三次曲线。然而，本公开实施例发现仅有p_in和(x_pe, y_pe)有时不足以产生某些候选曲线，例如当输入点p_in中的所有点都共线时，难以产生弯曲的候选轨迹。因此，本公开实施例将γ定义为候选轨迹的中点与最后输入点和终点连线之间的距离，用于控制曲线的弯曲程度。如图4A所示，给出了在一条参考路线上确定候选轨迹的方式，点51表示历史轨迹上最后一个采样点；点52表示基于历史轨迹预测的参考终点；点53和点54分别表示在点52的预设区域内划分的网格的中心，即估计终点；γ为点51与点52的连线的中点与候选轨迹的距离，γ的大小是事先设定的（比如，设置为(-2m, 2m)之间的值），这样，可根据γ的取值、估计终点和历史轨迹上最后一个采样点，确定多个不同弯曲程度的候选轨迹（即包括多条候选轨迹的候选轨迹集合）。图4B为本公开实施例在多条参考路线上生成候选轨迹的流程示意图，如图4B所示，由于候选轨迹生成在第一阶段强烈依赖于回归终点，这可能会导致生成的候选轨迹的多模态性较弱。由于道路对车辆有严格的约束，因此多模式候选轨迹生成会利用道路信息来生成多个终点。从图4B可以看出，车辆当前位于十字路口，根据道路信息的基本信息（比如，车道线901（图4B中道路上的参考线）和运行方向等）以及车辆的历史轨迹902，可以获得一组参考路线91（分别位于十字路口的每一条道路上，比如，参考线904、905和906），这些参考路线代表了车辆可能到达的中心车道线。因此公式(2)可以扩展为针对不同参考路线生成多个候选轨迹集合。
在一些实施例中,首先,预测沿着参考路线的相对的参考终点93位置坐标;然后,对于每一个预测的参考终点93采用网格的形式在参考终点93的附近创建一个网格包围参考终点93,以实现遍历终点907。最后,根据遍历后的终点,对每条参考路线上的未来终点进行采样,从而减少了对单个回归终点的依赖性,并确保了强大的多模态。从图4B来看,对候选轨迹中超出道路边缘的参考终点进行调整,说明该点是不合理的,最后,确定出车辆的未来位置903,并使用公式(2)为每个采样的终点生成候选轨迹92。
这里，一条参考路线上的候选轨迹可以用公式(2)表示：
proposals = {f(p_in, p'_ep, γ)}         (2);
其中，f(·)表示三次多项式拟合函数，p'_ep∈p_ep，γ∈[-2, -1, 0, 1, 2]。
在训练阶段，为每个候选轨迹分配一个二进制类别标签，以表示其是否为良好的轨迹。本公开实施例将均匀采样的真值轨迹上的点与候选轨迹对应点之间的平均距离（Average Distance，AD）定义为衡量候选轨迹的标准，如公式(3)所示：
AD = (1/N)·∑_{i=1}^{N} ||p_i^gt - p_i^prop||         (3);
其中，N是采样点的数量，p_i^gt和p_i^prop分别是真值轨迹和候选轨迹的第i个采样点。本公开实施例将候选轨迹的AD值低于预设阈值的，确定为正样本，例如预设阈值为2m，将候选轨迹和真值轨迹之间的平均距离小于2m的候选轨迹，确定为正样本，表明该候选轨迹与真值轨迹的差距较小，更贴近真值轨迹。其余的候选轨迹为潜在的负样本。为了减少过多负样本的压倒性的影响，本公开实施例采用均匀采样方法将负样本与正样本之间的比例保持为3:1。
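公式(3)的平均距离、正负样本的划分以及3:1的负样本采样，可用如下示意性Python草图实现（阈值与比例沿用上文描述，其余函数名与采样细节为假设）：

    import numpy as np

    def average_distance(gt, proposal):
        # 公式(3)：真值轨迹与候选轨迹对应采样点之间的平均欧氏距离
        return float(np.mean(np.linalg.norm(np.asarray(gt) - np.asarray(proposal), axis=-1)))

    def assign_labels(gt, proposals, pos_thresh=2.0, neg_ratio=3, rng=None):
        # AD小于阈值（示例取2米）的候选轨迹为正样本，其余按3:1的比例均匀采样为负样本
        rng = rng or np.random.default_rng(0)
        ad = np.array([average_distance(gt, p) for p in proposals])
        pos_idx = np.flatnonzero(ad < pos_thresh)
        neg_pool = np.flatnonzero(ad >= pos_thresh)
        n_neg = min(len(neg_pool), neg_ratio * max(len(pos_idx), 1))
        neg_idx = rng.choice(neg_pool, size=n_neg, replace=False) if len(neg_pool) else neg_pool
        return pos_idx, neg_idx

    gt = np.zeros((10, 2))
    proposals = [np.zeros((10, 2)) + off for off in (0.5, 1.0, 3.0, 4.0, 5.0, 6.0, 7.0)]
    pos, neg = assign_labels(gt, proposals)
    print(len(pos), len(neg))   # 2个正样本；负样本按3:1最多取6个，本例负样本池仅有5个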
对于得到的正负样本的修正，本公开实施例采用2个坐标和1个变量的参数化，如公式(4)所示：
t_x = x^gt - x^prop，t_y = y^gt - y^prop，t_γ = γ^gt - γ^prop         (4);
其中，(x^gt, y^gt)和(x^prop, y^prop)分别为真值轨迹和候选轨迹的终点坐标，γ^gt和γ^prop为对应的距离参数。t_x、t_y和t_γ是受监督的信息。
本公开实施例将待最小化的多任务损失函数定义为如公式(5)所示：
L = ∑_i L_cls(c_i, c_i^*) + α·∑_i L_ref(t_i, t_i^*)         (5);
其中，c_i和t_i是候选轨迹的置信度和轨迹参数，c_i^*和t_i^*是对应的真值轨迹的置信度和轨迹参数，α是权重项。L_cls表示两类样本的损失函数，在本实施例中采用两类样本的交叉熵损失作为L_cls；L_ref表示对于修正后的轨迹参数的损失函数，本公开实施例使用欧几里德损失作为L_ref。由于轨迹的多模态特征，本公开实施例使用正样本和部分随机采样的负样本来计算细化损失函数，并使用β来控制采样负样本的比率。
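结合公式(4)的轨迹参数化与公式(5)的多任务损失，可写出如下示意性PyTorch草图（α的取值、张量组织方式以及只对正样本计算修正项的处理均为举例假设）：

    import torch
    import torch.nn.functional as F

    def regression_targets(gt_end, prop_end, gt_gamma, prop_gamma):
        # 公式(4)的一种差值参数化：受监督的 (t_x, t_y, t_gamma)
        return torch.cat([gt_end - prop_end, (gt_gamma - prop_gamma).unsqueeze(-1)], dim=-1)

    def multi_task_loss(conf_logits, pred_offsets, labels, targets, alpha=1.0):
        # 公式(5)的示意实现：分类项用两类样本的交叉熵，修正项用欧几里德距离，
        # 修正项仅对正样本（及按β采样的部分负样本，此处从略）计算
        cls_loss = F.binary_cross_entropy_with_logits(conf_logits, labels)
        pos = labels > 0.5
        ref_loss = torch.linalg.norm(pred_offsets[pos] - targets[pos], dim=-1).mean() \
            if pos.any() else conf_logits.new_zeros(())
        return cls_loss + alpha * ref_loss

    K = 8
    conf_logits = torch.randn(K, requires_grad=True)
    pred_offsets = torch.randn(K, 3, requires_grad=True)   # 网络输出的轨迹参数修正值
    labels = torch.tensor([1., 1., 0., 0., 0., 0., 1., 0.])
    targets = regression_targets(torch.randn(K, 2), torch.randn(K, 2),
                                 torch.randn(K), torch.randn(K))
    loss = multi_task_loss(conf_logits, pred_offsets, labels, targets)
    loss.backward()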
车辆的未来轨迹不仅受历史影响,还受到道路结构和控制信息等规则的限制。结合这些规则,可作出更可靠的未来的目标轨迹的预测。本公开实施例的知识候选轨迹网络可以有效地解决这些问题,得到非常可靠的预测轨迹。
本公开实施例结合历史运行轨迹和高清地图,可以确定由车辆可以在未来行驶的车道线组成的多边形区域,即可行驶区域。一些实施例中,确定可行驶区域的基本规则是车辆只能在相同方向的车道上行驶。
在一些实施例中，如果存在转弯的信号（输入轨迹或灯的意图），则多边形区域是目的地车道，否则可行驶区域由所有可能的车道组成。在获得可行驶区域后，本公开实施例提出了两种实施道路约束的方法，比如不忽略可行驶区域之外的候选轨迹的方法和忽略可行驶区域之外的候选轨迹的方法。不忽略可行驶区域之外的候选轨迹的方法将可行驶区域作为输入特征，并隐式监督模型以学习此类规则。本公开实施例提出忽略可行驶区域之外的候选轨迹来明确约束推断中的候选轨迹，即在生成候选轨迹时忽略可行驶区域之外的候选轨迹。此外，本公开实施例通过在测试期间衰减可行驶区域之外的候选轨迹的分类得分，来约束候选轨迹，如公式(6)所示：
（公式(6)为根据候选轨迹位于可行驶区域之外的概率r和衰减因子σ，对该候选轨迹的分类得分进行衰减的计算式）
其中r表示候选轨迹指向可行驶区域之外的概率,σ表示衰减因子。
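一种可能的衰减实现是按概率r与衰减因子σ对得分做乘性缩减；下面的Python草图中的衰减形式仅为示意性假设，并非公式(6)的唯一写法：

    import numpy as np

    def decay_scores(scores, out_prob, sigma=0.5):
        # scores:   (K,) 候选轨迹的分类得分
        # out_prob: (K,) 候选轨迹位于可行驶区域之外的概率 r
        # sigma:    衰减因子 σ（此处采用乘性衰减，仅为示意）
        return scores * (1.0 - sigma * out_prob)

    s = np.array([0.9, 0.8, 0.7])
    r = np.array([0.0, 0.5, 1.0])
    print(decay_scores(s, r))   # 完全位于区域内的得分不变，越界概率越大衰减越多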
控制信息是指示车辆意图的明确信号。与道路约束类似，控制信息会将可行驶区域限制在某个方向上，可以用于进一步缩小道路约束产生的可行驶区域。对于交叉路口的车辆，可行驶区域是向四个方向完全打开的，本公开实施例使用转向灯的提示来选择一条确定的道路作为可行驶区域掩码，从而能够缩小可行驶区域掩码。对于车道内的车辆，本公开实施例还可以选择在测试期间衰减相应候选轨迹的分数，如公式(6)所示。
本公开实施例重新定义了车辆轨迹，从而能够实现可靠的车辆运动预测；这样很好地反映了车辆运动的趋势和意图，并且对噪声具有鲁棒性；而且收集了大量且信息丰富的数据集，并对数据集进行了实验，证明了本公开实施例提出的方法的有效性。与此同时，更多规范化的规则，如交通灯，可以很容易地扩展到本公开实施例的方案中。
需要说明的是,以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本公开装置实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。
需要说明的是，本公开实施例中，如果以软件功能模块的形式实现上述的轨迹预测方法，并作为独立的产品销售或使用时，也可以存储在一个计算机可读取存储介质中。基于这样的理解，本公开实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是终端、服务器等）执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括：U盘、移动硬盘、只读存储器（Read Only Memory，ROM）、磁碟或者光盘等各种可以存储程序代码的介质。这样，本公开实施例不限制于任何特定的硬件和软件结合。
本公开实施例提供一种轨迹预测装置,图5为本公开实施例轨迹预测装置结构组成示意图,如图5所示,所述装置500包括:
参考终点预测模块501,配置为根据移动对象的位置信息,确定所述移动对象的参考终点的位置信息;
这里，参考终点预测模块501确定终点的方法，可以采用任何方法，只要是基于移动对象的位置信息确定的方法即可。
例如,可以采用使用位置信息确定终点的方法和诸如特征学习或强化学习的机器学习模型。机器学习模型可以至少输入移动对象的位置信息并输出终点或包括终点的轨迹。在该方法中,参考终点预测模块501可以通过使用移动对象周围的位置信息和环境信息作为机器学习模型的输入来确定终点。或者,参考终点预测模块501可以使用移动对象的位置信息作为机器学习模型的输入,并且使用机器学习模型的输出和移动对象周围的环境信息来确定终点。此外,参考终点预测模块501可以使用移动对象周围的位置信息和环境信息作为机器学习模型的输入,并使用机器学习模型的输出和移动对象周围的环境信息来确定终点。例如,参考终点预测模块501可以将移动对象的轨迹确定为机器学习模型的输出,基于移动对象周围的环境信息来调整确定的轨迹,以使该轨迹不与行人或人行道重叠,确定调整后的轨迹中包含的终点。
在另一种确定终点的方法中，可以采用使用位置信息和移动对象的运动学模型的方法。在该方法中，参考终点预测模块501可以使用位置信息、移动对象的运动学模型以及移动对象周围的环境信息来确定终点。
候选轨迹确定模块502,配置为根据所述移动对象的位置信息和所述参考终点的位置信息,确定包括多条候选轨迹的候选轨迹集合;其中,每一候选轨迹的终点的位置信息与所述参考终点的位置信息不同;
目标轨迹确定模块503,配置为从所述候选轨迹集合中确定所述移动对象的目标轨迹。
在一些实施例中,所述移动对象的位置信息包括:所述移动对象的时序位置信息,或者,所述移动对象的历史轨迹。
在一些实施例中,所述参考终点包括预设限制类型之外的点,其中,所述预设限制类型至少包括以下之一:道路边缘点、障碍物、行人。
在一些实施例中,所述参考终点预测模块501,包括:
环境信息获取子模块，配置为根据所述移动对象的位置信息获取所述移动对象的环境信息，所述环境信息包括以下至少之一：道路信息、障碍物信息、行人信息、交通灯信息、交通标识信息、交通规则信息、其他移动对象信息。
第一参考终点预测子模块,配置为根据所述环境信息确定所述移动对象的参考终点的位置信息。
在一些实施例中,所述环境信息获取子模块,还配置为:
根据所述移动对象采集的图像信息,确定所述环境信息;
和/或,
根据所述移动对象接收到表征当前环境的通信信息,确定所述环境信息。
在一些实施例中,所述参考终点预测模块501,包括:
参考路线确定子模块,配置为根据所述移动对象的位置信息,确定所述移动对象的至少一条参考路线;
第二参考终点预测子模块,配置为根据所述参考路线,确定所述参考终点的位置信息。
在一些实施例中,所述第二参考终点预测子模块,包括:
可行驶区域确定单元,配置为确定所述参考路线的可行驶区域;
参考终点预测单元,配置为根据所述移动对象的位置信息,确定所述移动对象在所述可行驶区域中的参考终点的位置信息。
在一些实施例中,所述参考终点预测模块501,包括:
路口确定子模块,配置为根据所述移动对象的位置信息确定所述移动对象所处道路的路口信息;
多参考终点确定子模块,配置为响应于所述路口信息表示存在至少二个路口,确定所述移动对象的多个参考终点的位置信息;其中,不同路口的参考终点不同。
在一些实施例中，所述目标轨迹确定模块503，包括：
置信度确定子模块,配置为确定所述候选轨迹集合中的候选轨迹的置信度;
目标轨迹确定子模块,配置为根据所述移动对象的行驶信息和所述置信度,从所述候选轨迹集合中确定所述移动对象的所述目标轨迹。
在一些实施例中,所述装置还包括:
修正值确定模块,配置为确定所述候选轨迹集合中至少一候选轨迹的轨迹参数修正值;
轨迹调整模块,配置为根据所述轨迹参数修正值对所述候选轨迹集合中的候选轨迹进行调整,以更新所述候选轨迹集合;
更新的目标轨迹确定模块,配置为根据所述移动对象的行驶信息和所述置信度,从更新的候选轨迹集合中确定所述移动对象的所述目标轨迹。
在一些实施例中,更新的目标轨迹确定模块,包括:
可行驶区域确定子模块,配置为根据所述移动对象的环境信息和/或所述移动对象的控制信息,确定所述移动对象的可行驶区域;
更新的目标轨迹确定子模块,配置为根据所述可行驶区域和所述置信度,从所述更新的候选轨迹集合中确定所述移动对象的目标轨迹。
在一些实施例中,所述可行驶区域确定子模块,包括:
预测可行驶区域确定单元,配置为根据所述移动对象的环境信息,确定所述移动对象的预测可行驶区域;
预测可行驶区域调整单元，配置为根据所述移动对象的控制信息，对所述预测可行驶区域进行调整，得到所述可行驶区域。
在一些实施例中,更新的目标轨迹确定子模块,包括:
目标轨迹集合确定单元,配置为确定所述更新的候选轨迹集合中包含于所述可行驶区域的候选轨迹,得到所述待确定目标轨迹集合;
目标轨迹筛选单元,配置为将所述待确定目标轨迹集合中置信度最大的轨迹或者置信度大于预设置信度阈值的轨迹,确定为所述目标轨迹。
在一些实施例中,所述候选轨迹确定模块502,包括:
估计终点确定子模块,配置为在包含所述参考终点的预设区域内,确定M个估计终点;
候选轨迹生成子模块,配置为根据所述移动对象的位置信息、所述M个估计终点和N个预设距离,对应生成M×N个候选轨迹,得到所述候选轨迹集合;其中,所述预设距离用于表明所述移动对象的位置信息中最后一个采样点与所 述参考终点连线的中点到所述候选轨迹的距离;其中,M和N均为大于0的整数。
在一些实施例中,所述估计终点确定子模块,包括:
预设区域确定单元,配置为根据所述参考终点所处道路的宽度,确定所述参考终点的预设区域;
网格划分单元,配置为将所述参考终点的预设区域划分为M个同等尺寸的网格,将M个网格的中心作为所述M个估计终点。
在一些实施例中，所述候选轨迹生成子模块，包括：
中点确定单元,配置为确定所述移动对象的位置信息中的最后一个采样点与所述参考终点连线的中点;
预估计点确定单元,配置为根据所述N个预设距离和所述中点,确定N个预估计点;
M×N个候选轨迹生成单元,配置为根据所述N个预估计点和所述M个估计终点,生成M×N个候选轨迹;
候选轨迹筛选单元,配置为根据所述环境信息,对所述M×N个候选轨迹进行筛选,得到所述候选轨迹集合。
在一些实施例中,参考终点预测模块501,包括:
候选终点预测子模块,配置为通过神经网络根据所述移动对象的位置信息预测所述移动对象的候选终点;
参考终点确定子模块,配置为根据所述候选终点确定所述移动对象的参考终点的位置信息。
在一些实施例中,候选终点预测子模块,还配置为将所述移动对象的位置信息输入第一神经网络以预测所述移动对象的第一候选终点;
所述参考终点确定子模块,还配置为根据所述第一候选终点和所述移动对象的环境信息,确定所述移动对象的参考终点的位置信息。
在一些实施例中,候选终点预测子模块,还配置为将所述移动对象的位置信息和环境信息输入第二神经网络以预测所述移动对象的第二候选终点;
所述参考终点确定子模块,还配置为根据所述第二候选终点和所述环境信息,确定所述移动对象的所述参考终点的位置信息。
在一些实施例中,所述装置还包括:网络训练模块,配置为训练所述神经网络;
所述网络训练模块,包括:
第一网络输入子模块,配置为将所述移动对象的位置信息,和/或,所述移动对象的位置信息和移动对象采集的道路图像输入神经网络中,得到第一预测终点;
第一预测损失确定子模块,配置为根据所述移动对象的真值轨迹,确定所述神经网络关于第一预测终点的第一预测损失;
第一网络参数调整子模块,配置为根据所述第一预测损失,对所述神经网络的网络参数进行调整,以训练所述神经网络。
在一些实施例中,所述网络训练模块,包括:
第二网络输入子模块,配置为将所述移动对象的位置信息和所述位置信息对应的地图信息输入所述神经网络中,得到第二预测终点;
第二预测损失确定子模块,配置为根据所述移动对象的真值轨迹,确定所述神经网络关于所述第二预测终点的第二预测损失;
偏差确定子模块,配置为确定所述第二预测终点与预设的约束条件之间的偏差;
第二预测损失调整子模块,配置为根据所述偏差,对所述第二预测终点的第二预测损失进行调整,得到第三预测损失;
第二网络参数调整子模块,配置为根据所述第三预测损失,对所述神经网络的网络参数进行调整,以训练所述神经网络。
对应地,本公开实施例再提供一种计算机程序产品,所述计算机程序产品包括计算机可执行指令,该计算机可执行指令被执行后,能够实现本公开实施例提供的轨迹预测方法中的步骤。
相应的,本公开实施例再提供一种计算机存储介质,所述计算机存储介质上存储有计算机可执行指令,所述该计算机可执行指令被处理器执行时实现上述实施例提供的轨迹预测方法的步骤。
相应的，本公开实施例提供一种计算机设备，图6为本公开实施例计算机设备的组成结构示意图，如图6所示，所述设备600包括：一个处理器601、至少一个通信总线、通信接口602、至少一个外部通信接口和存储器603。其中，通信接口602配置为实现这些组件之间的连接通信。其中，通信接口602可以包括显示屏，外部通信接口可以包括标准的有线接口和无线接口。其中所述处理器601，配置为执行存储器中的程序，以实现上述实施例提供的轨迹预测方法的步骤。
以上轨迹预测装置、计算机设备和存储介质实施例的描述，与上述方法实施例的描述是类似的，具有同相应方法实施例相似的技术描述和有益效果，限于篇幅，可参见上述方法实施例的记载，故在此不再赘述。对于本公开轨迹预测装置、计算机设备和存储介质实施例中未披露的技术细节，请参照本公开方法实施例的描述而理解。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本公开的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本公开的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本公开实施例的实施过程构成任何限定。上述本公开实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本公开所提供的几个实施例中，应该理解到，所揭露的设备和方法，可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，如：多个单元或组件可以结合，或可以集成到另一个系统，或一些特征可以忽略，或不执行。另外，所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口，设备或单元的间接耦合或通信连接，可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外，在本公开各实施例中的各功能单元可以全部集成在一个处理单元中，也可以是各单元分别单独作为一个单元，也可以两个或两个以上单元集成在一个单元中；上述集成的单元既可以采用硬件的形式实现，也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。
或者，本公开上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时，也可以存储在一个计算机可读取存储介质中。基于这样的理解，本公开实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机、服务器、或者网络设备等）执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括：移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本公开的具体实施方式,但本公开的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。
工业实用性
本公开实施例提供一种轨迹预测方法、装置、设备及存储介质,其中,根据移动对象的位置信息,确定所述移动对象的参考终点的位置信息;根据所述移动对象的位置信息和所述参考终点的位置信息,确定包括多条候选轨迹的候选轨迹集合;其中,每一候选轨迹的终点的位置信息与所述参考终点的位置信息不同;从所述候选轨迹集合中确定所述移动对象的目标轨迹。如此,能够更加准确的估计移动对象的未来运动轨迹。

Claims (24)

  1. 一种轨迹预测方法,其中,所述方法由电子设备执行,所述方法包括:
    根据移动对象的位置信息,确定所述移动对象的参考终点的位置信息;
    根据所述移动对象的位置信息和所述参考终点的位置信息,确定包括多条候选轨迹的候选轨迹集合;其中,至少二条所述候选轨迹的终点的位置信息与所述参考终点的位置信息不同;
    从所述候选轨迹集合中确定所述移动对象的目标轨迹。
  2. 根据权利要求1所述的方法,其中,所述移动对象的位置信息包括:所述移动对象的时序位置信息,或,所述移动对象的历史轨迹。
  3. 根据权利要求1所述的方法,其中,所述参考终点包括预设限制类型之外的点,其中,所述预设限制类型至少包括以下之一:道路边缘点、障碍物、行人。
  4. 根据权利要求1所述的方法,其中,所述根据移动对象的位置信息,确定所述移动对象的参考终点的位置信息,包括:
    根据所述移动对象的位置信息获取所述移动对象的环境信息,所述环境信息包括以下至少之一:道路信息、障碍物信息、行人信息、交通灯信息、交通标识信息、交通规则信息、其他移动对象信息;
    根据所述环境信息确定所述移动对象的参考终点的位置信息。
  5. 根据权利要求4所述的方法,其中,所述根据所述移动对象的位置信息获取所述移动对象的环境信息,包括:
    根据所述移动对象采集的图像信息,确定所述环境信息;
    和/或,
    根据所述移动对象接收到表征当前环境的通信信息,确定所述环境信息。
  6. 根据权利要求1所述的方法,其中,所述根据移动对象的位置信息,确定所述移动对象的参考终点的位置信息,包括:
    根据所述移动对象的位置信息,确定所述移动对象的至少一条参考路线;
    根据所述参考路线,确定所述参考终点的位置信息。
  7. 根据权利要求6所述的方法,其中,所述根据所述参考路线,确定所述参考终点的位置信息,包括:
    确定所述参考路线的可行驶区域;
    根据所述移动对象的位置信息,确定所述移动对象在所述可行驶区域中的参考终点的位置信息。
  8. 根据权利要求1所述的方法,其中,所述根据移动对象的位置信息,确定所述移动对象的参考终点的位置信息,包括:
    根据所述移动对象的位置信息确定所述移动对象所处道路的路口信息;
    响应于所述路口信息表示存在至少二个路口,确定所述移动对象的多个参考终点的位置信息;其中,不同路口的参考终点不同。
  9. 根据权利要求1所述的方法,其中,所述根据所述候选轨迹集合,确定所述移动对象的目标轨迹,包括:
    确定所述候选轨迹集合中的候选轨迹的置信度;
    根据所述移动对象的行驶信息和所述置信度,从所述候选轨迹集合中确定所述移动对象的所述目标轨迹。
  10. 根据权利要求9所述的方法,其中,根据所述移动对象的行驶信息和所述置信度,从所述候选轨迹集合中确定所述移动对象的所述目标轨迹之前,还包括:
    确定所述候选轨迹集合中至少一候选轨迹的轨迹参数修正值;
    根据所述轨迹参数修正值对所述候选轨迹集合中的候选轨迹进行调整,得到更新的候选轨迹集合;
    根据所述移动对象的行驶信息和所述置信度,从所述更新的候选轨迹集合中确定所述移动对象的所述目标轨迹。
  11. 根据权利要求9或10所述的方法,其中,根据所述移动对象的行驶信息和所述置信度,从所述更新的候选轨迹集合中确定所述移动对象的所述目标轨迹,包括:
    根据所述移动对象的环境信息和/或所述移动对象的控制信息,确定所述移动对象的可行驶区域;
    根据所述可行驶区域和所述置信度,从所述更新的候选轨迹集合中确定所述移动对象的目标轨迹。
  12. 根据权利要求11所述的方法,其中,所述根据所述移动对象的环境信息和/或所述移动对象的控制信息,确定所述移动对象的可行驶区域,包括:
    根据所述移动对象的环境信息,确定所述移动对象的预测可行驶区域;
    根据所述移动对象的控制信息,对所述预测可行驶区域进行调整,得到所述可行驶区域。
  13. 根据权利要求11所述的方法,其中,所述根据所述可行驶区域和所述置信度,从所述更新的候选轨迹集合中确定所述移动对象的目标轨迹,包括:
    确定所述更新的候选轨迹集合中包含于所述可行驶区域的候选轨迹,得到所述待确定目标轨迹集合;
    将所述待确定目标轨迹集合中置信度最大的轨迹或者置信度大于预设置信度阈值的轨迹,确定为所述目标轨迹。
  14. 根据权利要求1至13任一项所述的方法,其中,所述根据所述移动对象的位置信息和所述参考终点的位置信息,确定包括多条候选轨迹的候选轨迹集合,包括:
    在包含所述参考终点的预设区域内,确定M个估计终点;
    根据所述移动对象的位置信息、所述M个估计终点和N个预设距离,对应生成M×N个候选轨迹,得到所述候选轨迹集合;其中,所述预设距离用于表明所述移动对象的位置信息中最后一个采样点与所述参考终点连线的中点到所述候选轨迹的距离;其中,M和N均为大于0的整数。
  15. 根据权利要求14所述的方法,其中,所述在包含所述参考终点的预设区域内,确定M个估计终点,包括:
    根据所述参考终点所处道路的宽度,确定所述参考终点的预设区域;
    将所述参考终点的预设区域划分为M个预定尺寸的网格,将M个网格的中心作为所述M个估计终点。
  16. 根据权利要求14所述的方法,其中,所述根据所述移动对象的位置信息、所述M个估计终点和N个预设距离,对应生成M×N个候选轨迹,得到所述候选轨迹集合,包括:
    确定所述移动对象的位置信息中的最后一个采样点与所述参考终点连线的中点;
    根据所述N个预设距离和所述中点,确定N个预估计点;
    根据所述N个预估计点和所述M个估计终点,生成M×N个候选轨迹;
    根据所述环境信息,对所述M×N个候选轨迹进行筛选,得到所述候选轨迹集合。
  17. 根据权利要求1至16任一项所述的方法,其中,所述根据移动对象的位置信息,确定所述移动对象的参考终点的位置信息,包括:
    通过神经网络根据所述移动对象的位置信息预测所述移动对象的候选终点;
    根据所述候选终点确定所述移动对象的参考终点的位置信息。
  18. 根据权利要求17所述的方法,其中,所述通过神经网络根据所述移动对象的位置信息预测所述移动对象的候选终点,包括:将所述移动对象的位置信息输入第一神经网络以预测所述移动对象的第一候选终点;
    所述根据所述候选终点确定所述移动对象的参考终点的位置信息,包括:根据所述第一候选终点和所述移动对象的环境信息,确定所述移动对象的参考终点的位置信息。
  19. 根据权利要求17或18所述的方法,其中,所述通过神经网络根据所述移动对象的位置信息预测所述移动对象的候选终点,包括:
    将所述移动对象的位置信息和环境信息输入第二神经网络以预测所述移动对象的第二候选终点;
    所述根据所述候选终点确定所述移动对象的参考终点的位置信息,包括:
    根据所述第二候选终点和所述环境信息,确定所述移动对象的所述参考终点的位置信息。
  20. 根据权利要求17至19任一项所述的方法,其中,所述神经网络的训练方法包括:
    将所述移动对象的位置信息,和/或,所述移动对象的位置信息和移动对象采集的道路图像输入神经网络中,得到第一预测终点;
    根据所述移动对象的真值轨迹,确定所述神经网络关于第一预测终点的第一预测损失;
    根据所述第一预测损失,对所述神经网络的网络参数进行调整,以训练所述神经网络。
  21. 根据权利要求17至20任一项所述的方法,其中,所述神经网络的训练方法包括:
    将所述移动对象的位置信息和所述位置信息对应的地图信息输入所述神经网络中,得到第二预测终点;
    根据所述移动对象的真值轨迹,确定所述神经网络关于所述第二预测终点的第二预测损失;
    确定所述第二预测终点与预设的约束条件之间的偏差;
    根据所述偏差,对所述第二预测终点的第二预测损失进行调整,得到第三预测损失;
    根据所述第三预测损失,对所述神经网络的网络参数进行调整,以训练所述神经网络。
  22. 一种轨迹预测装置,其中,所述装置包括:
    参考终点预测模块,配置为根据移动对象的位置信息,确定所述移动对象的参考终点的位置信息;
    候选轨迹确定模块,配置为根据所述移动对象的位置信息和所述参考终点的位置信息,确定包括多条候选轨迹的候选轨迹集合;其中,至少二条所述候选轨迹的终点的位置信息与所述参考终点的位置信息不同;
    目标轨迹确定模块,配置为从所述候选轨迹集合中确定所述移动对象的目标轨迹。
  23. 一种计算机存储介质,其中,所述计算机存储介质上存储有计算机可执行指令,该计算机可执行指令被执行后,能够实现权利要求1至21任一项所述的方法步骤。
  24. 一种电子设备，其中，所述电子设备包括存储器和处理器，所述存储器上存储有计算机可执行指令，所述处理器运行所述存储器上的计算机可执行指令时可实现权利要求1至21任一项所述的方法步骤。
PCT/CN2021/085448 2020-04-10 2021-04-02 轨迹预测方法、装置、设备及存储介质资源 WO2021204092A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022519830A JP7338052B2 (ja) 2020-04-10 2021-04-02 軌跡予測方法、装置、機器及び記憶媒体リソース
US17/703,268 US20220212693A1 (en) 2020-04-10 2022-03-24 Method and apparatus for trajectory prediction, device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010279772.9A CN111523643B (zh) 2020-04-10 2020-04-10 轨迹预测方法、装置、设备及存储介质
CN202010279772.9 2020-04-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/703,268 Continuation US20220212693A1 (en) 2020-04-10 2022-03-24 Method and apparatus for trajectory prediction, device and storage medium

Publications (1)

Publication Number Publication Date
WO2021204092A1 true WO2021204092A1 (zh) 2021-10-14

Family

ID=71902658

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/085448 WO2021204092A1 (zh) 2020-04-10 2021-04-02 轨迹预测方法、装置、设备及存储介质资源

Country Status (4)

Country Link
US (1) US20220212693A1 (zh)
JP (1) JP7338052B2 (zh)
CN (1) CN111523643B (zh)
WO (1) WO2021204092A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116543356A (zh) * 2023-07-05 2023-08-04 青岛国际机场集团有限公司 一种轨迹确定方法、设备及介质
CN116723616A (zh) * 2023-08-08 2023-09-08 杭州依森匠能数字科技有限公司 一种灯光亮度控制方法及系统

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523643B (zh) * 2020-04-10 2024-01-05 商汤集团有限公司 轨迹预测方法、装置、设备及存储介质
US11927967B2 (en) * 2020-08-31 2024-03-12 Woven By Toyota, U.S., Inc. Using machine learning models for generating human-like trajectories
CN112212874B (zh) * 2020-11-09 2022-09-16 福建牧月科技有限公司 车辆轨迹预测方法、装置、电子设备及计算机可读介质
CN112197771B (zh) * 2020-12-07 2021-03-19 深圳腾视科技有限公司 车辆失效轨迹重构方法、设备以及存储介质
CN112749740B (zh) * 2020-12-30 2023-04-18 北京优挂信息科技有限公司 确定车辆目的地的方法、装置、电子设备及介质
CN113033364A (zh) * 2021-03-15 2021-06-25 商汤集团有限公司 轨迹预测、行驶控制方法、装置、电子设备及存储介质
CN113119996B (zh) * 2021-03-19 2022-11-08 京东鲲鹏(江苏)科技有限公司 一种轨迹预测方法、装置、电子设备及存储介质
CN112949756B (zh) * 2021-03-30 2022-07-15 北京三快在线科技有限公司 一种模型训练以及轨迹规划的方法及装置
CN113157846A (zh) * 2021-04-27 2021-07-23 商汤集团有限公司 意图及轨迹预测方法、装置、计算设备和存储介质
US20230040006A1 (en) * 2021-08-06 2023-02-09 Waymo Llc Agent trajectory planning using neural networks
CN113447040B (zh) * 2021-08-27 2021-11-16 腾讯科技(深圳)有限公司 行驶轨迹确定方法、装置、设备以及存储介质
CN113568416B (zh) * 2021-09-26 2021-12-24 智道网联科技(北京)有限公司 无人车轨迹规划方法、装置和计算机可读存储介质
CN115230688B (zh) * 2021-12-07 2023-08-25 上海仙途智能科技有限公司 障碍物轨迹预测方法、系统和计算机可读存储介质
CN114312831B (zh) * 2021-12-16 2023-10-03 浙江零跑科技股份有限公司 一种基于空间注意力机制的车辆轨迹预测方法
CN114578808A (zh) * 2022-01-10 2022-06-03 美的集团(上海)有限公司 路径规划方法、电子设备、计算机程序产品及存储介质
CN114954103B (zh) * 2022-05-18 2023-03-24 北京锐士装备科技有限公司 一种应用于无人机的充电方法及装置
CN114913197B (zh) * 2022-07-15 2022-11-11 小米汽车科技有限公司 车辆轨迹预测方法、装置、电子设备及存储介质
CN115790606B (zh) * 2023-01-09 2023-06-27 深圳鹏行智能研究有限公司 轨迹预测方法、装置、机器人及存储介质
CN115861383B (zh) * 2023-02-17 2023-05-16 山西清众科技股份有限公司 一种拥挤空间下多信息融合的行人轨迹预测装置及方法
CN116091894B (zh) * 2023-03-03 2023-07-14 小米汽车科技有限公司 模型训练方法、车辆控制方法、装置、设备、车辆及介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809714A (zh) * 2016-03-07 2016-07-27 广东顺德中山大学卡内基梅隆大学国际联合研究院 一种基于轨迹置信度的多目标跟踪方法
CN108803617A (zh) * 2018-07-10 2018-11-13 深圳大学 轨迹预测方法及装置
CN109583151A (zh) * 2019-02-20 2019-04-05 百度在线网络技术(北京)有限公司 车辆的行驶轨迹预测方法及装置
CN111523643A (zh) * 2020-04-10 2020-08-11 商汤集团有限公司 轨迹预测方法、装置、设备及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9989964B2 (en) * 2016-11-03 2018-06-05 Mitsubishi Electric Research Laboratories, Inc. System and method for controlling vehicle using neural network
US10996679B2 (en) 2018-04-17 2021-05-04 Baidu Usa Llc Method to evaluate trajectory candidates for autonomous driving vehicles (ADVs)
CN109272108A (zh) * 2018-08-22 2019-01-25 深圳市亚博智能科技有限公司 基于神经网络算法的移动控制方法、系统和计算机设备
CN109389246B (zh) * 2018-09-13 2021-03-16 中国科学院电子学研究所苏州研究院 一种基于神经网络的交通工具目的地区域范围预测方法
CN109739926B (zh) * 2019-01-09 2021-07-02 南京航空航天大学 一种基于卷积神经网络的移动对象目的地预测方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809714A (zh) * 2016-03-07 2016-07-27 广东顺德中山大学卡内基梅隆大学国际联合研究院 一种基于轨迹置信度的多目标跟踪方法
CN108803617A (zh) * 2018-07-10 2018-11-13 深圳大学 轨迹预测方法及装置
CN109583151A (zh) * 2019-02-20 2019-04-05 百度在线网络技术(北京)有限公司 车辆的行驶轨迹预测方法及装置
CN111523643A (zh) * 2020-04-10 2020-08-11 商汤集团有限公司 轨迹预测方法、装置、设备及存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116543356A (zh) * 2023-07-05 2023-08-04 青岛国际机场集团有限公司 一种轨迹确定方法、设备及介质
CN116543356B (zh) * 2023-07-05 2023-10-27 青岛国际机场集团有限公司 一种轨迹确定方法、设备及介质
CN116723616A (zh) * 2023-08-08 2023-09-08 杭州依森匠能数字科技有限公司 一种灯光亮度控制方法及系统
CN116723616B (zh) * 2023-08-08 2023-11-07 杭州依森匠能数字科技有限公司 一种灯光亮度控制方法及系统

Also Published As

Publication number Publication date
US20220212693A1 (en) 2022-07-07
CN111523643A (zh) 2020-08-11
JP2022549952A (ja) 2022-11-29
JP7338052B2 (ja) 2023-09-04
CN111523643B (zh) 2024-01-05

Similar Documents

Publication Publication Date Title
WO2021204092A1 (zh) 轨迹预测方法、装置、设备及存储介质资源
US11897518B2 (en) Systems and methods for navigating with sensing uncertainty
US11815904B2 (en) Trajectory selection for an autonomous vehicle
US11835950B2 (en) Autonomous vehicle safe stop
US20230347877A1 (en) Navigation Based on Detected Size of Occlusion Zones
US11868136B2 (en) Geolocalized models for perception, prediction, or planning
US20210339741A1 (en) Constraining vehicle operation based on uncertainty in perception and/or prediction
US10234864B2 (en) Planning for unknown objects by an autonomous vehicle
US11565716B2 (en) Method and system for dynamically curating autonomous vehicle policies
US11458991B2 (en) Systems and methods for optimizing trajectory planner based on human driving behaviors
US20210406559A1 (en) Systems and methods for effecting map layer updates based on collected sensor data
US20220105959A1 (en) Methods and systems for predicting actions of an object by an autonomous vehicle to determine feasible paths through a conflicted area
KR20150128712A (ko) 차량 라우팅 및 교통 관리를 위한 차선 레벨 차량 내비게이션
CN114072841A (zh) 根据图像使深度精准化
KR20220136006A (ko) 자율 주행 차량의 성능을 평가하기 위한 테스트 시나리오의 선택
US20220161830A1 (en) Dynamic Scene Representation
WO2022151666A1 (en) Systems, methods, and media for evaluation of trajectories and selection of a trajectory for a vehicle
US11465620B1 (en) Lane generation
US20220309521A1 (en) Computing a vehicle interest index
EP3454269A1 (en) Planning autonomous motion
US20240132112A1 (en) Path-based trajectory prediction
US20210405641A1 (en) Detecting positioning of a sensor system associated with a vehicle
US20240028035A1 (en) Planning autonomous motion
US20230154317A1 (en) Method for controlling a mobility system, data processing device, computer-readable medium, and system for controlling a mobility system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21785278

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022519830

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.02.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21785278

Country of ref document: EP

Kind code of ref document: A1