WO2021204092A1 - Trajectory prediction method, apparatus, device, and storage medium - Google Patents
Trajectory prediction method, apparatus, device, and storage medium
- Publication number
- WO2021204092A1 (application PCT/CN2021/085448)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- moving object
- trajectory
- candidate
- end point
- information
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0011—Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0027—Planning or execution of driving tasks using trajectory prediction for other traffic participants
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3453—Special cost functions, i.e. other than distance or default speed limit of road segments
- G01C21/3492—Special cost functions, i.e. other than distance or default speed limit of road segments employing speed data or traffic data, e.g. real-time or historical
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo or light sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2552/00—Input parameters relating to infrastructure
- B60W2552/05—Type of road
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/402—Type
- B60W2554/4029—Pedestrians
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2555/00—Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
- B60W2555/60—Traffic rules, e.g. speed limits or right of way
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2556/00—Input parameters relating to data
- B60W2556/45—External transmission of data to or from the vehicle
Definitions
- the embodiments of the present disclosure relate to the field of automatic driving technology, and relate to, but are not limited to, a trajectory prediction method, device, equipment, and storage medium.
- the embodiments of the present disclosure provide a trajectory prediction method, device, equipment, and storage medium.
- the embodiment of the present disclosure provides a trajectory prediction method, which is applied to an electronic device, and the method includes:
- a candidate trajectory set including a plurality of candidate trajectories is determined, wherein the end-point position information of at least two of the candidate trajectories differs from the position information of the reference end point;
- the target trajectory of the moving object is determined from the set of candidate trajectories.
- the location information of the moving object includes: time-series location information of the moving object, or the historical trajectory of the moving object.
- the candidate trajectory of the moving object can be predicted based on the historical trajectory and time-series position information of the moving object.
- the reference end point includes points other than a preset restriction type, where the preset restriction type includes at least one of the following: road edge points, obstacles, and pedestrians.
- the determining the location information of the reference end point of the moving object according to the location information of the moving object includes: acquiring environment information of the moving object according to the location information of the moving object, the environment information including at least one of the following: road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, and information on other moving objects; and determining the position information of the reference end point of the moving object according to the environment information. In this way, by combining the environment information around the moving object with the position information of the moving object, the position information of the reference end point can be accurately predicted.
- the acquiring environment information of the moving object according to the position information of the moving object includes: determining the environment information according to image information collected by the moving object; and/or determining the environment information according to communication information, received by the moving object, that characterizes the current environment. In this way, by analyzing the communication information and image information of the moving object, points belonging to the preset restriction type can be excluded from the reference end point, thereby obtaining the reference end point of the moving object.
- the determining the location information of the reference end point of the moving object according to the location information of the moving object includes: determining at least one reference route of the moving object according to the location information of the moving object, and determining the location information of the reference end point according to the reference route. In this way, the accuracy of the predicted reference end point can be improved.
- the determining the location information of the reference end point according to the reference route includes: determining a drivable area of the reference route, and determining the position information of the reference end point of the moving object within the drivable area. In this way, the validity of the drivable area on each reference route is improved.
- the determining the location information of the reference end point of the moving object according to the location information of the moving object includes: determining intersection information of the road where the moving object is located according to the location information of the moving object; and, in response to the intersection information indicating that there are at least two intersections, determining the position information of multiple reference end points of the moving object, wherein different intersections have different reference end points. In this way, missed reference end points can be reduced, thereby improving the accuracy of the determined target trajectory.
- the determining the target trajectory of the moving object according to the candidate trajectory set includes: determining the confidence of each candidate trajectory in the candidate trajectory set, and determining the target trajectory of the moving object from the candidate trajectory set according to the confidence. In this way, an optimal trajectory is selected from the multiple candidate trajectories as the target trajectory of the moving object, so that the future motion trajectory of the moving object is estimated more accurately.
- before determining the target trajectory of the moving object from the candidate trajectory set according to the driving information of the moving object and the confidence, the method further includes: determining a trajectory parameter correction value of at least one candidate trajectory in the candidate trajectory set; adjusting the candidate trajectories in the candidate trajectory set according to the trajectory parameter correction value to obtain an updated candidate trajectory set; and determining the target trajectory of the moving object from the updated candidate trajectory set according to the driving information of the moving object and the confidence. In this way, adjusting the candidate trajectories according to the trajectory parameter correction value improves the reasonableness of the obtained target trajectory.
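The correction step above can be sketched as follows; this is a minimal illustration only, assuming (hypothetically) that each trajectory parameter correction value is a per-waypoint (x, y) offset — the disclosure does not fix the form of the correction value.

```python
import numpy as np

def apply_corrections(candidate_trajectories, corrections):
    # Shift each candidate trajectory by its trajectory parameter
    # correction value, producing the updated candidate trajectory set.
    return [traj + delta
            for traj, delta in zip(candidate_trajectories, corrections)]

# Two candidate trajectories of 3 waypoints (x, y) each.
trajs = [np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3]]),
         np.array([[0.0, 0.0], [1.0, -0.1], [2.0, -0.3]])]
# Correction values: leave the first trajectory alone, nudge the
# second 0.2 m laterally at every waypoint.
deltas = [np.zeros((3, 2)), np.tile([0.0, 0.2], (3, 1))]
updated = apply_corrections(trajs, deltas)
```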
- the determining the target trajectory of the moving object from the updated candidate trajectory set according to the driving information of the moving object and the confidence includes: determining the drivable area of the moving object according to the environment information of the moving object and/or the control information of the moving object; and determining the target trajectory of the moving object from the updated candidate trajectory set according to the drivable area and the confidence. In this way, the candidate trajectories can be screened.
- the determining the drivable area of the moving object according to the environment information of the moving object and/or the control information of the moving object includes: determining a predicted drivable area of the moving object according to the environment information of the moving object, and adjusting the predicted drivable area according to the control information of the moving object to obtain the drivable area. In this way, the control information of the moving object narrows the predicted drivable area, yielding a more accurate drivable area.
- the determining the target trajectory of the moving object from the updated candidate trajectory set according to the drivable area and the confidence includes: selecting, from the updated candidate trajectory set, the candidate trajectories contained in the drivable area to obtain a set of target trajectories to be determined; and determining, as the target trajectory, the trajectory with the highest confidence in that set, or a trajectory whose confidence is greater than a preset confidence threshold. In this way, taking the candidate trajectory with the greatest confidence in the set of target trajectories to be determined as the target trajectory substantially improves the accuracy of the predicted target trajectory of the vehicle.
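This screening-and-selection step can be sketched as below, under the assumption that a per-trajectory boolean already records whether each candidate lies entirely within the drivable area; the function and parameter names are illustrative, not from the disclosure.

```python
def select_target_trajectory(candidates, confidences, in_drivable_area,
                             conf_threshold=None):
    # Keep only candidates contained in the drivable area: this is the
    # set of target trajectories to be determined.
    pending = [(c, s) for c, s, ok
               in zip(candidates, confidences, in_drivable_area) if ok]
    if not pending:
        return None
    # Prefer trajectories above the preset confidence threshold when one
    # is given, otherwise fall back to the highest-confidence candidate.
    if conf_threshold is not None:
        above = [(c, s) for c, s in pending if s > conf_threshold]
        if above:
            pending = above
    return max(pending, key=lambda cs: cs[1])[0]

# "B" has the highest confidence but lies outside the drivable area,
# so "C" (highest confidence among in-area candidates) is chosen.
target = select_target_trajectory(["A", "B", "C"], [0.2, 0.7, 0.5],
                                  [True, False, True])
```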
- the determining a candidate trajectory set including a plurality of candidate trajectories according to the position information of the moving object and the position information of the reference end point includes: determining M estimated end points in a preset area including the reference end point; and correspondingly generating M×N candidate trajectories according to the position information of the moving object, the M estimated end points, and N preset distances, to obtain the candidate trajectory set. Here each preset distance indicates the distance from the candidate trajectory to the midpoint of the line between the last sampling point in the position information of the moving object and the reference end point, and M and N are both integers greater than 0. In this way, multiple candidate trajectories can be fitted through multiple pre-estimation points and estimated end points.
- the determining M estimated end points in the preset area including the reference end point includes: determining the preset area of the reference end point according to the width of the road where the reference end point is located, dividing the preset area into M grids of predetermined size, and using the centers of the M grids as the M estimated end points. In this way, using the grid centers as the M estimated end points improves the accuracy of predicting the possible end points of the candidate trajectories.
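As a sketch, the grid construction might look like the following, assuming a square preset area whose side equals the road width; both the area shape and the cell size are illustrative assumptions.

```python
def grid_centers(ref_end, road_width, cell):
    # Divide a square preset area centred on the reference end point
    # into equal cells; the cell centres are the M estimated end points.
    half = road_width / 2.0
    n = max(1, int(road_width // cell))
    return [(ref_end[0] - half + (i + 0.5) * cell,
             ref_end[1] - half + (j + 0.5) * cell)
            for i in range(n) for j in range(n)]

# A 4 m wide road with 2 m cells gives a 2x2 grid, i.e. M = 4.
ends = grid_centers((10.0, 0.0), road_width=4.0, cell=2.0)
```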
- the correspondingly generating M×N candidate trajectories according to the position information of the moving object, the M estimated end points, and N preset distances to obtain the candidate trajectory set includes: determining the midpoint between the last sampling point in the position information of the moving object and the reference end point; determining N pre-estimation points according to the N preset distances and the midpoint; generating M×N candidate trajectories according to the N pre-estimation points and the M estimated end points; and screening the M×N candidate trajectories according to the environment information to obtain the candidate trajectory set. In this way, by setting constraint conditions, trajectories among the M×N candidates that do not meet the constraints are eliminated, yielding a more accurate candidate trajectory set.
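The generation step can be sketched as follows: each pre-estimation point is placed at the stated preset distance from the midpoint (laterally, in this sketch), and a quadratic is fitted through the last sampling point, the pre-estimation point, and the estimated end point. The quadratic fit is an assumption for illustration; the disclosure does not fix a curve family.

```python
import numpy as np

def generate_candidates(last_point, est_ends, preset_distances,
                        samples=10):
    # One candidate trajectory per (estimated end point, preset
    # distance) pair: M end points x N distances = M*N curves.
    p0 = np.asarray(last_point, dtype=float)
    trajectories = []
    for end in est_ends:
        e = np.asarray(end, dtype=float)
        mid = (p0 + e) / 2.0
        heading = e - p0
        normal = np.array([-heading[1], heading[0]])
        n = np.linalg.norm(normal)
        normal = normal / n if n > 0 else normal
        for d in preset_distances:
            via = mid + d * normal          # pre-estimation point
            t = np.array([0.0, 0.5, 1.0])   # curve parameter
            ts = np.linspace(0.0, 1.0, samples)
            xs = np.polyval(np.polyfit(t, [p0[0], via[0], e[0]], 2), ts)
            ys = np.polyval(np.polyfit(t, [p0[1], via[1], e[1]], 2), ts)
            trajectories.append(np.stack([xs, ys], axis=1))
    return trajectories

# M = 2 estimated end points, N = 3 preset distances -> 6 candidates;
# a screening pass against environment constraints would follow.
cands = generate_candidates((0.0, 0.0), [(10.0, 0.0), (10.0, 2.0)],
                            [-0.5, 0.0, 0.5])
```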
- the determining the position information of the reference end point of the moving object according to the position information of the moving object includes: predicting a candidate end point of the moving object from the position information of the moving object through a neural network, and determining the position information of the reference end point of the moving object according to the candidate end point. In this way, using a trained neural network to predict the reference end point of a moving object not only improves the accuracy of the prediction but also speeds it up.
- the predicting the candidate end point of the moving object from the location information of the moving object through a neural network includes: inputting the location information of the moving object into a first neural network to predict a first candidate end point of the moving object; and the determining the position information of the reference end point of the moving object according to the candidate end point includes: determining the position information of the reference end point of the moving object according to the first candidate end point and the environment information of the moving object. In this way, a predicted first candidate end point that overlaps a pedestrian or a sidewalk, or that lies beyond the road edge, is adjusted to obtain reference end point position information with higher accuracy.
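The adjustment of a predicted candidate end point against the environment can be sketched as below; the road-edge bounds and the occupancy set are hypothetical stand-ins for the environment information, and the back-off rule is an illustrative choice.

```python
def adjust_end_point(candidate, road_left, road_right, blocked):
    # Clamp the first candidate end point between the road edges, then
    # back it off while it falls on a blocked cell (pedestrian or
    # sidewalk overlap, modelled here as a hypothetical occupancy set).
    x, y = candidate
    y = min(max(y, road_right), road_left)   # pull inside the road
    while (round(x), round(y)) in blocked:
        x -= 1.0                             # step back along travel
    return (x, y)

# A predicted end point 0.5 m past the left road edge, on a cell
# occupied by a pedestrian, is moved back inside the road.
ref_end = adjust_end_point((20.0, 3.5), road_left=3.0, road_right=-3.0,
                           blocked={(20, 3)})
```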
- the predicting the candidate end point of the moving object from the position information of the moving object through a neural network includes: inputting the position information and the environment information of the moving object into a second neural network to predict a second candidate end point of the moving object; and the determining the location information of the reference end point of the moving object according to the candidate end point includes: determining the location information of the reference end point of the moving object according to the second candidate end point and the environment information.
- the neural network training method includes: inputting the position information of the moving object, and/or the position information of the moving object together with the road image collected by the moving object, into the neural network to obtain a first predicted end point; determining, according to the ground-truth trajectory of the moving object, a first prediction loss of the neural network with respect to the first predicted end point; and adjusting the network parameters of the neural network according to the first prediction loss to train the neural network.
- the first prediction loss is used to adjust parameters such as the weights of the neural network, so that the adjusted network produces more accurate predictions.
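The first-prediction-loss training step above can be sketched with a toy linear predictor; the layer, features, and learning rate are all illustrative assumptions, since the disclosure does not specify the architecture.

```python
import numpy as np

def first_prediction_loss(pred_end, gt_trajectory):
    # Squared distance between the first predicted end point and the
    # final point of the ground-truth trajectory.
    gt_end = np.asarray(gt_trajectory[-1], dtype=float)
    return float(np.sum((np.asarray(pred_end) - gt_end) ** 2))

# Toy "network": predicted end point = W @ features.  The weights W
# are adjusted with the gradient of the first prediction loss.
features = np.array([1.0, 2.0])
gt_traj = [(0.0, 0.0), (1.5, 2.0), (3.0, 4.0)]
W = np.zeros((2, 2))
for _ in range(200):
    pred = W @ features
    grad = 2.0 * np.outer(pred - np.asarray(gt_traj[-1]), features)
    W -= 0.05 * grad
final_loss = first_prediction_loss(W @ features, gt_traj)
```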
- the training method of the neural network includes: inputting the location information of the moving object and the map information corresponding to the location information into the neural network to obtain a second predicted end point; determining, according to the ground-truth trajectory of the moving object, a second prediction loss of the neural network with respect to the second predicted end point; determining the deviation between the second predicted end point and a preset constraint condition; adjusting the second prediction loss according to the deviation to obtain a third prediction loss; and adjusting the network parameters of the neural network according to the third prediction loss to train the neural network. In this way, the accuracy of the target trajectory output by the neural network is higher.
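The constraint-adjusted loss can be sketched as below; the additive penalty form and its weight are illustrative assumptions, as the disclosure only states that the second prediction loss is adjusted by the deviation from the preset constraint.

```python
def third_prediction_loss(pred_end, gt_end, constraint_deviation,
                          penalty_weight=1.0):
    # Second prediction loss (squared error to the ground-truth end
    # point) adjusted by the deviation from the preset constraint,
    # e.g. the distance the end point lies outside the drivable area.
    second_loss = sum((p - g) ** 2 for p, g in zip(pred_end, gt_end))
    return second_loss + penalty_weight * constraint_deviation

# End point 0.5 m outside the road edge: base error plus a penalty.
loss = third_prediction_loss((9.0, 3.5), (10.0, 3.0),
                             constraint_deviation=0.5)
```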
- An embodiment of the present disclosure provides a trajectory prediction device including: a reference end point prediction module configured to determine the position information of the reference end point of the moving object according to the position information of the moving object; a candidate trajectory determination module configured to determine, according to the position information of the moving object and the position information of the reference end point, a candidate trajectory set including a plurality of candidate trajectories, wherein the end-point position information of at least two of the candidate trajectories differs from the position information of the reference end point; and a target trajectory determination module configured to determine the target trajectory of the moving object from the candidate trajectory set.
- an embodiment of the present disclosure provides a computer storage medium storing computer-executable instructions that, when executed, implement the above-described method steps.
- the embodiments of the present disclosure provide a computer device including a memory and a processor, the memory storing computer-executable instructions, and the processor implementing the above-described method steps when running the computer-executable instructions stored in the memory.
- the embodiments of the present disclosure provide a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device implements the trajectory prediction method described in any one of the above items.
- the embodiments of the present disclosure provide a method, device, equipment, and storage medium for trajectory prediction.
- the reference end point of the moving object is predicted according to the position information of the current position of the moving object; then a candidate trajectory set consisting of multiple candidate trajectories of the moving object is determined according to the reference end point and the historical trajectory; finally, the target trajectory of the moving object is determined from the candidate trajectory set. In this way, by considering the position information of the moving object, the reference end point is predicted and the multiple candidate trajectories the moving object may travel are inferred, and an optimal trajectory is selected from them as the target trajectory, so that the future motion trajectory of the moving object is estimated more accurately.
- FIG. 1A is a schematic diagram of a system architecture to which a trajectory prediction method according to an embodiment of the present disclosure can be applied;
- FIG. 1B is a schematic diagram of an implementation process of a trajectory prediction method according to an embodiment of the disclosure.
- FIG. 1C is a schematic diagram of an implementation process of a trajectory prediction method according to an embodiment of the present disclosure
- FIG. 2A is a schematic diagram of another implementation process of a trajectory prediction method according to an embodiment of the present disclosure
- FIG. 2B is a schematic diagram of the implementation process of a neural network training method according to an embodiment of the disclosure;
- FIG. 3A is a schematic diagram of the implementation structure of a candidate trajectory network according to an embodiment of the disclosure;
- FIG. 3B is a schematic diagram of the implementation structure of a candidate trajectory network according to an embodiment of the disclosure;
- FIG. 4A is a schematic structural diagram of candidate trajectory generation in an embodiment of the disclosure;
- FIG. 4B is a schematic diagram of a process of generating candidate trajectories on multiple reference routes according to an embodiment of the present disclosure
- FIG. 5 is a schematic diagram of the structural composition of a trajectory prediction device according to an embodiment of the disclosure.
- FIG. 6 is a schematic diagram of the composition structure of a computer device according to an embodiment of the disclosure.
- This embodiment proposes a method for trajectory prediction to be applied to computer equipment.
- the computer device may include a moving object or a non-moving object.
- the functions implemented by this method can be realized by a processor in the computer device invoking program code, and the program code can be stored in a computer storage medium; it follows that the computer device includes at least a processor and a storage medium.
- FIG. 1A is a schematic diagram of a system architecture to which the trajectory prediction method of the embodiment of the present disclosure can be applied; as shown in FIG. 1A, the system architecture includes: a vehicle terminal 131, a network 132 and a trajectory prediction terminal 133.
- the vehicle terminal 131 and the trajectory prediction terminal 133 can establish a communication connection through the network 132, and the vehicle terminal 131 reports location information to the trajectory prediction terminal 133 through the network 132 (or the trajectory prediction terminal 133 obtains the location information of the vehicle terminal 131 automatically).
- the trajectory prediction terminal 133 determines the position information of the reference end point of the vehicle; then, through the position information of the vehicle and the position information of the reference end point, multiple candidate trajectories are predicted; finally, The trajectory prediction terminal 133 selects the target trajectory of the vehicle from the multiple candidate trajectories.
- the vehicle terminal 131 may include an in-vehicle image acquisition device, and the trajectory prediction terminal 133 may include an in-vehicle vision processing device or a remote server with visual information processing capabilities.
- the network 132 may adopt a wired connection or a wireless connection.
- in the case that the trajectory prediction terminal 133 is a vehicle-mounted visual processing device, the vehicle terminal 131 can communicate with it through a wired connection, such as data communication over a bus; in the case that the trajectory prediction terminal 133 is a remote server, the vehicle terminal can interact with the remote server through a wireless network.
- the vehicle terminal 131 may be a vehicle-mounted visual processing device with a vehicle-mounted image acquisition module, and is specifically implemented as a vehicle-mounted host with a camera.
- the trajectory prediction method of the embodiment of the present disclosure may be executed by the vehicle terminal 131, and the foregoing system architecture may not include a network and a trajectory prediction terminal.
- FIG. 1B is a schematic diagram of the implementation process of the trajectory prediction method according to the embodiment of the present disclosure; the description below is given with reference to FIG. 1B:
- Step S101 Determine the position information of the reference end point of the moving object according to the position information of the moving object.
- the moving objects include vehicles of various functions, vehicles with various numbers of wheels, robots, aircraft, guide devices for the blind, smart home devices, smart toys, and the like. A vehicle is taken below as an example for illustration.
- the reference end point includes points outside the preset restriction type, where the preset restriction type includes at least one of the following: road edge points, obstacles, and pedestrians. That is, the reference destination does not include points on the edge of the road, points on the road where there are obstacles, points on the road where there are pedestrians, and so on.
- the location information of the moving object includes: time sequence location information of the moving object, or the historical trajectory of the moving object. Determining the reference end point can be achieved in the following ways.
- the reference end point is predicted from the location information of the moving object; or a road-encoded image is used as the network input, combined with the position information of the moving object, to predict the reference end point. The predicted reference end point can then be constrained, for example by setting a specific area so that only points falling within that area can be determined as the reference end point.
- the method of determining the reference end point in step S101 may be any method, as long as it is a method determined based on the position information of the moving object.
- a method of determining an end point by location information and a machine learning model such as feature learning or reinforcement learning can be used.
- the neural network can input at least the position information of the moving object and output the end point or the trajectory including the end point.
- the end point can be determined by using the location information and environmental information around the moving object as the input of the neural network.
- the position information of the moving object can be used as the input of the neural network, and the output of the neural network and the environmental information around the moving object can be used to determine the end point.
- the location information and environmental information around the moving object can be used as the input of the neural network, and the output of the neural network and the environmental information around the moving object can be used to determine the reference end point.
- the trajectory of the moving object can be determined as the output of the neural network, the determined candidate trajectory can be adjusted based on the environment information around the moving object so that it does not overlap pedestrians or sidewalks, and the end point of the optimized candidate trajectory can be determined as the reference end point.
- a method using position information and a kinematic model of the moving object can be adopted.
- the position information, the motion model of the moving object, and the environment information around the moving object can be used to determine the reference end point.
- step S101 can also be implemented in the following two ways. One is: first, sample the position information of the vehicle (for example, the historical trajectory) to obtain a set of sampling points; then use a preset neural network to extract features from the sampling set; finally, feed the extracted features into the fully connected layer of the preset neural network to obtain the reference end point.
- the second is: sample the location information of the vehicle and, combined with the current running speed of the vehicle, predict the reference end point of the vehicle within a preset time period.
- the location information may be the vehicle's trajectory over a preset time period ending at the current time, for example, the trajectory within the last 3 seconds; the historical trajectory within those 3 seconds is then sampled with a step length of 0.3 seconds, and finally the obtained sampling points are used as prior information to predict the reference end point of the moving object.
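The sampling just described can be sketched as follows. This is an illustrative Python sketch rather than the patent's implementation; the `(t, x, y)` tuple format and the function name are assumptions for illustration, while the 3-second window and 0.3-second step follow the example in the text.

```python
# Illustrative sketch (not the patent's implementation): sample a 3-second
# historical trajectory at a 0.3-second step to obtain the prior sampling
# points. The (t, x, y) tuple format is an assumption for illustration.
def sample_history(trajectory, window=3.0, step=0.3):
    """trajectory: list of (t, x, y) tuples, t in seconds, newest last."""
    t_end = trajectory[-1][0]
    wanted = [round(t_end - k * step, 6) for k in range(int(window / step) + 1)]
    by_time = {round(t, 6): (x, y) for t, x, y in trajectory}
    # Keep only the recorded points that fall on the 0.3 s grid, oldest first.
    return [by_time[t] for t in reversed(wanted) if t in by_time]

# Example: a straight trajectory recorded every 0.1 s for 3 s.
traj = [(i * 0.1, i * 0.5, 0.0) for i in range(31)]   # t = 0.0 .. 3.0
points = sample_history(traj)
print(len(points))   # 11 sampling points: t = 0.0, 0.3, ..., 3.0
```

The resulting points would then serve as the prior information fed to the end-point prediction network.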
- Step S102 Determine a candidate trajectory set including multiple candidate trajectories according to the position information of the moving object and the position information of the reference end point.
- the position information of the end points of at least two of the candidate trajectories differs from the position information of the reference end point. That is, the candidate trajectory set includes some candidate trajectories (for example, one candidate trajectory) whose end-point position information matches that of the reference end point, as well as candidate trajectories whose end-point position information differs from it. Determining multiple candidate trajectories within a tolerance range in this way not only keeps the determined candidate trajectories reasonable but also enriches their diversity, so that the target trajectory can be screened out from a rich set of candidates, improving the accuracy of the predicted target trajectory.
- the above-mentioned step S102 can be implemented by the following process: first, M estimated end points are determined in a preset area containing the reference end point (the M estimated end points include the reference end point); then, according to the historical running trajectory, the M estimated end points and N preset distances, M × N candidate trajectories are generated to obtain the candidate trajectory set. The preset distance indicates the distance from the midpoint of the line between the last sampling point of the historical running trajectory and the end point to the candidate trajectory; M and N are both integers greater than 0.
- the candidate trajectory set is a curve set containing multiple candidate trajectories.
- the trajectory parameters of the candidate trajectory include: the coordinates of the estimated end point and the distance between the midpoint of the candidate trajectory and the line between the last sampling point of the historical trajectory and the estimated end point.
- the correction value of the trajectory parameters consists of correction values for the estimated end-point coordinates and for the distance; the curve shape of the candidate trajectory is adjusted based on the correction value to make the adjusted candidate trajectory more reasonable.
- Step S103 Determine the target trajectory of the moving object from the set of candidate trajectories.
- a target trajectory is selected from a plurality of candidate trajectories according to the confidence level and driving information of each candidate trajectory, so that the moving object travels along the target trajectory.
- this effectively addresses the problem in the related art that the output is a set of discrete coordinate points for the vehicle's future time series: discrete coordinate points can hardly reflect the vehicle's future driving trend and are of little use in practical applications.
- by predicting the reference end point of the moving object, the multiple candidate trajectories that the moving object may travel are inferred, and an optimal trajectory is selected from them as the target trajectory of the moving object, thereby estimating the future trajectory of the moving object more accurately.
- the location information of the reference end point of the moving object can be predicted by combining the environment information around the moving object and the location information of the moving object, which can be achieved through the following steps:
- the environment information of the mobile object is acquired according to the position information of the mobile object.
- the environmental information around the historical trajectory is obtained, for example road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, or information about other moving objects around the historical trajectory. Road information includes at least: current road conditions (for example, congestion), road width and road intersection information (for example, whether it is an intersection); obstacle information includes whether there are roadblocks or other obstacles on the current road; pedestrian information includes whether there are pedestrians on the road and their locations; traffic light information includes at least the number of traffic lights installed on the road and whether they are working normally; traffic sign information includes the type and duration of the light the traffic signal is currently showing; traffic rule information includes at least whether the current road is right-hand or left-hand drive, one-way or two-way, and the types of vehicles permitted on the road.
- obtaining the environmental information of the mobile object according to the location information of the mobile object can be implemented in the following manners:
- Manner 1: Determine the environmental information from the image information collected by the moving object.
- images around the historical trajectory of the moving object are first collected through the camera configured on the moving object (for example, by image acquisition of the environment around the moving object) to obtain image information, and the environment around the moving object is then obtained by analyzing the image content.
- the road information, obstacle information, pedestrian information, traffic light information and so on around the moving object are obtained, and the position information of the moving object's possible reference destinations can be predicted through comprehensive analysis of this information.
- possible reference end points falling into a preset restricted type are excluded, so as to obtain the reference end point of the moving object.
- Manner 2: Determine the environmental information from the communication information characterizing the current environment received by the mobile object.
- the mobile object uses a communication device to receive communication information characterizing the current environment sent by other devices, and obtains the environmental information by analyzing the communication information. The communication information includes at least the environmental parameters of the mobile device's location, for example road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, or information about other moving objects.
- the location information of the reference end point of the moving object is determined according to the environment information.
- the position information of the reference end point of the moving object can be determined through the following steps:
- the first step is to determine the intersection information of the road where the moving object is located according to the position information of the moving object.
- the intersection information of the road ahead, along which the moving object continues to run following the historical trajectory, is determined; the intersection information includes the number of intersections, the forks of each intersection, and so on.
- position information of multiple reference end points of the moving object is determined.
- the reference end point on the road corresponding to each fork is determined respectively; that is, the position information of a possible reference end point is predicted on the road corresponding to each fork, and the reference end points of different forks differ.
- suppose the road where the moving object is located meets a crossroads. First, the three forks of the crossroads other than the direction opposite to the moving object's direction of travel are determined, and then the position information of the reference end points on the roads corresponding to those three forks is predicted. In this way multiple reference end points are predicted, and the target end point with the greatest confidence is then selected from them, reducing missed reference end points and improving the accuracy of the determined target trajectory.
- the step S101 may be implemented through the following steps:
- At least one reference route of the moving object is determined according to the position information of the moving object.
- the current road conditions included in the location information of the moving object, and whether it is at an intersection, are input into the neural network to predict multiple reference routes. For example, if the location information indicates that the moving object is on a straight one-way road, there is one reference route: the route along that one-way road in the moving object's direction of travel. If the location information indicates that the moving object is at a T-junction, there are three reference routes, one along each road of the T-junction in the moving object's direction of travel. If the location information indicates that the moving object is at a crossroads, there are four reference routes, one along each road of the crossroads in the moving object's direction of travel. In this way, combined with the position information of the moving object, the reference routes that the moving object may travel can be predicted by comprehensive consideration.
- the second step is to determine the location information of the reference destination according to the reference route.
- the most likely future driving route of the moving object is determined, and the position information of the reference end point of the moving object is determined on that reference route.
- the roadblock information and road edges of the reference route are determined.
- obstacles on each reference route, such as pedestrians, faulty vehicles or roadblocks, are determined.
- the drivable area of the reference route is determined according to the roadblock information and the road edge of the reference route.
- the drivable area on the reference route is delineated.
- the area within the road edges of the reference route that is free of roadblocks is regarded as the drivable area, which improves the validity of the drivable area on each reference route.
- the position information of the reference end point of the moving object in the drivable area is determined.
- the reference end point of the moving object in the drivable area of the reference route may be predicted based on the historical trajectory of the moving object.
- the above provides a way to predict the end point of the running trajectory.
- after the network predicts the end point of the trajectory, candidate trajectories are generated based on the end point obtained in the first step.
- each candidate trajectory corresponds to an end point, and the end point of a candidate trajectory can neither go beyond the road nor lie where there are obstacles (such as pedestrians); this improves the validity of the predicted reference end point.
- a set of candidate trajectories on the reference route is determined.
- the candidate trajectory set on the reference route is obtained; in this way, the candidate trajectory set on each reference route is obtained, and the target trajectory of the moving object is determined from the candidate trajectory sets on the at least one reference route.
- the candidate trajectory that finally meets the constraint conditions is determined as the most likely driving trajectory of the moving object, that is, the target driving trajectory.
- the embodiment of the present disclosure provides a trajectory prediction method, which is applied to a moving object, such as a vehicle, as an example.
- the method shown in Figure 1C is explained:
- Step S111 Predict the reference end point of the moving object according to the historical trajectory of the moving object.
- Step S112 Determine M estimated end points within a preset area containing the reference end point.
- M estimated end points are determined within a preset area of the reference route that includes the reference end point.
- the preset area is the area around the reference end point, for example, a square with a side length of 100m with the reference end point as the center, and then the square is divided into a plurality of square grids according to a step length of 5m.
- the center of each grid cell is an estimated end point.
- the road surface area within the road edges of the reference route that contains the reference end point is taken as the preset area. In a specific example, the road is 4 meters wide, and an area 4 meters wide and 100 meters long that contains the reference end point is taken as the preset area. The preset area is then divided into M grid cells of a predetermined size, and the centers of the M cells are used as the M estimated end points; for example, cells of the same size with a side length of 10 cm may be used. Taking the centers of the M cells as the M estimated end points, that is, the possible end points of the candidate trajectories, the possible candidate-trajectory end points on each reference road are obtained.
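The grid construction above can be sketched minimally. The square 100 m preset area and 5 m step follow the earlier example in the text; the function name and tuple representation are our assumptions, not the patent's.

```python
# Hedged sketch: divide a square preset area centred on the reference end
# point into a regular grid and take the cell centres as the M estimated end
# points. The 100 m side and 5 m step follow the example in the text.
def estimated_end_points(ref, side=100.0, step=5.0):
    rx, ry = ref
    n = int(side / step)          # cells per axis
    half = side / 2.0
    return [(rx - half + (i + 0.5) * step,
             ry - half + (j + 0.5) * step)
            for i in range(n) for j in range(n)]

pts = estimated_end_points((0.0, 0.0))
print(len(pts))   # M = 20 * 20 = 400 cell centres
```

For a rectangular road-surface preset area (the 4 m by 100 m example), the same idea applies with different side lengths per axis.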
- Step S113 According to the position information of the moving object, the M estimated end points and N preset distances, M × N candidate trajectories are correspondingly generated to obtain the candidate trajectory set.
- the preset distance indicates the distance from the midpoint of the line between the last sampling point in the position information of the moving object and the reference end point to the candidate trajectory; M and N are integers greater than zero.
- first, the midpoint between the last sampling point in the position information of the moving object and the reference end point is determined; second, N pre-estimated points are determined based on the N preset distances and that midpoint. Each pre-estimated point is a point on a candidate trajectory, because each preset distance is measured from the midpoint of the line between the last sampling point in the position information of the moving object (for example, the historical trajectory) and the reference end point; the N preset distances thus correspond to N pre-estimated points.
- M × N candidate trajectories are generated; that is, based on the N pre-estimated points and one estimated end point, N candidate trajectories can be obtained by fitting, and based on the N pre-estimated points and the M estimated end points, M × N candidate trajectories can be fitted.
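One way to realize this fitting is sketched below. The quadratic Bézier parameterization is our assumption (the patent does not fix a curve family); it is chosen so that the curve's midpoint sits exactly the preset distance d from the midpoint of the chord between the last observed point and the estimated end point, matching the trajectory parameters described above.

```python
# Illustrative sketch: generate M x N candidate trajectories. Each candidate
# is a quadratic Bezier from the last observed point to one estimated end
# point, bowed so that the curve midpoint lies a preset distance d off the
# midpoint of the chord. All names are ours, not the patent's.
import math

def candidate_trajectories(last_pt, end_pts, distances, samples=5):
    curves = []
    for ex, ey in end_pts:
        sx, sy = last_pt
        mx, my = (sx + ex) / 2.0, (sy + ey) / 2.0       # chord midpoint
        chord = math.hypot(ex - sx, ey - sy)
        nx, ny = -(ey - sy) / chord, (ex - sx) / chord  # unit normal to chord
        for d in distances:
            # Control point chosen so B(0.5) is exactly d off the chord midpoint.
            cx, cy = mx + 2 * d * nx, my + 2 * d * ny
            pts = [((1 - t) ** 2 * sx + 2 * (1 - t) * t * cx + t ** 2 * ex,
                    (1 - t) ** 2 * sy + 2 * (1 - t) * t * cy + t ** 2 * ey)
                   for t in (i / (samples - 1) for i in range(samples))]
            curves.append(pts)
    return curves

curves = candidate_trajectories((0.0, 0.0),
                                [(10.0, 0.0), (10.0, 5.0)],   # M = 2 end points
                                [0.0, 1.0, -1.0])             # N = 3 distances
print(len(curves))   # M x N = 6 candidate trajectories
```

With the control point placed at twice the offset d from the chord midpoint, the Bézier value at t = 0.5 lands precisely at distance d, which is why this curve family reproduces the midpoint-distance parameter directly.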
- the M × N candidate trajectories are screened to obtain the candidate trajectory set. The environmental information used for screening can be obtained from images: for example, if an obstacle is detected in an image, no candidate trajectory may pass through the obstacle. Road information can also be obtained from the image, or used when setting the candidate trajectories.
- when generating candidate trajectories, the surrounding environmental information is considered, constraint conditions are set, and trajectories among the M × N candidates that do not meet the constraints are eliminated to obtain the candidate trajectory set; for example, candidate trajectories that pass through obstacles are eliminated.
- the above steps S112 and S113 give a way to "determine a set of candidate trajectories consisting of multiple candidate trajectories of the moving object". In this way, multiple estimated end points for possible candidate-trajectory end points are determined around the reference end point, and multiple candidate trajectories are then obtained by fitting based on the estimated end points and the preset distances. Representing the predicted target trajectory of the vehicle as a curve can reflect the trend of the trajectory, is more robust to noise, and has strong scalability.
- Step S114 Determine the trajectory parameter correction value of at least one candidate trajectory in the candidate trajectory set.
- step S114 can output the trajectory parameter correction value of each candidate trajectory based on a trained neural network; the trajectory parameters can also be output by, but are not limited to, a neural network trained by the training method described below.
- the trajectory parameters may include parameters used to describe the trajectory curve.
- the trajectory parameters may include, but are not limited to, the coordinates of the end points of the trajectory curve, and/or the distance between the midpoint of the trajectory curve and the line connecting the two end points of the trajectory curve, and so on.
- the candidate trajectory is adjusted according to the trajectory parameter correction value to improve the rationality of the obtained target trajectory.
- the correction value may include, but is not limited to, the adjustment value of the end-point coordinates of the trajectory curve, and/or the adjustment value of the distance between the midpoint of the trajectory curve and the line connecting its two end points.
- the trajectory parameter correction value may be determined by a neural network trained in an embodiment of the present disclosure, or may be determined by a neural network trained in other ways.
- Step S115 Adjust the candidate trajectories in the candidate trajectory set according to the trajectory parameter correction value to obtain an updated candidate trajectory set.
- each candidate in the set of candidate trajectories is modified to obtain multiple modified candidate trajectories, that is, an updated set of candidate trajectories.
- the candidate trajectory is corrected based on the correction value output by the trained neural network, thereby improving the accuracy of the candidate trajectory in the updated candidate set.
- Step S116 Determine the target trajectory of the moving object from the candidate trajectory set according to the driving information of the moving object and the confidence level.
- the target trajectory is filtered from the updated candidate trajectory set. The driving information of the moving object includes at least the road surface information of the moving object and/or the control information of the moving object; for example:
- the road surface information includes: the width of the road surface, the edge of the road surface and the center line on the road surface, etc.
- the control information of the moving object includes: driving direction, driving speed, and state of vehicle lights (for example, the state of turn signals) and so on.
- the predicted drivable area of the moving object is determined; the drivable area is shown in FIG. 3A as the drivable area 46 of the vehicle.
- the road surface information includes at least whether the road is one-way (same direction), the width of the road surface, and whether the road surface is at an intersection. For example, if the road surface information indicates that the road section is one-way and not at an intersection, the maximum predicted drivable area is the area in front of the vehicle covering the entire road, that is, the area is one-way; if the road surface information indicates that the road section is at an intersection, the predicted drivable area is the area around the vehicle covering the entire road, that is, the area includes the three directions of the intersection (turn left, go straight and turn right).
- if the road surface information indicates that the road section is at an intersection and the preset drivable area includes the three directions of the intersection (turn left, go straight and turn right), but the control information indicates that the vehicle wants to turn left, the predicted drivable coverage can be reduced from covering the three directions to covering only the left-turn direction. This makes the coverage of the drivable area more accurate, so the final target trajectory of the vehicle can be determined more accurately.
- the second step is to determine the target trajectory of the moving object from the updated set of candidate trajectories according to the drivable area and the confidence level.
- the candidate trajectories included in the drivable area in the updated candidate trajectory set are retained to obtain the to-be-determined target trajectory set; then, a trajectory in the to-be-determined target trajectory set whose confidence is greater than a preset confidence threshold is determined as the target trajectory. For example, the candidate trajectory with the greatest confidence in the to-be-determined target trajectory set is used as the target trajectory, which substantially improves the accuracy of the predicted target trajectory of the vehicle.
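The two-stage screening just described might be sketched as follows. The data layout (a list of trajectory/confidence pairs) and the `drivable` predicate are illustrative assumptions, not the patent's representation.

```python
# Sketch of the screening step (names are ours): keep candidates whose end
# point lies inside the drivable area, then among those pick the candidate
# whose confidence exceeds the threshold and is largest.
def select_target(candidates, drivable, threshold=0.5):
    """candidates: list of (trajectory, confidence); trajectory ends at traj[-1].
    drivable: predicate taking an (x, y) point."""
    pending = [(traj, conf) for traj, conf in candidates if drivable(traj[-1])]
    qualified = [(traj, conf) for traj, conf in pending if conf > threshold]
    if not qualified:
        return None
    return max(qualified, key=lambda tc: tc[1])[0]

in_lane = lambda p: -2.0 <= p[1] <= 2.0          # a 4 m wide straight lane
cands = [([(0, 0), (10, 0)], 0.8),               # stays in lane, confident
         ([(0, 0), (10, 5)], 0.9),               # leaves the lane
         ([(0, 0), (10, 1)], 0.4)]               # in lane, low confidence
print(select_target(cands, in_lane))   # [(0, 0), (10, 0)]
```

Note that the most confident candidate overall is rejected because it leaves the drivable area; confidence only ranks the candidates that survive the area check.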
- the historical trajectory is first used to predict the candidate trajectory set, and the drivable area to which a candidate trajectory should belong is then further narrowed down according to the control information and the road surface, so as to screen the candidate trajectories.
- step S116 can be implemented through the following steps:
- Step S161 Determine the confidence level of the candidate trajectory in the candidate trajectory set.
- the confidence level is used to indicate the probability that the candidate trajectory is the target trajectory.
- the confidence level may be determined by a neural network trained in an embodiment of the present disclosure, or may be determined by a neural network trained in other ways.
- Step S162 Determine the target trajectory of the moving object from the candidate trajectory set according to the driving information of the moving object and the confidence level.
- one of, or a combination of, the road surface information of the moving object and the control information of the moving object is used as prior information to modify the candidate trajectories, so that the final target trajectory is more reasonable.
- the control information of the moving object may include, but is not limited to, at least one of the following: the running state of the engine, steering wheel steering information, or speed control information (such as deceleration, acceleration or braking).
- the step S162 can be implemented through the following two steps:
- the first step is to determine the travelable area (Freespace) of the mobile object according to the environmental information of the mobile object and/or the control information of the mobile object.
- the environmental information of the moving object may be road surface information
- determining the drivable area of the moving object includes the following methods. Method 1: determine the drivable area of the moving object according to the road surface information of the road where the moving object is located. Method 2: determine the drivable area of the moving object according to the control information of the moving object. Method 3: determine the drivable area of the moving object according to the road surface information of the road where the moving object is located together with the control information of the moving object.
- the road surface information refers to the road surface information on which the vehicle is currently running
- the control information refers to the condition of the vehicle lights at the time corresponding to the collection of the historical trajectory of the vehicle.
- if the vehicle lights indicate a right turn, the control information is a right turn, and the vehicle's drivable area is determined to be the road area corresponding to the right turn.
- the drivable area may be understood as an area in which a moving object can travel, for example, a barrier-free and permitted road area.
- the predicted drivable area of the moving object is determined.
- the drivable area is shown in FIG. 3A, the drivable area 46 of the vehicle.
- the road surface information may include, but is not limited to, at least one of the following: whether the road is one-way, the width of the road surface, and whether the road surface is at an intersection. For example, if the road surface information indicates that the road section is one-way and not at an intersection, the maximum predicted drivable area is the area in front of the vehicle covering the entire road, that is, the area is one-way; if the road surface information indicates that the road section is at an intersection, the maximum predicted drivable area is the area around the vehicle covering the entire road, that is, the area contains the three directions of the intersection (turn left, go straight and turn right).
- the predicted drivable area is adjusted to obtain the drivable area. For example, if the road surface information indicates that the road section is at an intersection and the preset drivable area includes the three directions of the intersection (turn left, go straight and turn right), but the control information indicates that the vehicle is going to turn left, the predicted coverage can be reduced from the three directions to only the left-turn direction. This makes the coverage of the drivable area more accurate, so the final target trajectory of the vehicle can be determined more accurately.
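A minimal sketch of this narrowing step, with the direction names and function signature as our assumptions:

```python
# Hedged sketch: narrow the predicted drivable directions at an intersection
# using the control information (e.g. a left turn signal), as described above.
def narrow_directions(road_is_intersection, turn_signal=None):
    directions = {"left", "straight", "right"} if road_is_intersection else {"straight"}
    if turn_signal in directions:
        return {turn_signal}       # control info narrows the coverage
    return directions

print(narrow_directions(True, "left"))   # only the left-turn direction remains
print(narrow_directions(True))           # all three intersection directions
```

A real implementation would narrow a geometric region rather than a set of labels, but the filtering logic is the same.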
- the second step is to determine the target trajectory of the moving object from the updated set of candidate trajectories according to the drivable area and the confidence level.
- the candidate trajectories in the updated candidate trajectory set are filtered to obtain the target trajectory; for example, the candidate trajectory with the greatest confidence in the to-be-determined target trajectory set is used as the target trajectory, which substantially improves the accuracy of the predicted target trajectory of the vehicle.
- with the current time as the end of the period, the trajectory of the vehicle within a preset period is obtained as the historical trajectory, for example, the trajectory within the last 3 seconds;
- the heading indicated by the vehicle lights is used as prior information to predict the target trajectory of the vehicle over a preset future time period, for example, the trajectory of the vehicle in the next 3 seconds; in this way, a highly accurate future driving trajectory can be provided for autonomous vehicles.
- the drivable area of the vehicle is narrowed using the control information of the vehicle, and the candidate trajectory with the highest confidence among the candidates contained in the drivable area is used as the target trajectory of the vehicle, making the prediction result more reliable and improving safety in practical applications.
- the embodiment of the present disclosure provides a trajectory prediction method.
- a trained neural network may be used to predict the reference end point of the moving object in step S101, as shown in FIG. 2A, which illustrates an embodiment of the trajectory prediction method of the present disclosure.
- Step S201 Predict the candidate end point of the moving object based on the location information of the moving object through a neural network.
- the neural network is a trained neural network, which can be obtained by training in the following multiple ways:
- Manner 1 First, input the position information of the moving object, and/or the position information of the moving object and the road image collected by the moving object into the neural network to obtain the first predicted end point.
- the position information of the moving object is used as the input of the neural network to predict the first predicted end point; or the position information of the moving object and the road image collected by the moving object are used as the input of the neural network to predict the first predicted end point.
- the first prediction loss of the neural network with respect to the first prediction end point is determined.
- the position information of the moving object, and/or the position information of the moving object together with the road image collected by the moving object, is input into the neural network to obtain multiple candidate trajectories; a rough confidence is then estimated for each candidate, and the accuracy of each trajectory in the candidate set is determined by comparison with the ground-truth trajectory. This accuracy is fed back to the neural network so that it can adjust network parameters such as weights, improving the classification accuracy of the neural network.
- the neural network performs operations such as convolution and deconvolution to obtain the confidences of, say, 100 candidate trajectories. Since the parameters of the neural network are initialized randomly during the training phase, the roughly estimated confidences of the 100 candidate trajectories are also random. To improve the accuracy of the candidate trajectories predicted by the neural network, the network must be told which of the 100 candidates are right and which are wrong. To this end, a comparison function compares the 100 candidate trajectories with the ground-truth trajectory and outputs 100 comparison values, each taking the value 0 or 1. These 100 comparison values are then fed to the neural network, which uses a loss function to supervise the candidate trajectories, so that the confidence of a candidate trajectory with comparison value 1 is increased and the confidence of one with comparison value 0 is reduced. In this way, the confidence of each candidate trajectory, that is, the classification result of the candidate trajectories, is obtained.
- the trajectory prediction loss corresponding to the classification result is used to adjust the weight parameters of the neural network.
- the network parameters of the neural network are adjusted to train the neural network.
- the weight parameter is the weight of the neuron in the neural network.
- the first prediction loss is a cross-entropy loss over the first-type candidate trajectory samples (for example, positive samples) and the second-type candidate trajectory samples (for example, negative samples).
- the prediction loss is used to adjust parameters such as the weight of the neural network, so that the adjusted neural network classification result is more accurate.
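A rough sketch of this supervision scheme follows: the comparison function and the 0/1 comparison values are as described above, and the binary cross-entropy over the resulting labels stands in for the first prediction loss. The trajectories, the 2 m threshold, and the confidence numbers are made-up illustrations, not values from the disclosure.

```python
import math

def compare(candidate, ground_truth, threshold=2.0):
    """Comparison function: outputs 1 if the average point-to-point
    distance between a candidate trajectory and the true-value
    trajectory is below `threshold` (metres), else 0."""
    dists = [math.dist(p, q) for p, q in zip(candidate, ground_truth)]
    return 1 if sum(dists) / len(dists) < threshold else 0

def cross_entropy(confidences, labels, eps=1e-7):
    """Binary cross-entropy between the network's rough confidences and
    the 0/1 comparison values; this plays the role of the first
    prediction loss used to adjust the weight parameters."""
    total = 0.0
    for c, y in zip(confidences, labels):
        c = min(max(c, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(c) + (1 - y) * math.log(1 - c))
    return total / len(labels)

gt = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
good = [(0.1, 0.0), (1.0, 1.1), (2.0, 2.1)]   # close to the true-value trajectory
bad = [(5.0, 0.0), (6.0, 1.0), (7.0, 2.0)]    # far from the true-value trajectory
labels = [compare(good, gt), compare(bad, gt)]  # -> [1, 0]
loss = cross_entropy([0.9, 0.2], labels)
```

Driving `loss` toward zero pushes the confidence of label-1 candidates up and label-0 candidates down, which is exactly the supervision described above.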
- Manner 2 First, input the location information of the moving object and the map information corresponding to the location information into the neural network to obtain a second predicted end point.
- the map information includes at least the geographic location of the current road, road width, road edge, and roadblock information.
- the second prediction loss of the neural network with respect to the second prediction end point is determined.
- the true value trajectory is compared with the second predicted end point to determine the second predicted loss of the neural network with respect to the second predicted end point.
- the deviation between the second predicted end point and the preset constraint condition is determined.
- the preset constraint conditions include the area in the road where the predicted end point can exist, for example, the area on the road excluding road edge points, obstacles, and pedestrians; for example, the deviation between the second predicted end point and the area where the predicted end point can exist is determined.
- the second prediction loss of the second prediction endpoint is adjusted to obtain the third prediction loss.
- when the deviation is relatively large, it means that the second predicted end point deviates seriously from the area where the predicted end point can exist, and the second prediction loss is adjusted accordingly in order to adjust the network parameters of the neural network.
- the network parameters of the neural network are adjusted to train the neural network.
- the above Manner 1 and Manner 2 constitute the training process of the neural network. Based on the position information of the moving object and the prediction loss, multiple iterations are performed so that the trajectory prediction loss of the candidate trajectories output by the trained neural network meets the convergence condition, making the target trajectory output by the neural network more accurate.
- Step S202 Determine the position information of the reference end point of the moving object according to the candidate end point.
- the position information of the reference end point is determined according to the candidate end point output by the neural network, or the position information of the reference end point is determined by combining the candidate end point output by the neural network and the environmental information.
- step S201 and step S202 can be implemented in two ways:
- Manner 1 First, input the position information of the moving object into a first neural network to predict the first candidate end point of the moving object.
- the position information of the reference end point of the moving object is determined.
- the first candidate end point is combined with environmental information, such as road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, and information about other moving objects, to perform a comprehensive analysis that predicts the position information of the reference end point.
- for example, a first candidate end point that overlaps with a pedestrian or sidewalk, or exceeds the edge of the road, is adjusted to obtain position information of a reference end point with higher accuracy.
- Manner 2 First, input the location information and environment information of the moving object into a second neural network to predict the second candidate end point of the moving object.
- the historical trajectory of the moving object, road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, and information about other moving objects are used as the input of the second neural network to predict the second candidate end point of the moving object.
- the position information of the reference end point of the moving object is determined.
- the predicted second candidate end point is in the drivable area based on the environmental information.
- the neural network is trained using the true-value trajectory, the candidate trajectory set, and the prediction loss, so that the trained neural network can output a target trajectory closer to the true-value trajectory; it can thus be better applied to predicting the future target trajectory of a moving object, and the accuracy of the predicted target trajectory is improved.
- FIG. 2B is a schematic diagram of the implementation process of the neural network training method of the embodiment of the present disclosure. As shown in FIG. 2B, the following description will be made in conjunction with FIG. 2B:
- Step S211 Determine the reference end point of the moving object according to the acquired position information of the moving object.
- the reference end point of the moving object is determined.
- Step S212 Determine M estimated end points in the preset area including the reference end points.
- M estimated end points are determined within a preset area of the reference route that includes the reference end point. First, according to the width of each reference route, the preset area around the reference end point of each reference route is determined; then, the preset area of each reference end point is divided into M grids of equal size, and the centers of the M grids are used as the M estimated end points.
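The grid construction in step S212 can be sketched as below. The rectangular shape of the preset area, its dimensions, and the row/column split are assumptions for illustration; the disclosure only specifies M equal grids whose centers become the estimated end points.

```python
def estimated_end_points(ref_end, area_w, area_h, rows, cols):
    """Divide a rectangular area centred on the reference end point into
    rows*cols equal grid cells and return the M = rows*cols cell
    centres as the estimated end points."""
    cx, cy = ref_end
    cell_w, cell_h = area_w / cols, area_h / rows
    centers = []
    for i in range(rows):
        for j in range(cols):
            x = cx - area_w / 2 + (j + 0.5) * cell_w
            y = cy - area_h / 2 + (i + 0.5) * cell_h
            centers.append((x, y))
    return centers

# a 4 m x 4 m area around a reference end point, split into M = 4 grids
pts = estimated_end_points((10.0, 20.0), area_w=4.0, area_h=4.0, rows=2, cols=2)
```

In practice the area width would follow the width of the reference route, as the text describes.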
- Step S213 According to the historical trajectory, the M estimated end points and N preset distances, correspondingly generate M ⁇ N candidate trajectories to obtain the candidate trajectory set.
- the preset distance is used to indicate the distance from the candidate trajectory to the midpoint of the line between the last sampling point in the historical trajectory and the reference end point; where M and N are both integers greater than 0.
- first, the midpoint between the last sampling point in the historical trajectory and the reference end point is determined; secondly, N estimated points are determined according to the N preset distances and the midpoint; each estimated point is a point on a candidate trajectory.
- since the preset distance is the distance from the candidate trajectory to the midpoint between the last sampling point in the historical trajectory and the reference end point, the N preset distances correspond to N estimated points.
- for each estimated end point, N candidate trajectories can be obtained by curve fitting; based on the N estimated points and the M estimated end points, M × N candidate trajectories can therefore be obtained by fitting.
- Step S214 Determine the average distance between each candidate trajectory in the candidate trajectory set and the true value trajectory.
- the distance between each candidate trajectory and the true value trajectory is first determined, and then the multiple distances obtained are averaged.
- Step S215 Determine the candidate trajectory whose average distance is less than the preset distance threshold as the first-type candidate trajectory sample.
- the candidate trajectory whose average distance is less than the preset distance threshold indicates that the gap between the candidate trajectory and the true value trajectory is small.
- the first type of candidate trajectory samples can also be understood as the output value of the comparison function.
- Step S216 Determine at least a part of candidate trajectories in the candidate trajectory set except for the first-type candidate trajectory samples as second-type candidate trajectory samples.
- first, all or part of the candidate trajectories in the candidate trajectory set except for the first-type candidate trajectory samples are determined to be the second-type candidate trajectory samples.
- the number of candidate trajectory samples of the second type is determined according to the ratio of 3:1 between the candidate trajectory samples of the second type and the candidate trajectory samples of the first type.
- the first-type candidate trajectory samples are closer to the true-value trajectory than the second-type candidate trajectory samples; from a certain perspective, this can be understood as the first-type candidate trajectory samples being more credible than the second-type candidate trajectory samples.
- the ratio of second-type candidate trajectory samples to first-type candidate trajectory samples is set to 3:1, which prevents an excessive number of second-type candidate trajectory samples from dominating the trajectory prediction loss corresponding to the classification result, which would otherwise lead to unsatisfactory training results for the neural network.
- steps S214 to S216 give a way to compare the true-value trajectory of the moving object with the candidate trajectories to determine the classification result of the candidate trajectories, in which determining the first-type candidate trajectory samples and the second-type candidate trajectory samples completes the process of classifying the candidate trajectories.
- both the true trajectory and the candidate trajectory are input to the comparison function. If the similarity between the candidate trajectory and the true trajectory is greater than the preset similarity threshold, the comparison function outputs 1; otherwise, the comparison function outputs 0. In this way, the accuracy of the classification is further improved.
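Steps S214 to S216 can be sketched roughly as follows. The 2 m threshold and the 3:1 negative-to-positive ratio come from the text; the example trajectories and the uniform-sampling helper are illustrative assumptions.

```python
import math
import random

def average_distance(traj_a, traj_b):
    """Mean distance between corresponding sampled points (step S214)."""
    return sum(math.dist(p, q) for p, q in zip(traj_a, traj_b)) / len(traj_a)

def label_candidates(candidates, gt, threshold=2.0, neg_ratio=3, seed=0):
    """Split candidates into first-type samples (average distance below
    the threshold, step S215) and uniformly sampled second-type samples
    kept at neg_ratio:1 relative to the first type (step S216)."""
    pos = [c for c in candidates if average_distance(c, gt) < threshold]
    neg_pool = [c for c in candidates if average_distance(c, gt) >= threshold]
    rng = random.Random(seed)  # fixed seed only to make the sketch repeatable
    k = min(len(neg_pool), neg_ratio * max(len(pos), 1))
    neg = rng.sample(neg_pool, k)
    return pos, neg

gt = [(i, 0.0) for i in range(5)]  # a straight true-value trajectory
candidates = [[(i, off) for i in range(5)]
              for off in (0.5, 3.0, 4.0, 5.0, 6.0, 7.0)]
pos, neg = label_candidates(candidates, gt)
```

Here only the candidate offset by 0.5 m qualifies as a first-type sample, and three of the five remaining candidates are kept as second-type samples.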
- Step S217 Determine the cross entropy loss of the first type candidate trajectory sample and the second type candidate trajectory sample, where the cross entropy loss is the trajectory prediction loss.
- Step S218 The trajectory prediction loss corresponding to the classification result is used to adjust the network parameters in the neural network to train the neural network.
- a large number of trajectories in the data set are used to train the neural network. Since the data set contains complex urban scenes and the data is collected from the perspective of an autonomous vehicle, it is closer to practical applications.
- the neural network trained on the data set is suitable for trajectory prediction in various scenarios, so that the trained neural network predicts the target trajectory with higher accuracy.
- after step S213, the method further includes:
- Step S231 using the neural network to determine the trajectory parameter adjustment value of the candidate trajectory.
- the adjustment value may be a prediction deviation between the candidate trajectory predicted by the neural network and the true value trajectory.
- Step S232 Determine the deviation between the candidate trajectory and the true value trajectory.
- the deviation may be the true difference between the end point coordinates of the true value trajectory and the end point coordinates of the candidate trajectory.
- Step S233 Determine the adjusted forecast loss according to the deviation and the adjustment value.
- Step S234 The adjusted prediction loss is used to adjust the weight parameter of the preset neural network, so that the adjusted prediction loss output by the preset neural network meets the convergence condition.
- the adjusted loss is a Euclidean distance loss, and based on the Euclidean distance loss, the weight parameter of the neural network is adjusted so that the gap between the candidate trajectory and the true-value trajectory becomes smaller.
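A minimal sketch of the adjusted loss in steps S231 to S234, assuming the trajectory parameter correction values and the true deviations are 2-D end-point offsets (the exact parameterization is not fixed in this passage):

```python
import math

def refinement_loss(predicted_offsets, true_deviations):
    """Euclidean-distance loss between the correction predicted by the
    network (step S231) and the true deviation of the candidate end
    point from the true-value end point (step S232); minimising it
    drives the corrected candidate toward the true-value trajectory."""
    total = 0.0
    for (px, py), (tx, ty) in zip(predicted_offsets, true_deviations):
        total += math.hypot(px - tx, py - ty)
    return total / len(predicted_offsets)

# the network predicted a 0.5 m correction, the true deviation is 1.0 m
loss = refinement_loss([(0.5, 0.0)], [(1.0, 0.0)])
```

Gradient descent on this loss is what step S234 describes: the weight parameters are adjusted until the adjusted prediction loss converges.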
- a knowledge candidate trajectory network which integrates prior knowledge into vehicle trajectory prediction.
- the vehicle trajectory is modeled as a continuous curve parameterized by the end point and a distance parameter τ.
- the vehicle trajectory model is robust to noise and provides a more flexible way to integrate prior knowledge into the trajectory prediction; vehicle trajectory prediction is then formulated as a task of candidate trajectory generation and refinement.
- the classification module selects the best candidate trajectory, and the refinement module performs trajectory regression and predicts the end point to generate the final trajectory prediction. In this way, movement and intention can be reflected more naturally, robustness to noise is stronger, and prior information can be integrated more flexibly into the learning pipeline.
- the embodiments of the present disclosure provide a large-scale vehicle trajectory data set and new evaluation criteria in order to evaluate the proposed method and better promote vehicle prediction research in autonomous driving.
- This new data set contains millions of vehicle trajectories in complex urban driving scenes, and each vehicle has richer information, such as vehicle control information and/or road structure information for at least some vehicles.
- experiments of different time lengths are performed and the fitting error of the trajectory of length T is calculated.
- the time T is set to 6 seconds.
- the fitted trajectory obtained after cubic curve fitting is performed on the predicted point is the candidate trajectory.
- the embodiment of the present disclosure uses two control parameters, namely the end point and the preset distance τ, together with the sampling points on the historical trajectory, to represent the curve.
- FIG. 3A is a schematic diagram of the network structure for generating candidate trajectories in an embodiment of the present disclosure. As shown in FIG. 3A, the position information p_in of the vehicle 41, the control information l_in, the vehicle orientation information d_in, and the road surface information (for example, lane lines, road width, and traffic conditions) are acquired; this information is the detection result of the automatic driving system and, except for the road information, is information about the vehicle to be predicted over a historical time period.
- the road information is the map information around the vehicle to be predicted at the current moment.
- the basic feature 42 is generated by the basic feature encoding module, and based on these basic features, the future end point is predicted, and the reference end point is obtained.
- a set of cubic fitted curves 43 is obtained as candidate trajectories; then, the road surface information and the light states 420 of other vehicles on the road are used as constraints on the generated candidate trajectories to obtain the candidate trajectories 44 (that is, a candidate trajectory set including a plurality of candidate trajectories).
- the candidate trajectories 44 are classified to obtain a classification result 45, where convolutional layers are used to generate candidate trajectory features for the first-type and second-type candidate trajectory samples.
- the classification module determines the drivable area 46 of the vehicle according to the basic features and the candidate trajectory features.
- FIG. 3B is a schematic diagram of the implementation structure of the candidate trajectory network of the embodiment of the disclosure. As shown in FIG. 3B, the whole process is divided into two stages. In the first stage 81, in the basic feature encoding module 808, the historical trajectory P_obs 801 and the surrounding road information r_Tobs 803 are input into the encoding network CNN 802, and a rough predicted end point 82 is output.
- in the candidate trajectory correction module 811 of the second stage 84, the candidate trajectories 83 are input into the classification network (CNN-ED) 85 for classification 86 and correction 87; the candidate with the maximum confidence 88 is output, and the predicted positions 814 of the vehicle are obtained. Based on these predicted positions, the final running trajectory can be generated.
- the drivable area is delineated, and candidate trajectories outside the drivable area are eliminated. It can be seen from FIG. 3B that the actual future positions 815 of the vehicle over the next few minutes are basically consistent with the predicted positions 814.
- this makes the deep-learning-based prediction method more interpretable and flexible.
- the second stage of the knowledge candidate trajectory network can simplify the prediction problem by selecting the most reasonable trajectory.
- the basic feature encoding module is designed as an encoder-decoder network.
- the network takes (p, l, d, r) as input in the time interval [0, T_obs], where p represents position, l represents control information, d represents the direction of the vehicle, and r represents local road information.
- the attributes of the vehicle (l, d) are obtained through a model based on Deep Neural Networks (DNN).
- p_t = (x_t, y_t) is the coordinate of the vehicle;
- l_t = (bl_t, lt_t, rt_t) represents the brake light, left turn light and right turn light, which are binary values;
- d_t = (dx_t, dy_t) is a unit vector.
- the encoder and decoder are composed of several convolution and deconvolution layers, respectively.
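To illustrate how stacked convolution and deconvolution layers change sequence length in such an encoder-decoder, here is a toy single-channel, single-kernel sketch (not the actual network, which has many channels, layers, and learned weights):

```python
def conv1d(x, k, stride=2):
    """Valid strided 1-D convolution; encoder layers of this form
    compress the (p, l, d, r) input sequence."""
    return [sum(x[i * stride + j] * k[j] for j in range(len(k)))
            for i in range((len(x) - len(k)) // stride + 1)]

def deconv1d(x, k, stride=2):
    """Transposed 1-D convolution; decoder layers of this form
    upsample the encoded features back toward the output resolution."""
    out = [0.0] * ((len(x) - 1) * stride + len(k))
    for i, v in enumerate(x):
        for j, w in enumerate(k):
            out[i * stride + j] += v * w
    return out

enc = conv1d([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], k=[0.5, 0.5])  # halved length
dec = deconv1d(enc, k=[1.0, 1.0])                           # restored length
```

The encoder halves the sequence length at each such layer and the matching deconvolution restores it, which is why the two halves of the network are built from convolution and deconvolution layers respectively.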
- in the embodiment of the present application, first, basic features are used to predict a rough end point to reduce the search space of candidate trajectories; then, two steps are taken to generate the candidate trajectories.
- the candidate trajectories in the embodiment of the present disclosure may in fact be a set of cubic fitted curves.
- τ is the distance between the candidate trajectory and the midpoint of the line connecting the last input point and the end point, and is used to control the degree of curvature of the curve.
- Point 51 represents the last sampling point on the historical trajectory
- point 52 represents the reference end point predicted based on the historical trajectory
- points 53 and 54 respectively represent estimated end points, i.e., the centers of the grid cells into which the preset area of point 52 is divided;
- τ is the distance between the candidate trajectory and the midpoint of the line between point 51 and point 52, and the size of τ is set in advance (for example, to the range (-2m, 2m)); in this way, based on the value of τ, the estimated end points, and the last sampling point on the historical trajectory, multiple candidate trajectories with different degrees of curvature can be determined (that is, a candidate trajectory set including multiple candidate trajectories).
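A sketch of this curvature-controlled candidate generation, under two simplifying assumptions: the midpoint is shifted by τ in the lateral (y) direction only, and the cubic is fitted exactly through four points (two history points, the shifted midpoint, and the end point), whereas the disclosure fits against the history sampling points more generally:

```python
def solve4(A, b):
    """Tiny Gauss-Jordan elimination for the 4x4 system of cubic coefficients."""
    n = 4
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][4] / M[i][i] for i in range(n)]

def candidate_curve(p_last, p_prev, p_end, tau):
    """Cubic y = a + b*x + c*x^2 + d*x^3 through a previous history
    point, the last history point, the history-to-end midpoint shifted
    laterally by tau, and the end point; tau controls the curvature."""
    mid = ((p_last[0] + p_end[0]) / 2, (p_last[1] + p_end[1]) / 2 + tau)
    pts = [p_prev, p_last, mid, p_end]
    A = [[1.0, x, x * x, x ** 3] for x, _ in pts]
    b = [y for _, y in pts]
    return solve4(A, b)

# one candidate curve per tau value in the preset range, e.g. -2m..2m
curves = [candidate_curve((0.0, 0.0), (-1.0, 0.0), (4.0, 0.0), t)
          for t in (-2, -1, 0, 1, 2)]
```

With τ = 0 the curve degenerates to the straight line through the history and the end point; nonzero τ bends it to one side or the other, giving the family of candidates with different curvature.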
- FIG. 4B is a schematic diagram of the process of generating candidate trajectories on multiple reference routes according to an embodiment of the present disclosure.
- a single end point captures multi-modality only weakly. Since the road imposes strict constraints on vehicles, multi-modal candidate trajectory generation uses road information to generate multiple end points. It can be seen from FIG. 4B that the current position of the vehicle is at an intersection. According to the basic road information (for example, lane line 901, the reference line on the road in FIG. 4B, and the running direction) and the historical trajectory 902 of the vehicle, a set of reference routes 91 is determined (located on each road at the intersection, for example, reference lines 904, 905, and 906); these reference routes represent the center lane lines that the vehicle may reach. Therefore, formula (2) can be extended to generate multiple candidate trajectory sets for different reference routes.
- f(·) represents a cubic polynomial function, p'_ep ∈ p_ep, and τ ∈ {-2, -1, 0, 1, 2}.
- a binary class label indicating a good trajectory or not, is assigned to each candidate trajectory.
- the average distance between the points on the uniformly sampled true-value trajectory and the candidate trajectory is defined as the criterion of the candidate trajectory, as shown in formula (3):
- where N is the number of sampling points, and the other two terms are the i-th sampling point of the true-value trajectory and of the candidate trajectory, respectively.
- the candidate trajectory whose AD value is lower than the preset threshold is determined as a positive sample.
- the preset threshold is 2m
- the candidate trajectory whose average distance between the candidate trajectory and the true value trajectory is less than 2m is determined as a positive sample.
- a positive sample indicates that the gap between the candidate trajectory and the true-value trajectory is small, i.e., the candidate is close to the true-value trajectory.
- the remaining candidate trajectories are potential negative samples.
- the embodiment of the present disclosure adopts a uniform sampling method to keep the ratio between the negative samples and the positive samples at 3:1.
- the embodiment of the present disclosure adopts parameterization of 2 coordinates and 1 variable, as shown in formula (4):
- t_x, t_y and t_τ are the supervised information.
- the embodiment of the present disclosure defines the multi-task loss function minimization as shown in formula (5):
- L_cls represents the loss function over the two types of samples.
- the cross-entropy loss of the two types of samples is used as L_cls;
- L_ref represents the loss function for refining the positive trajectory parameters, and the embodiment of the present disclosure uses the Euclidean loss as L_ref. Due to the multi-modal characteristics of trajectories, the embodiments of the present disclosure use the positive samples and a randomly sampled portion of the negative samples to calculate the refinement loss, with a preset factor controlling the ratio of sampled negative samples.
- the future trajectory of a vehicle is not only affected by history, but also restricted by rules such as road structure and control information. Combining these rules can make a more reliable prediction of the future trajectory of the target.
- the knowledge candidate trajectory network of the embodiments of the present disclosure can effectively solve these problems and obtain a very reliable predicted trajectory.
- the embodiments of the present disclosure combine historical running trajectories and high-definition maps to determine a polygonal area, composed of lane lines, in which the vehicle can travel in the future, that is, the drivable area.
- the basic rule for determining the drivable area is that the vehicle can only travel on lanes in the same direction.
- if a destination lane can be determined, the polygonal area is the destination lane; otherwise, the drivable area consists of all possible lanes.
- the embodiments of the present disclosure propose two methods for implementing road constraints: a method that does not ignore candidate trajectories outside the drivable area, and a method that ignores them.
- the method of not ignoring candidate trajectories outside the drivable area takes the drivable area as an input function and implicitly supervises the model to learn such rules.
- the embodiments of the present disclosure propose ignoring candidate trajectories outside the drivable area to explicitly restrict the candidate trajectories at inference time. In this method, candidate trajectories outside the drivable area are discarded during generation. In addition, the embodiment of the present disclosure constrains the candidate trajectories by attenuating the classification scores of candidate trajectories outside the drivable area during testing, as shown in formula (6):
- r represents the proportion of the candidate trajectory's points that fall outside the drivable area
- the remaining parameter represents the attenuation factor
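One plausible reading of the score attenuation in formula (6), since the formula itself is not reproduced in this text: scale the classification score by the attenuation factor raised to the fraction of points outside the drivable area. The `decay` value and the exponential form are assumptions for illustration.

```python
def attenuate_score(score, frac_outside, decay=0.5):
    """Attenuate a candidate's classification score: the larger the
    fraction of its points outside the drivable area, the more the
    score is reduced; a fully inside candidate is untouched."""
    return score * decay ** frac_outside

kept = attenuate_score(0.8, 0.0)       # fully inside: score unchanged
penalised = attenuate_score(0.8, 1.0)  # fully outside: score halved
```

Under this reading, candidates that drift outside the drivable area lose the score competition to in-area candidates during classification.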
- Control information is an explicit signal of the vehicle's intention. Similar to road constraints, the control information restricts the drivable area to a certain direction, so it can be used to further narrow the drivable area generated by road constraints. For a vehicle at an intersection, the drivable area is otherwise fully open in all four directions.
- the embodiment of the present disclosure uses the hint of the turn signal to select a unique road as the drivable mask, so that the drivable mask can be reduced. For vehicles within a lane, the embodiment of the present disclosure may also choose to attenuate the scores of the corresponding candidate trajectories during testing, as shown in formula (6).
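A toy sketch of narrowing the drivable mask with turn-signal hints, using the (brake, left, right) binary control values defined earlier; representing lanes by direction labels is purely illustrative.

```python
def narrow_drivable_lanes(lanes, control):
    """Shrink the drivable mask at an intersection using turn-signal
    hints: with an unambiguous signal, keep only the matching lane;
    otherwise keep all lanes generated by the road constraints.
    `control` is a (brake, left, right) tuple of binary values."""
    _, left, right = control
    if left and not right:
        return [l for l in lanes if l == "left"]
    if right and not left:
        return [l for l in lanes if l == "right"]
    return lanes

lanes = ["left", "straight", "right"]
narrowed = narrow_drivable_lanes(lanes, (0, 1, 0))  # left signal active
```

With no signal active the mask stays as the road constraints produced it, which matches the text: control information only further narrows the drivable area.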
- the embodiments of the present disclosure redefine the trajectory of the vehicle, thereby enabling reliable vehicle motion prediction; this well reflects the trend and intention of vehicle motion and is robust to noise. In addition, a large, information-rich data set was collected, and experiments on this data set have proved the effectiveness of the method proposed in the embodiments of the present disclosure. At the same time, more standardized rules, such as traffic lights, can easily be incorporated into the solutions of the embodiments of the present disclosure.
- the technical solutions of the embodiments of the present disclosure, in essence, or the parts thereof that contribute to the prior art, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions for A computer device (which may be a terminal, a server, etc.) executes all or part of the methods described in the various embodiments of the present disclosure.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, an optical disk, and other media that can store program code. Accordingly, the embodiments of the present disclosure are not limited to any specific combination of hardware and software.
- FIG. 5 is a schematic diagram of the structure composition of the trajectory prediction device of the embodiment of the disclosure. As shown in FIG. 5, the device 500 includes:
- the reference end point prediction module 501 is configured to determine the position information of the reference end point of the moving object according to the position information of the moving object;
- any method capable of determining the end point based on the position information of the moving object may be adopted.
- a method of determining an end point using location information and a machine learning model such as feature learning or reinforcement learning can be adopted.
- the machine learning model can input at least the position information of the moving object and output the end point or the trajectory including the end point.
- the reference end point prediction module 501 can determine the end point by using the location information and environment information around the moving object as the input of the machine learning model.
- the reference end point prediction module 501 may use the position information of the moving object as the input of the machine learning model, and use the output of the machine learning model and the environmental information around the moving object to determine the end point.
- the reference end point prediction module 501 may use the location information and environment information around the moving object as the input of the machine learning model, and use the output of the machine learning model and the environment information around the moving object to determine the end point. For example, the reference destination prediction module 501 may determine the trajectory of the moving object as the output of the machine learning model, adjust the determined trajectory based on the environmental information around the moving object, so that the trajectory does not overlap with pedestrians or sidewalks, and determine the adjusted trajectory End point contained in.
- a method using the position information and a kinematic model of the moving object can be adopted.
- the reference end point prediction module 501 can use the position information, the motion model of the moving object, and the environment information around the moving object to determine the end point.
- the candidate trajectory determination module 502 is configured to determine a candidate trajectory set including a plurality of candidate trajectories according to the position information of the moving object and the position information of the reference end point; wherein the position information of the end point of each candidate trajectory is different from the position information of the reference end point;
- the target trajectory determining module 503 is configured to determine the target trajectory of the moving object from the set of candidate trajectories.
- the position information of the moving object includes: time series position information of the moving object, or the historical trajectory of the moving object.
- the reference end point includes points other than a preset restriction type, where the preset restriction type includes at least one of the following: road edge points, obstacles, and pedestrians.
- the reference end point prediction module 501 includes:
- the environmental information acquisition submodule is configured to acquire environmental information of the mobile object according to the position information of the mobile object, the environmental information including at least one of the following: road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic rule information, and information about other moving objects.
- the first reference end point prediction sub-module is configured to determine the position information of the reference end point of the moving object according to the environmental information.
- the environmental information acquiring submodule is further configured to:
- the environment information is determined according to the communication information that characterizes the current environment received by the mobile object.
- the reference end point prediction module 501 includes:
- the reference route determination submodule is configured to determine at least one reference route of the moving object according to the position information of the moving object;
- the second reference destination prediction sub-module is configured to determine the location information of the reference destination according to the reference route.
- the second reference endpoint prediction sub-module includes:
- the drivable area determining unit is configured to determine the drivable area of the reference route
- the reference end point predicting unit is configured to determine the position information of the reference end point of the moving object in the drivable area according to the position information of the moving object.
- the reference end point prediction module 501 includes:
- the intersection determination submodule is configured to determine intersection information of the road where the mobile object is located according to the position information of the mobile object;
- the multi-reference end point determination sub-module is configured to determine the position information of multiple reference end points of the moving object in response to the intersection information indicating that there are at least two intersections; wherein the reference end points of different intersections are different.
- the target trajectory determination module 503 includes:
- a confidence determination submodule configured to determine the confidence of the candidate trajectories in the candidate trajectory set;
- the target trajectory determination submodule is configured to determine the target trajectory of the moving object from the set of candidate trajectories according to the driving information of the moving object and the confidence level.
- the device further includes:
- a correction value determining module configured to determine a trajectory parameter correction value of at least one candidate trajectory in the candidate trajectory set;
- a trajectory adjustment module configured to adjust candidate trajectories in the candidate trajectory set according to the trajectory parameter correction value to update the candidate trajectory set;
- the updated target trajectory determining module is configured to determine the target trajectory of the moving object from the updated set of candidate trajectories according to the driving information of the moving object and the confidence level.
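The correction-and-update step described by the three modules above can be sketched as follows. This is a minimal illustration only: the representation of a candidate trajectory as a flat parameter vector and the additive form of the correction are assumptions, since the disclosure does not fix a concrete parameterization.

```python
def update_candidate_set(candidate_params, corrections):
    """Apply a per-trajectory parameter correction value to each candidate
    trajectory's parameters, yielding the updated candidate trajectory set.
    The additive correction is an assumption for illustration."""
    updated = []
    for params, delta in zip(candidate_params, corrections):
        # adjust each trajectory parameter by its correction value
        updated.append(tuple(p + d for p, d in zip(params, delta)))
    return updated
```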
- the updated target trajectory determination module includes:
- the drivable area determination sub-module is configured to determine the drivable area of the moving object according to the environmental information of the moving object and/or the control information of the moving object;
- the updated target trajectory determination submodule is configured to determine the target trajectory of the moving object from the updated set of candidate trajectories according to the drivable area and the confidence level.
- the drivable area determination sub-module includes:
- a predicted drivable area determining unit configured to determine the predicted drivable area of the moving object according to the environmental information of the moving object;
- a predicted drivable area adjustment unit configured to adjust the predicted drivable area according to the control information of the moving object to obtain the drivable area.
- the updated target trajectory determination sub-module includes:
- a target trajectory set determining unit configured to determine the candidate trajectories in the updated candidate trajectory set that are contained in the drivable area, to obtain the to-be-determined target trajectory set;
- the target trajectory screening unit is configured to determine, as the target trajectory, the trajectory with the highest confidence in the to-be-determined target trajectory set, or a trajectory whose confidence is greater than a preset confidence threshold.
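The two-stage selection above (keep trajectories inside the drivable area, then pick by confidence) can be sketched as follows. The `in_drivable_area` predicate is a hypothetical stand-in for the drivable-area containment check, which the disclosure leaves unspecified.

```python
def select_target_trajectory(candidates, confidences, in_drivable_area,
                             conf_threshold=None):
    """Keep only candidate trajectories contained in the drivable area
    (the to-be-determined target trajectory set), then return a trajectory
    whose confidence exceeds the threshold, or else the one with the
    highest confidence. Returns None if no candidate is drivable."""
    pending = [(traj, conf) for traj, conf in zip(candidates, confidences)
               if in_drivable_area(traj)]
    if not pending:
        return None
    if conf_threshold is not None:
        above = [traj for traj, conf in pending if conf > conf_threshold]
        if above:
            return above[0]
    # fall back to the maximum-confidence trajectory
    return max(pending, key=lambda tc: tc[1])[0]
```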
- the candidate trajectory determination module 502 includes:
- the estimated end point determination sub-module is configured to determine M estimated end points in a preset area containing the reference end point;
- the candidate trajectory generation sub-module is configured to correspondingly generate M×N candidate trajectories according to the position information of the moving object, the M estimated end points, and N preset distances, to obtain the candidate trajectory set; wherein the preset distance indicates the distance from the midpoint of the line connecting the last sampling point in the position information of the moving object and the reference end point to the candidate trajectory; wherein M and N are both integers greater than 0.
- the estimated end point determination sub-module includes:
- the preset area determining unit is configured to determine the preset area of the reference end point according to the width of the road where the reference end point is located;
- the grid dividing unit is configured to divide the preset area of the reference end point into M grids of the same size, and use the centers of the M grids as the M estimated end points.
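The grid-division step above can be sketched as follows. The square preset area centred on the reference end point and sized by the road width is an assumption; the disclosure only states that the preset area is determined from the road width and divided into M equal grids whose centres serve as the estimated end points.

```python
def estimate_end_points(ref_end, road_width, rows, cols):
    """Divide a square preset area centred on the reference end point
    (side length = road width, an assumed layout) into rows*cols equal
    grid cells and return the cell centres as the M estimated end points."""
    x0, y0 = ref_end
    half = road_width / 2.0
    cell_w = road_width / cols
    cell_h = road_width / rows
    points = []
    for r in range(rows):
        for c in range(cols):
            # centre of grid cell (r, c)
            cx = x0 - half + (c + 0.5) * cell_w
            cy = y0 - half + (r + 0.5) * cell_h
            points.append((cx, cy))
    return points
```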
- the candidate trajectory generation sub-module includes:
- a midpoint determining unit configured to determine the midpoint of the line between the last sampling point in the position information of the moving object and the reference end point;
- a pre-estimation point determining unit configured to determine N pre-estimation points according to the N preset distances and the midpoint;
- an M×N candidate trajectory generating unit configured to generate M×N candidate trajectories according to the N pre-estimation points and the M estimated end points;
- the candidate trajectory screening unit is configured to screen the M×N candidate trajectories according to the environmental information to obtain the candidate trajectory set.
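The M×N generation step above can be sketched as follows. The lateral direction of the preset-distance offsets and the use of a quadratic Bézier curve as the trajectory family are assumptions for illustration; the disclosure specifies only that each preset distance, applied at the midpoint of the line between the last sampling point and the reference end point, yields one pre-estimation point, and that each (pre-estimation point, estimated end point) pair yields one candidate trajectory.

```python
import math

def quad_bezier(p0, p1, p2, n_samples=20):
    """Sample a quadratic Bezier curve defined by control points p0, p1, p2."""
    pts = []
    for i in range(n_samples + 1):
        t = i / n_samples
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        pts.append((x, y))
    return pts

def candidate_trajectories(last_point, ref_end, preset_distances, end_points):
    """For each of the N preset distances, offset the midpoint of the
    last_point-to-ref_end segment laterally to get a pre-estimation point;
    for each of the M estimated end points, fit one curve from last_point
    through that pre-estimation point to the end point: M*N trajectories."""
    mid = ((last_point[0] + ref_end[0]) / 2, (last_point[1] + ref_end[1]) / 2)
    dx, dy = ref_end[0] - last_point[0], ref_end[1] - last_point[1]
    norm = math.hypot(dx, dy) or 1.0
    nx, ny = -dy / norm, dx / norm  # unit normal: assumed offset direction
    trajectories = []
    for d in preset_distances:               # N pre-estimation points
        via = (mid[0] + d * nx, mid[1] + d * ny)
        for end in end_points:               # M estimated end points
            trajectories.append(quad_bezier(last_point, via, end))
    return trajectories
```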
- the reference end point prediction module 501 includes:
- a candidate end point prediction sub-module configured to predict, through a neural network, the candidate end point of the moving object according to the position information of the moving object;
- the reference end point determination submodule is configured to determine the position information of the reference end point of the moving object according to the candidate end point.
- the candidate end point prediction sub-module is further configured to input the position information of the moving object into the first neural network to predict the first candidate end point of the moving object;
- the reference end point determination submodule is further configured to determine the position information of the reference end point of the moving object according to the first candidate end point and the environment information of the moving object.
- the candidate end point prediction sub-module is further configured to input the position information and environmental information of the moving object into the second neural network to predict the second candidate end point of the moving object;
- the reference end point determination submodule is further configured to determine the position information of the reference end point of the moving object according to the second candidate end point and the environment information.
- the device further includes: a network training module configured to train the neural network;
- the network training module includes:
- the first network input sub-module is configured to input the position information of the moving object, and/or the position information of the moving object and the road image collected by the moving object into the neural network to obtain the first predicted end point;
- the first prediction loss determining submodule is configured to determine the first prediction loss of the neural network with respect to the first predicted end point according to the true value trajectory of the moving object;
- the first network parameter adjustment submodule is configured to adjust the network parameters of the neural network according to the first prediction loss to train the neural network.
- the network training module includes:
- the second network input submodule is configured to input the location information of the moving object and the map information corresponding to the location information into the neural network to obtain a second predicted end point;
- a second prediction loss determining sub-module configured to determine the second prediction loss of the neural network with respect to the second predicted end point according to the true value trajectory of the moving object;
- a deviation determining sub-module configured to determine the deviation between the second predicted end point and a preset constraint condition;
- the second prediction loss adjustment submodule is configured to adjust the second prediction loss of the second predicted end point according to the deviation to obtain a third prediction loss;
- the second network parameter adjustment sub-module is configured to adjust the network parameters of the neural network according to the third prediction loss to train the neural network.
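The loss-adjustment step described above (second prediction loss adjusted by the constraint deviation to give the third prediction loss) can be sketched as follows. The squared-distance loss, the `constraint_deviation` callable, and the additive weighted adjustment are assumptions for illustration; the disclosure does not fix these forms.

```python
def constraint_adjusted_loss(pred_end, true_end, constraint_deviation, weight=1.0):
    """Second prediction loss: squared distance between the predicted end
    point and the end point of the true value trajectory (assumed form).
    Third prediction loss: the second loss adjusted by the deviation of
    the prediction from a preset constraint condition (assumed additive)."""
    second_loss = (pred_end[0] - true_end[0]) ** 2 + (pred_end[1] - true_end[1]) ** 2
    deviation = constraint_deviation(pred_end)
    # weighted additive adjustment yielding the third prediction loss
    return second_loss + weight * deviation
```

In an actual training loop, the third loss would then be backpropagated to adjust the network parameters, as the sub-modules above describe.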
- an embodiment of the present disclosure further provides a computer program product.
- the computer program product includes computer-executable instructions; after the computer-executable instructions are executed, the steps of the trajectory prediction method provided by the embodiments of the present disclosure can be implemented.
- an embodiment of the present disclosure further provides a computer storage medium having computer-executable instructions stored thereon; when the computer-executable instructions are executed by a processor, the steps of the trajectory prediction method provided by the above-mentioned embodiments are implemented.
- FIG. 6 is a schematic diagram of the structure of the computer device according to an embodiment of the disclosure.
- the device 600 includes: a processor 601, at least one communication bus, a communication interface 602, at least one external communication interface, and a memory 603.
- the communication interface 602 is configured to realize connection and communication between these components.
- the communication interface 602 may include a display screen, and the external communication interface may include a standard wired interface and a wireless interface.
- the processor 601 is configured to execute the image processing program in the memory to implement the steps of the method for predicting the target trajectory provided in the foregoing embodiment.
- for details not covered in the trajectory prediction device, computer equipment, and storage medium embodiments, please refer to the description of the method embodiments of the present disclosure.
- the disclosed device and method may be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the coupling, direct coupling, or communication connection between the various components shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
- the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the embodiments of the present disclosure may all be integrated into one processing unit, each unit may individually serve as one unit, or two or more units may be integrated into one unit;
- the integrated unit can be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
- the foregoing program can be stored in a computer-readable storage medium.
- when the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
- if the above-mentioned integrated unit of the present disclosure is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
- the corresponding computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the various embodiments of the present disclosure.
- the aforementioned storage media include: removable storage devices, ROMs, magnetic disks, or optical disks and other media that can store program codes.
- the embodiments of the present disclosure provide a trajectory prediction method, device, equipment, and storage medium, wherein the position information of the reference end point of the moving object is determined according to the position information of the moving object; a candidate trajectory set including multiple candidate trajectories is determined according to the position information of the moving object and the position information of the reference end point, wherein the position information of the end point of each candidate trajectory is different from the position information of the reference end point; and the target trajectory of the moving object is determined from the candidate trajectory set. In this way, the future motion trajectory of the moving object can be estimated more accurately.
Claims (24)
- A trajectory prediction method, wherein the method is executed by an electronic device and comprises: determining position information of a reference end point of a moving object according to position information of the moving object; determining a candidate trajectory set comprising multiple candidate trajectories according to the position information of the moving object and the position information of the reference end point, wherein the position information of the end points of at least two of the candidate trajectories is different from the position information of the reference end point; and determining a target trajectory of the moving object from the candidate trajectory set.
- The method according to claim 1, wherein the position information of the moving object comprises: time-series position information of the moving object, or a historical trajectory of the moving object.
- The method according to claim 1, wherein the reference end point comprises a point outside preset restriction types, and the preset restriction types comprise at least one of the following: road edge points, obstacles, and pedestrians.
- The method according to claim 1, wherein determining the position information of the reference end point of the moving object according to the position information of the moving object comprises: acquiring environmental information of the moving object according to the position information of the moving object, the environmental information comprising at least one of the following: road information, obstacle information, pedestrian information, traffic light information, traffic sign information, traffic regulation information, and other moving object information; and determining the position information of the reference end point of the moving object according to the environmental information.
- The method according to claim 4, wherein acquiring the environmental information of the moving object according to the position information of the moving object comprises: determining the environmental information according to image information collected by the moving object; and/or determining the environmental information according to communication information, received by the moving object, that characterizes the current environment.
- The method according to claim 1, wherein determining the position information of the reference end point of the moving object according to the position information of the moving object comprises: determining at least one reference route of the moving object according to the position information of the moving object; and determining the position information of the reference end point according to the reference route.
- The method according to claim 6, wherein determining the position information of the reference end point according to the reference route comprises: determining a drivable area of the reference route; and determining the position information of the reference end point of the moving object in the drivable area according to the position information of the moving object.
- The method according to claim 1, wherein determining the position information of the reference end point of the moving object according to the position information of the moving object comprises: determining intersection information of the road where the moving object is located according to the position information of the moving object; and in response to the intersection information indicating that there are at least two intersections, determining position information of multiple reference end points of the moving object, wherein the reference end points of different intersections are different.
- The method according to claim 1, wherein determining the target trajectory of the moving object from the candidate trajectory set comprises: determining the confidence of the candidate trajectories in the candidate trajectory set; and determining the target trajectory of the moving object from the candidate trajectory set according to driving information of the moving object and the confidence.
- The method according to claim 9, wherein before determining the target trajectory of the moving object from the candidate trajectory set according to the driving information of the moving object and the confidence, the method further comprises: determining a trajectory parameter correction value of at least one candidate trajectory in the candidate trajectory set; adjusting the candidate trajectories in the candidate trajectory set according to the trajectory parameter correction value to obtain an updated candidate trajectory set; and determining the target trajectory of the moving object from the updated candidate trajectory set according to the driving information of the moving object and the confidence.
- The method according to claim 9 or 10, wherein determining the target trajectory of the moving object from the updated candidate trajectory set according to the driving information of the moving object and the confidence comprises: determining a drivable area of the moving object according to environmental information of the moving object and/or control information of the moving object; and determining the target trajectory of the moving object from the updated candidate trajectory set according to the drivable area and the confidence.
- The method according to claim 11, wherein determining the drivable area of the moving object according to the environmental information of the moving object and/or the control information of the moving object comprises: determining a predicted drivable area of the moving object according to the environmental information of the moving object; and adjusting the predicted drivable area according to the control information of the moving object to obtain the drivable area.
- The method according to claim 11, wherein determining the target trajectory of the moving object from the updated candidate trajectory set according to the drivable area and the confidence comprises: determining the candidate trajectories in the updated candidate trajectory set that are contained in the drivable area, to obtain the to-be-determined target trajectory set; and determining, as the target trajectory, the trajectory with the highest confidence in the to-be-determined target trajectory set or a trajectory whose confidence is greater than a preset confidence threshold.
- The method according to any one of claims 1 to 13, wherein determining the candidate trajectory set comprising multiple candidate trajectories according to the position information of the moving object and the position information of the reference end point comprises: determining M estimated end points in a preset area containing the reference end point; and correspondingly generating M×N candidate trajectories according to the position information of the moving object, the M estimated end points, and N preset distances, to obtain the candidate trajectory set; wherein the preset distance indicates the distance from the midpoint of the line connecting the last sampling point in the position information of the moving object and the reference end point to the candidate trajectory, and M and N are both integers greater than 0.
- The method according to claim 14, wherein determining the M estimated end points in the preset area containing the reference end point comprises: determining the preset area of the reference end point according to the width of the road where the reference end point is located; and dividing the preset area of the reference end point into M grids of a predetermined size, and using the centers of the M grids as the M estimated end points.
- The method according to claim 14, wherein correspondingly generating the M×N candidate trajectories according to the position information of the moving object, the M estimated end points, and the N preset distances to obtain the candidate trajectory set comprises: determining the midpoint of the line connecting the last sampling point in the position information of the moving object and the reference end point; determining N pre-estimation points according to the N preset distances and the midpoint; generating M×N candidate trajectories according to the N pre-estimation points and the M estimated end points; and screening the M×N candidate trajectories according to the environmental information to obtain the candidate trajectory set.
- The method according to any one of claims 1 to 16, wherein determining the position information of the reference end point of the moving object according to the position information of the moving object comprises: predicting, through a neural network, a candidate end point of the moving object according to the position information of the moving object; and determining the position information of the reference end point of the moving object according to the candidate end point.
- The method according to claim 17, wherein predicting, through the neural network, the candidate end point of the moving object according to the position information of the moving object comprises: inputting the position information of the moving object into a first neural network to predict a first candidate end point of the moving object; and determining the position information of the reference end point of the moving object according to the candidate end point comprises: determining the position information of the reference end point of the moving object according to the first candidate end point and the environmental information of the moving object.
- The method according to claim 17 or 18, wherein predicting, through the neural network, the candidate end point of the moving object according to the position information of the moving object comprises: inputting the position information and environmental information of the moving object into a second neural network to predict a second candidate end point of the moving object; and determining the position information of the reference end point of the moving object according to the candidate end point comprises: determining the position information of the reference end point of the moving object according to the second candidate end point and the environmental information.
- The method according to any one of claims 17 to 19, wherein the training method of the neural network comprises: inputting the position information of the moving object, and/or the position information of the moving object and road images collected by the moving object, into the neural network to obtain a first predicted end point; determining a first prediction loss of the neural network with respect to the first predicted end point according to the true value trajectory of the moving object; and adjusting the network parameters of the neural network according to the first prediction loss to train the neural network.
- The method according to any one of claims 17 to 20, wherein the training method of the neural network comprises: inputting the position information of the moving object and map information corresponding to the position information into the neural network to obtain a second predicted end point; determining a second prediction loss of the neural network with respect to the second predicted end point according to the true value trajectory of the moving object; determining the deviation between the second predicted end point and a preset constraint condition; adjusting the second prediction loss of the second predicted end point according to the deviation to obtain a third prediction loss; and adjusting the network parameters of the neural network according to the third prediction loss to train the neural network.
- A trajectory prediction apparatus, comprising: a reference end point prediction module configured to determine position information of a reference end point of a moving object according to position information of the moving object; a candidate trajectory determination module configured to determine a candidate trajectory set comprising multiple candidate trajectories according to the position information of the moving object and the position information of the reference end point, wherein the position information of the end points of at least two of the candidate trajectories is different from the position information of the reference end point; and a target trajectory determination module configured to determine a target trajectory of the moving object from the candidate trajectory set.
- A computer storage medium, wherein computer-executable instructions are stored on the computer storage medium, and after the computer-executable instructions are executed, the method steps of any one of claims 1 to 21 can be implemented.
- An electronic device, wherein the electronic device comprises a memory and a processor, computer-executable instructions are stored on the memory, and when the processor runs the computer-executable instructions on the memory, the method steps of any one of claims 1 to 21 can be implemented.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022519830A JP7338052B2 (ja) | 2020-04-10 | 2021-04-02 | 軌跡予測方法、装置、機器及び記憶媒体リソース |
US17/703,268 US20220212693A1 (en) | 2020-04-10 | 2022-03-24 | Method and apparatus for trajectory prediction, device and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010279772.9A CN111523643B (zh) | 2020-04-10 | 2020-04-10 | 轨迹预测方法、装置、设备及存储介质 |
CN202010279772.9 | 2020-04-10 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/703,268 Continuation US20220212693A1 (en) | 2020-04-10 | 2022-03-24 | Method and apparatus for trajectory prediction, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021204092A1 true WO2021204092A1 (zh) | 2021-10-14 |
Family
ID=71902658
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/085448 WO2021204092A1 (zh) | 2020-04-10 | 2021-04-02 | 轨迹预测方法、装置、设备及存储介质资源 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20220212693A1 (zh) |
JP (1) | JP7338052B2 (zh) |
CN (1) | CN111523643B (zh) |
WO (1) | WO2021204092A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116543356A (zh) * | 2023-07-05 | 2023-08-04 | 青岛国际机场集团有限公司 | 一种轨迹确定方法、设备及介质 |
CN116723616A (zh) * | 2023-08-08 | 2023-09-08 | 杭州依森匠能数字科技有限公司 | 一种灯光亮度控制方法及系统 |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111523643B (zh) * | 2020-04-10 | 2024-01-05 | 商汤集团有限公司 | 轨迹预测方法、装置、设备及存储介质 |
US11927967B2 (en) * | 2020-08-31 | 2024-03-12 | Woven By Toyota, U.S., Inc. | Using machine learning models for generating human-like trajectories |
CN112212874B (zh) * | 2020-11-09 | 2022-09-16 | 福建牧月科技有限公司 | 车辆轨迹预测方法、装置、电子设备及计算机可读介质 |
CN112197771B (zh) * | 2020-12-07 | 2021-03-19 | 深圳腾视科技有限公司 | 车辆失效轨迹重构方法、设备以及存储介质 |
CN112749740B (zh) * | 2020-12-30 | 2023-04-18 | 北京优挂信息科技有限公司 | 确定车辆目的地的方法、装置、电子设备及介质 |
CN113033364A (zh) * | 2021-03-15 | 2021-06-25 | 商汤集团有限公司 | 轨迹预测、行驶控制方法、装置、电子设备及存储介质 |
CN113119996B (zh) * | 2021-03-19 | 2022-11-08 | 京东鲲鹏(江苏)科技有限公司 | 一种轨迹预测方法、装置、电子设备及存储介质 |
CN112949756B (zh) * | 2021-03-30 | 2022-07-15 | 北京三快在线科技有限公司 | 一种模型训练以及轨迹规划的方法及装置 |
CN113157846A (zh) * | 2021-04-27 | 2021-07-23 | 商汤集团有限公司 | 意图及轨迹预测方法、装置、计算设备和存储介质 |
US20230040006A1 (en) * | 2021-08-06 | 2023-02-09 | Waymo Llc | Agent trajectory planning using neural networks |
CN113447040B (zh) * | 2021-08-27 | 2021-11-16 | 腾讯科技(深圳)有限公司 | 行驶轨迹确定方法、装置、设备以及存储介质 |
CN113568416B (zh) * | 2021-09-26 | 2021-12-24 | 智道网联科技(北京)有限公司 | 无人车轨迹规划方法、装置和计算机可读存储介质 |
CN115230688B (zh) * | 2021-12-07 | 2023-08-25 | 上海仙途智能科技有限公司 | 障碍物轨迹预测方法、系统和计算机可读存储介质 |
CN114312831B (zh) * | 2021-12-16 | 2023-10-03 | 浙江零跑科技股份有限公司 | 一种基于空间注意力机制的车辆轨迹预测方法 |
CN114578808A (zh) * | 2022-01-10 | 2022-06-03 | 美的集团(上海)有限公司 | 路径规划方法、电子设备、计算机程序产品及存储介质 |
CN114954103B (zh) * | 2022-05-18 | 2023-03-24 | 北京锐士装备科技有限公司 | 一种应用于无人机的充电方法及装置 |
CN114913197B (zh) * | 2022-07-15 | 2022-11-11 | 小米汽车科技有限公司 | 车辆轨迹预测方法、装置、电子设备及存储介质 |
CN115790606B (zh) * | 2023-01-09 | 2023-06-27 | 深圳鹏行智能研究有限公司 | 轨迹预测方法、装置、机器人及存储介质 |
CN115861383B (zh) * | 2023-02-17 | 2023-05-16 | 山西清众科技股份有限公司 | 一种拥挤空间下多信息融合的行人轨迹预测装置及方法 |
CN116091894B (zh) * | 2023-03-03 | 2023-07-14 | 小米汽车科技有限公司 | 模型训练方法、车辆控制方法、装置、设备、车辆及介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105809714A (zh) * | 2016-03-07 | 2016-07-27 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | 一种基于轨迹置信度的多目标跟踪方法 |
CN108803617A (zh) * | 2018-07-10 | 2018-11-13 | 深圳大学 | 轨迹预测方法及装置 |
CN109583151A (zh) * | 2019-02-20 | 2019-04-05 | 百度在线网络技术(北京)有限公司 | 车辆的行驶轨迹预测方法及装置 |
CN111523643A (zh) * | 2020-04-10 | 2020-08-11 | 商汤集团有限公司 | 轨迹预测方法、装置、设备及存储介质 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9989964B2 (en) * | 2016-11-03 | 2018-06-05 | Mitsubishi Electric Research Laboratories, Inc. | System and method for controlling vehicle using neural network |
US10996679B2 (en) | 2018-04-17 | 2021-05-04 | Baidu Usa Llc | Method to evaluate trajectory candidates for autonomous driving vehicles (ADVs) |
CN109272108A (zh) * | 2018-08-22 | 2019-01-25 | 深圳市亚博智能科技有限公司 | 基于神经网络算法的移动控制方法、系统和计算机设备 |
CN109389246B (zh) * | 2018-09-13 | 2021-03-16 | 中国科学院电子学研究所苏州研究院 | 一种基于神经网络的交通工具目的地区域范围预测方法 |
CN109739926B (zh) * | 2019-01-09 | 2021-07-02 | 南京航空航天大学 | 一种基于卷积神经网络的移动对象目的地预测方法 |
- 2020-04-10: CN application CN202010279772.9A, patent CN111523643B (zh), active
- 2021-04-02: WO application PCT/CN2021/085448, publication WO2021204092A1 (zh), application filing
- 2021-04-02: JP application JP2022519830A, patent JP7338052B2 (ja), active
- 2022-03-24: US application US17/703,268, publication US20220212693A1 (en), pending
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116543356A (zh) * | 2023-07-05 | 2023-08-04 | 青岛国际机场集团有限公司 | 一种轨迹确定方法、设备及介质 |
CN116543356B (zh) * | 2023-07-05 | 2023-10-27 | 青岛国际机场集团有限公司 | 一种轨迹确定方法、设备及介质 |
CN116723616A (zh) * | 2023-08-08 | 2023-09-08 | 杭州依森匠能数字科技有限公司 | 一种灯光亮度控制方法及系统 |
CN116723616B (zh) * | 2023-08-08 | 2023-11-07 | 杭州依森匠能数字科技有限公司 | 一种灯光亮度控制方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
US20220212693A1 (en) | 2022-07-07 |
CN111523643A (zh) | 2020-08-11 |
JP2022549952A (ja) | 2022-11-29 |
JP7338052B2 (ja) | 2023-09-04 |
CN111523643B (zh) | 2024-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021204092A1 (zh) | 轨迹预测方法、装置、设备及存储介质资源 | |
US11897518B2 (en) | Systems and methods for navigating with sensing uncertainty | |
US11815904B2 (en) | Trajectory selection for an autonomous vehicle | |
US11835950B2 (en) | Autonomous vehicle safe stop | |
US20230347877A1 (en) | Navigation Based on Detected Size of Occlusion Zones | |
US11868136B2 (en) | Geolocalized models for perception, prediction, or planning | |
US20210339741A1 (en) | Constraining vehicle operation based on uncertainty in perception and/or prediction | |
US10234864B2 (en) | Planning for unknown objects by an autonomous vehicle | |
US11565716B2 (en) | Method and system for dynamically curating autonomous vehicle policies | |
US11458991B2 (en) | Systems and methods for optimizing trajectory planner based on human driving behaviors | |
US20210406559A1 (en) | Systems and methods for effecting map layer updates based on collected sensor data | |
US20220105959A1 (en) | Methods and systems for predicting actions of an object by an autonomous vehicle to determine feasible paths through a conflicted area | |
KR20150128712A (ko) | 차량 라우팅 및 교통 관리를 위한 차선 레벨 차량 내비게이션 | |
CN114072841A (zh) | 根据图像使深度精准化 | |
KR20220136006A (ko) | 자율 주행 차량의 성능을 평가하기 위한 테스트 시나리오의 선택 | |
US20220161830A1 (en) | Dynamic Scene Representation | |
WO2022151666A1 (en) | Systems, methods, and media for evaluation of trajectories and selection of a trajectory for a vehicle | |
US11465620B1 (en) | Lane generation | |
US20220309521A1 (en) | Computing a vehicle interest index | |
EP3454269A1 (en) | Planning autonomous motion | |
US20240132112A1 (en) | Path-based trajectory prediction | |
US20210405641A1 (en) | Detecting positioning of a sensor system associated with a vehicle | |
US20240028035A1 (en) | Planning autonomous motion | |
US20230154317A1 (en) | Method for controlling a mobility system, data processing device, computer-readable medium, and system for controlling a mobility system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21785278 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022519830 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.02.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21785278 Country of ref document: EP Kind code of ref document: A1 |