WO2024087522A1 - Autonomous driving decision planning and autonomous driving vehicle - Google Patents

Autonomous driving decision planning and autonomous driving vehicle

Info

Publication number
WO2024087522A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
driving
trajectory
route
autonomous driving
Prior art date
Application number
PCT/CN2023/086587
Other languages
English (en)
Chinese (zh)
Inventor
李潇
何毅晨
丁曙光
王乃峥
Original Assignee
北京三快在线科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京三快在线科技有限公司
Publication of WO2024087522A1

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 — Drive control systems specially adapted for autonomous road vehicles

Definitions

  • the embodiments of the present application relate to the field of autonomous driving technology, and in particular to autonomous driving decision planning and autonomous driving vehicles.
  • an autonomous driving vehicle can obtain a manually set driving route and move according to the manually set driving route.
  • the present application provides an autonomous driving decision planning method and an autonomous driving vehicle; the technical solution includes the following contents.
  • a method for autonomous driving decision planning comprising:
  • in response to the presence of an obstacle vehicle in the environment where the autonomous driving vehicle is located, controlling the autonomous driving vehicle to travel along a trial route, wherein the driving route of the obstacle vehicle conflicts with the driving route of the autonomous driving vehicle;
  • While the autonomous driving vehicle is driving along the trial route, obtaining relevant information of the obstacle vehicle;
  • determining the driving intention of the obstacle vehicle according to the relevant information of the obstacle vehicle; and performing autonomous driving decision planning for the autonomous driving vehicle at least according to the driving intention of the obstacle vehicle.
  • an autonomous driving decision planning device, comprising:
  • a control module configured to control the autonomous driving vehicle to travel along a trial route in response to the presence of an obstacle vehicle in the environment where the autonomous driving vehicle is located, wherein the driving route of the obstacle vehicle conflicts with the driving route of the autonomous driving vehicle;
  • an acquisition module configured to acquire relevant information of the obstacle vehicle while the autonomous driving vehicle is driving along the trial route;
  • a determination module configured to determine the driving intention of the obstacle vehicle according to the relevant information of the obstacle vehicle; and
  • a planning module configured to perform autonomous driving decision planning for the autonomous driving vehicle at least according to the driving intention of the obstacle vehicle.
  • an autonomous driving vehicle which includes a processor and a memory, wherein the memory stores at least one computer program, and the at least one computer program is loaded and executed by the processor so that the autonomous driving vehicle implements the above-mentioned autonomous driving decision planning method.
  • a non-transitory computer-readable storage medium in which at least one computer program is stored, and the at least one computer program is loaded and executed by a processor to enable an autonomous driving vehicle to implement any of the above-mentioned autonomous driving decision planning methods.
  • a computer program in which at least one computer instruction is stored, and the at least one computer instruction is loaded and executed by a processor to enable an autonomous driving vehicle to implement any of the above-mentioned autonomous driving decision planning methods.
  • a computer program product in which at least one computer program is stored, and the at least one computer program is loaded and executed by a processor to enable an autonomous driving vehicle to implement any of the above-mentioned autonomous driving decision planning methods.
  • when there is an obstacle vehicle in the environment where the autonomous driving vehicle is located, the autonomous driving vehicle is controlled to drive along the trial route; by actively showing the driving intention of the autonomous driving vehicle, the obstacle vehicle is guided to move, so that the obstacle vehicle reveals its own driving intention as soon as possible.
  • the relevant information of the obstacle vehicle is used to determine the driving intention of the obstacle vehicle, so that the autonomous driving vehicle can capture the driving intention of the obstacle vehicle in advance.
  • since the autonomous driving vehicle makes autonomous driving decision plans based on the driving intention of the obstacle vehicle, this not only improves the intelligence level of the autonomous driving vehicle but also helps improve the driving safety of the autonomous driving vehicle.
  • FIG. 1 is a schematic diagram of an implementation environment of an autonomous driving decision-making planning method provided in an embodiment of the present application.
  • FIG. 2 is a flow chart of an autonomous driving decision-making planning method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a space-level vehicle meeting provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a time-level vehicle meeting provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a trajectory planning provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a vehicle motion provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a framework of an autonomous driving decision-making planning method provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an autonomous driving decision-making plan provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the structure of an autonomous driving decision-making and planning device provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the structure of a terminal device provided in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of the structure of a server provided in an embodiment of the present application.
  • FIG. 1 is a schematic diagram of an implementation environment of an autonomous driving decision-making planning method provided in an embodiment of the present application.
  • the implementation environment includes a terminal device 101 and a server 102.
  • the autonomous driving decision-making planning method provided in an embodiment of the present application can be executed by the terminal device 101, or by the server 102, or by the terminal device 101 and the server 102.
  • at least one of the terminal device 101 and the server 102 can be deployed in an autonomous driving vehicle
  • the autonomous driving decision-making planning method provided in an embodiment of the present application is executed by the autonomous driving vehicle.
  • the autonomous driving vehicle can be an autonomous car, an autonomous electric vehicle, a drone, a robot, or another subject capable of traveling autonomously.
  • the terminal device 101 can be a smart phone, a game console, a desktop computer, a tablet computer, a laptop computer, a smart TV, a smart car device, an intelligent voice interaction device, a smart home appliance, etc.
  • the server 102 can be a single server, or a server cluster consisting of multiple servers, or any one of a cloud computing platform and a virtualization center, which is not limited in the embodiments of the present application.
  • the server 102 can be connected to the terminal device 101 through a wired network or a wireless network.
  • the server 102 can have functions such as data processing, data storage, and data transmission and reception, which are not limited in the embodiments of the present application.
  • the number of terminal devices 101 and servers 102 is not limited and can be one or more.
  • an autonomous driving vehicle can obtain a manually set driving route and is controlled to move along the manually set driving route.
  • this autonomous driving decision-making and planning approach is relatively simple and struggles to cope with complex real traffic scenarios; its low degree of intelligence results in poor driving safety of the autonomous driving vehicle.
  • the embodiment of the present application provides an autonomous driving decision-making planning method, which can be applied to the above-mentioned implementation environment and can improve the driving safety of the autonomous driving vehicle.
  • the method can be executed by an autonomous driving vehicle.
  • it is executed by an autonomous driving vehicle deployed with at least one of a terminal device and a server.
  • the method includes the following steps.
  • Step 201: in response to the presence of an obstacle vehicle in the environment where the autonomous driving vehicle is located, controlling the autonomous driving vehicle to travel along a trial route, where the obstacle vehicle refers to a vehicle whose driving route conflicts with the driving route of the autonomous driving vehicle.
  • the autonomous driving vehicle is equipped with at least one sensor, including but not limited to a temperature sensor, an infrared sensor, an image sensor, etc.
  • Each sensor corresponds to a sensing range, and the environment in which the autonomous driving vehicle is located refers to the sensing range of various sensors configured on the autonomous driving vehicle.
  • the autonomous vehicle can sense all the vehicles in the environment where the autonomous vehicle is located.
  • the autonomous vehicle can plan a trial route and drive along the trial route.
  • the driving route of the obstacle vehicle conflicts with the driving route of the autonomous vehicle.
  • the driving direction of the obstacle vehicle can be the same as that of the autonomous driving vehicle, or it can be opposite to the driving direction of the autonomous driving vehicle. That is, the obstacle vehicle and the autonomous driving vehicle can travel in the same direction or in opposite directions.
  • the autonomous driving vehicle may determine the estimated collision time between the autonomous driving vehicle and any vehicle based on the driving route of the autonomous driving vehicle and the driving route of that vehicle in the environment where the autonomous driving vehicle is located. If the estimated collision time is less than a time threshold, that vehicle is determined to be an obstacle vehicle. Since the estimated collision time between the obstacle vehicle and the autonomous driving vehicle is less than the time threshold, the obstacle vehicle and the autonomous driving vehicle would collide within the time threshold, and therefore there is a conflict between the driving route of the obstacle vehicle and the driving route of the autonomous driving vehicle.
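To make the collision-time check above concrete, here is a minimal Python sketch; it is not taken from the application itself, and the route format, helper names, and threshold values are illustrative assumptions.

```python
import math

def estimated_collision_time(ego_route, other_route, collision_radius=2.0):
    """Return the earliest time at which the two routes come within
    collision_radius of each other, or None if they never do.
    Each route is a list of (t, x, y) samples on a common time grid."""
    for (t1, x1, y1), (_, x2, y2) in zip(ego_route, other_route):
        if math.hypot(x1 - x2, y1 - y2) < collision_radius:
            return t1
    return None

def is_obstacle_vehicle(ego_route, other_route, time_threshold=5.0):
    """A vehicle is treated as an obstacle vehicle when the estimated
    collision time is less than the time threshold."""
    ttc = estimated_collision_time(ego_route, other_route)
    return ttc is not None and ttc < time_threshold
```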
  • in some embodiments, step 205 to step 207 are performed before step 201.
  • Step 205: obtaining the historical actual driving trajectory of the autonomous driving vehicle, the historical actual driving trajectory of the obstacle vehicle, and the historical expected driving trajectory of the obstacle vehicle, wherein the historical expected driving trajectory of the obstacle vehicle is estimated based on the driving route of the obstacle vehicle.
  • the historical expected driving trajectory of the obstacle vehicle is estimated by the autonomous driving vehicle based on the driving route of the obstacle vehicle.
  • the historical expected driving trajectory of the obstacle vehicle is estimated by other devices other than the autonomous driving vehicle based on the driving route of the obstacle vehicle.
  • the autonomous driving vehicle is deployed with a terminal device, and other devices other than the autonomous driving vehicle include but are not limited to a server.
  • the terminal device or the obstacle vehicle reports the driving route of the obstacle vehicle to the server, so that the server estimates the historical expected driving trajectory of the obstacle vehicle based on the driving route of the obstacle vehicle.
  • the autonomous driving decision planning for the autonomous driving vehicle in the embodiment of the present application is a continuous process. Therefore, the autonomous driving decision planning for the autonomous driving vehicle can be carried out periodically to achieve periodic control of the autonomous driving vehicle.
  • the autonomous driving vehicle can determine the trial route corresponding to the previous time period, or the previous several time periods, of the current time period as the historical expected driving trajectory of the autonomous driving vehicle. Since the autonomous driving vehicle drives according to the trial route corresponding to the current time period during the current time period, the actual driving route of the autonomous driving vehicle is the same as the trial route.
  • in the time period before the current time period (i.e., the previous time period or the previous several time periods of the current time period), the autonomous driving vehicle also drives according to the trial route corresponding to that previous time period, so the historical actual driving trajectory of the autonomous driving vehicle is the same as the trial route corresponding to the previous time period. Since the trial route corresponding to the previous time period can be determined as the historical expected driving trajectory of the autonomous driving vehicle, and the historical actual driving trajectory of the autonomous driving vehicle is the same as that trial route, the historical expected driving trajectory of the autonomous driving vehicle can be regarded as its historical actual driving trajectory.
  • the autonomous driving vehicle can sense the actual driving trajectory of the obstacle vehicle through the sensors.
  • the autonomous driving vehicle can use the actual driving trajectory of the obstacle vehicle corresponding to the previous time period or the previous several time periods of the current time period as the historical actual driving trajectory of the obstacle vehicle.
  • the historical actual driving trajectory of the obstacle vehicle is the actual driving trajectory of the obstacle vehicle sensed by the autonomous driving vehicle in the time period before the current time period.
  • the sensors of the autonomous driving vehicle can sense the actual position of the obstacle vehicle multiple times, and each time the actual position is sensed, the sensing time (i.e., the time when the actual position is sensed) can be recorded. Therefore, the historical actual driving trajectory of the obstacle vehicle includes the position information of multiple actual trajectory points and the time information of the obstacle vehicle arriving at each actual trajectory point. Among them, the position information of the actual trajectory point corresponds to the perceived actual position, and the time information of arriving at the actual trajectory point corresponds to the sensing time.
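For concreteness, a trajectory as described here (position information of multiple trajectory points plus the time of arriving at each point) might be represented as follows; the class name and fields are illustrative, not from the original text.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Trajectory:
    """A driving trajectory: position information of multiple trajectory points
    together with the time at which the vehicle reaches each point."""
    points: List[Tuple[float, float]]   # (x, y) position of each trajectory point
    arrival_times: List[float]          # sensing / arrival time for each point
```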
  • the autonomous driving vehicle will conduct joint planning to jointly plan the expected driving trajectory of the autonomous driving vehicle and the expected driving trajectory of the obstacle vehicle.
  • the expected driving trajectory of the autonomous driving vehicle is the trial route corresponding to the autonomous driving vehicle in the next time period. Therefore, the method for determining the expected driving trajectory of the obstacle vehicle can be found in the description of determining the trial route.
  • the expected driving trajectory of the obstacle vehicle is the trial route corresponding to the obstacle vehicle in the next time period, which will not be repeated here.
  • the expected driving trajectory of the autonomous driving vehicle includes the position information of multiple expected trajectory points and the time information when the autonomous driving vehicle reaches each expected trajectory point.
  • the expected driving trajectory of the obstacle vehicle includes the position information of multiple expected trajectory points and the time information when the obstacle vehicle reaches each expected trajectory point.
  • the autonomous driving vehicle may determine the expected driving trajectory of the obstacle vehicle corresponding to the previous time period or the previous several time periods before the current time period as the historical expected driving trajectory of the obstacle vehicle.
  • the historical expected driving trajectory of the obstacle vehicle is the expected driving trajectory of the obstacle vehicle corresponding to the time period before the current time period.
  • Step 206: determining historical deviation information based on the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle.
  • the autonomous driving vehicle plans the historical expected driving trajectory of the obstacle vehicle.
  • the autonomous driving vehicle expects the obstacle vehicle to move according to the historical expected driving trajectory of the obstacle vehicle, but in fact the obstacle vehicle moves according to its own driving intention. Therefore, there is a certain difference between the actual historical driving trajectory of the obstacle vehicle and the expected historical driving trajectory of the obstacle vehicle.
  • the autonomous driving vehicle may determine historical deviation information based on the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle, so as to quantify the difference between the historical actual driving trajectory of the obstacle vehicle and the historical expected driving trajectory of the obstacle vehicle through the historical deviation information.
  • the value of the historical deviation information is proportional to the size of the difference, that is, the larger the value of the historical deviation information, the larger the difference between the historical actual driving trajectory of the obstacle vehicle and the historical expected driving trajectory of the obstacle vehicle.
  • step 206 includes steps 2061 and 2062.
  • Step 2061: for any obstacle vehicle, determine the deviation information between the historical expected driving trajectory of any obstacle vehicle and the historical actual driving trajectory of any obstacle vehicle.
  • the autonomous driving vehicle can determine the deviation information between the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle, and record the deviation information as the deviation information of the obstacle vehicle.
  • the difference between the historical actual driving trajectory of the obstacle vehicle and the historical expected driving trajectory of the obstacle vehicle can be quantified through the deviation information of the obstacle vehicle.
  • the numerical value of the deviation information of the obstacle vehicle is proportional to the size of the difference, that is, the larger the numerical value of the deviation information of the obstacle vehicle, the greater the difference between the historical actual driving trajectory of the obstacle vehicle and the historical expected driving trajectory of the obstacle vehicle.
  • step 2061 includes: determining the spatial deviation information corresponding to the obstacle vehicle based on the position information of multiple expected trajectory points and the position information of multiple actual trajectory points; determining the time deviation information corresponding to the obstacle vehicle based on the time information of the obstacle vehicle arriving at each expected trajectory point and the time information of the obstacle vehicle arriving at each actual trajectory point; determining the deviation information of the obstacle vehicle based on the spatial deviation information and time deviation information corresponding to the obstacle vehicle.
  • the spatial deviation information corresponding to the obstacle vehicle can be determined based on the position information of multiple expected trajectory points and the position information of multiple actual trajectory points.
  • the spatial deviation information corresponding to the obstacle vehicle can reflect the difference between the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle at the spatial level.
  • two methods of determining the spatial deviation information corresponding to the obstacle vehicle, namely implementation A1 and implementation A2, are provided below.
  • Implementation A1: for any expected trajectory point, based on the position information of the expected trajectory point and the position information of multiple actual trajectory points, the distance between the expected trajectory point and each actual trajectory point is calculated, and the minimum distance corresponding to the expected trajectory point is determined from the distances between the expected trajectory point and each actual trajectory point.
  • the spatial deviation information corresponding to the obstacle vehicle is calculated based on the minimum distances corresponding to the multiple expected trajectory points, for example, the minimum distances corresponding to the multiple expected trajectory points are weighted averaged to obtain the spatial deviation information corresponding to the obstacle vehicle.
  • implementation A1 calculates the spatial deviation information corresponding to the obstacle vehicle based on the minimum distance corresponding to multiple expected trajectory points.
  • the distance between any actual trajectory point and each expected trajectory point can be calculated first, and then the minimum distance corresponding to the actual trajectory point can be determined. After that, the spatial deviation information corresponding to the obstacle vehicle can be calculated based on the minimum distance corresponding to multiple actual trajectory points.
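As an illustration of implementation A1 above, a minimal Python sketch might look as follows; the 2D point format, the equal default weights, and the function name are assumptions rather than part of the application.

```python
import math

def spatial_deviation_a1(expected_points, actual_points, weights=None):
    """Implementation A1 (sketch): for each expected trajectory point, take the
    minimum distance to any actual trajectory point, then combine the minimum
    distances with a weighted average. Points are (x, y) tuples."""
    min_dists = []
    for ex, ey in expected_points:
        min_dists.append(min(math.hypot(ex - ax, ey - ay) for ax, ay in actual_points))
    weights = weights or [1.0] * len(min_dists)
    return sum(w * d for w, d in zip(weights, min_dists)) / sum(weights)
```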
  • Implementation A2: based on the position information of multiple expected trajectory points, the spatial characteristics of the historical expected driving trajectory of the obstacle vehicle are determined. Based on the position information of multiple actual trajectory points, the spatial characteristics of the historical actual driving trajectory of the obstacle vehicle are determined. After that, the characteristic distance between the spatial characteristics of the historical expected driving trajectory of the obstacle vehicle and the spatial characteristics of the historical actual driving trajectory of the obstacle vehicle is calculated, and the spatial deviation information corresponding to the obstacle vehicle is obtained based on the characteristic distance, such as mapping the characteristic distance to obtain the spatial deviation information corresponding to the obstacle vehicle.
  • the spatial characteristics of the historical expected driving trajectory of the obstacle vehicle can be expressed as a feature vector, and the spatial characteristics of the historical actual driving trajectory of the obstacle vehicle can be expressed as another feature vector.
  • the feature distance between the two feature vectors is calculated.
  • the mapping relationship between the feature distance and the spatial deviation information is queried based on the feature distance to obtain the spatial deviation information mapped to the feature distance.
  • the queried spatial deviation information is the spatial deviation information corresponding to the obstacle vehicle.
  • the embodiments of the present application can determine the time deviation information corresponding to the obstacle vehicle based on the time information of the obstacle vehicle reaching each expected trajectory point and the time information of the obstacle vehicle reaching each actual trajectory point.
  • the time deviation information corresponding to the obstacle vehicle can reflect the difference between the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle at the time level. Two methods for determining the time deviation information corresponding to the obstacle vehicle, implementation B1 and implementation B2, are provided below.
  • the time information of the obstacle vehicle reaching the expected trajectory point is related to the position information of the expected trajectory point
  • the time information of the obstacle vehicle reaching the actual trajectory point is related to the position information of the actual trajectory point.
  • Implementation B1: for any expected trajectory point, based on the position information of the expected trajectory point and the position information of multiple actual trajectory points, calculate the distance between the expected trajectory point and each actual trajectory point, determine the minimum distance corresponding to the expected trajectory point from these distances, and thereby determine the actual trajectory point corresponding to the minimum distance. Based on the time information of the obstacle vehicle arriving at the expected trajectory point and the time information of the obstacle vehicle arriving at the actual trajectory point corresponding to that minimum distance (for example, by calculating the time difference between the two pieces of time information), the time deviation information corresponding to the expected trajectory point is obtained. In other words, a piece of time information is recorded when the obstacle vehicle arrives at the expected trajectory point.
  • since the minimum distance corresponding to the expected trajectory point is determined above, and the minimum distance corresponds to an actual trajectory point, another piece of time information is recorded when the obstacle vehicle arrives at that actual trajectory point, giving two pieces of time information in total.
  • the embodiment of the present application can obtain the time deviation information corresponding to the expected trajectory point based on the two pieces of time information. Afterwards, the time deviation information corresponding to the obstacle vehicle is determined based on the time deviation information corresponding to multiple expected trajectory points, for example, by taking a weighted average of the time deviation information corresponding to the multiple expected trajectory points.
  • implementation method B1 calculates the time deviation information corresponding to the obstacle vehicle based on the time deviation information corresponding to multiple expected trajectory points.
  • the distance between any actual trajectory point and each expected trajectory point can be calculated first, and then the minimum distance corresponding to the actual trajectory point can be determined, thereby determining the expected trajectory point corresponding to the minimum distance, so as to calculate the time deviation information corresponding to the actual trajectory point.
  • the time deviation information corresponding to the obstacle vehicle is calculated based on the time deviation information corresponding to multiple actual trajectory points.
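A similar sketch for implementation B1: for each expected trajectory point, the spatially closest actual trajectory point is found and the arrival-time difference is taken. The trajectory format and the plain weighted average are assumptions.

```python
import math

def time_deviation_b1(expected_traj, actual_traj, weights=None):
    """Implementation B1 (sketch): for each expected trajectory point, find the
    spatially closest actual trajectory point and take the absolute difference
    between the two arrival times, then combine the per-point deviations with a
    weighted average. Each trajectory is a list of (t, x, y) samples."""
    per_point = []
    for te, xe, ye in expected_traj:
        ta, _, _ = min(actual_traj, key=lambda p: math.hypot(xe - p[1], ye - p[2]))
        per_point.append(abs(te - ta))
    weights = weights or [1.0] * len(per_point)
    return sum(w * d for w, d in zip(weights, per_point)) / sum(weights)
```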
  • Implementation B2: based on the position information of multiple expected trajectory points and the time information of the obstacle vehicle arriving at each expected trajectory point, the time characteristics of the historical expected driving trajectory of the obstacle vehicle are determined. Based on the position information of multiple actual trajectory points and the time information of the obstacle vehicle arriving at each actual trajectory point, the time characteristics of the historical actual driving trajectory of the obstacle vehicle are determined. Afterwards, the characteristic distance between the time characteristics of the historical expected driving trajectory of the obstacle vehicle and the time characteristics of the historical actual driving trajectory of the obstacle vehicle is calculated, and the time deviation information corresponding to the obstacle vehicle is obtained based on the characteristic distance, such as mapping the characteristic distance to obtain the time deviation information corresponding to the obstacle vehicle.
  • the time characteristics of the historical expected driving trajectory of the obstacle vehicle and the time characteristics of the historical actual driving trajectory of the obstacle vehicle can be represented by two feature vectors.
  • the embodiment of the present application can calculate the feature distance between the two feature vectors, query the mapping relationship between the feature distance and the time deviation information based on the feature distance, and obtain the time deviation information mapped to the feature distance.
  • the time deviation information obtained by the query is the time deviation information corresponding to the obstacle vehicle.
  • the spatial deviation information corresponding to the obstacle vehicle and the time deviation information corresponding to the obstacle vehicle are combined by weighted summation, weighted averaging, or the like, to obtain the deviation information of the obstacle vehicle.
  • the deviation information of the obstacle vehicle can reflect the difference between the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle at the spatiotemporal level. Since the deviation information of the obstacle vehicle is determined based on the spatial deviation information corresponding to the obstacle vehicle and the temporal deviation information corresponding to the obstacle vehicle, the embodiment of the present application decouples the deviation information of the obstacle vehicle into the difference at the spatial level and the difference at the temporal level.
  • Figure 3 is a schematic diagram of a spatial level meeting provided by an embodiment of the present application.
  • Figure 3 shows vehicle A and vehicle B on the road, and the movement direction of vehicle A is opposite to the movement direction of vehicle B.
  • vehicle A and vehicle B meet, one optional method is that vehicle A moves along trajectory A-m1 and vehicle B moves along trajectory B-m1, and another optional method is that vehicle A moves along trajectory A-m2 and vehicle B moves along trajectory B-m2. Therefore, for vehicle A, at the spatial level, vehicle A can move along trajectory A-m1 or along trajectory A-m2, and this movement difference is the difference at the spatial level.
  • FIG. 4 is a schematic diagram of a time-level meeting provided by an embodiment of the present application.
  • FIG. 4 shows vehicles A, B and C on the road, wherein the movement direction of vehicle A is opposite to that of vehicle B, while vehicle C is stationary, or the movement direction of vehicle C is the same as that of vehicle A.
  • Vehicle B moves along the trajectory B-m.
  • one optional method is that vehicle A moves along the trajectory A-m1. In this case, vehicle A meets vehicle B behind vehicle C.
  • Another optional method is that vehicle A moves along the trajectory A-m2. In this case, vehicle A meets vehicle B after passing vehicle C. Therefore, for vehicle A, at the time level, vehicle A can move along the trajectory A-m1 or along the trajectory A-m2. This movement difference is the difference at the time level.
  • in the embodiment of the present application, the difference between the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle is calculated at the spatial level and the temporal level respectively, and the deviation information of the obstacle vehicle is calculated based on these spatial-level and temporal-level differences, which is conducive to improving the accuracy of the deviation information of the obstacle vehicle, so that the autonomous driving vehicle can plan a more accurate expected trajectory and improve the driving safety of the autonomous driving vehicle.
  • the historical expected driving trajectory of any obstacle vehicle includes expected trajectory points at multiple moments
  • the historical actual driving trajectory of any obstacle vehicle includes actual trajectory points at multiple moments.
  • step 2061 includes: for any moment, based on the position information of the expected trajectory point at that moment and the position information of the actual trajectory point at that moment, determining the distance between the expected trajectory point and the actual trajectory point corresponding to that moment; and based on the distance between the expected trajectory point and the actual trajectory point corresponding to each moment, determining the deviation information between the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle.
  • an expected trajectory point corresponds to a time (or moment), that is, an expected trajectory point can be understood as an expected trajectory point at a moment.
  • the historical actual driving trajectory of any obstacle vehicle includes the position information of multiple actual trajectory points and the time information of the obstacle vehicle arriving at each actual trajectory point, so an actual trajectory point can be understood as an actual trajectory point at a moment.
  • the distance between the expected trajectory point and the actual trajectory point corresponding to that moment is calculated according to the distance formula.
  • the distance between the expected trajectory point and the actual trajectory point corresponding to each moment is averaged, summed, etc. to obtain the calculation result, and the calculation result is used as the deviation information of the obstacle vehicle, or the calculation result is mapped to the deviation information of the obstacle vehicle.
  • the deviation information of the obstacle vehicle is proportional to the calculation result, that is, the larger the calculation result, the larger the deviation information of the obstacle vehicle, and the greater the difference between the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle.
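The time-indexed variant described above could be sketched as follows, assuming the expected and actual trajectory points are keyed by a shared set of moments (an illustrative data layout, not specified in the application).

```python
import math

def deviation_by_moment(expected_by_t, actual_by_t):
    """Per-moment variant (sketch): pair the expected and actual trajectory
    points that correspond to the same moment, average the distances, and use
    the result as the deviation information of the obstacle vehicle.
    Both arguments map a moment t to an (x, y) position."""
    common_moments = sorted(set(expected_by_t) & set(actual_by_t))
    dists = [math.hypot(expected_by_t[t][0] - actual_by_t[t][0],
                        expected_by_t[t][1] - actual_by_t[t][1])
             for t in common_moments]
    return sum(dists) / len(dists) if dists else 0.0
```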
  • Step 2062: determine historical deviation information based on the deviation information between the historical expected driving trajectories of the obstacle vehicles and the historical actual driving trajectories of the obstacle vehicles.
  • the deviation information between the historical expected driving trajectory of any obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle can be recorded as the deviation information of the obstacle vehicle.
  • the deviation information of each obstacle vehicle can be combined by weighted averaging, weighted summation, or the like, to obtain the historical deviation information, which may also be referred to as the Bayesian equilibrium deviation.
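Combining each obstacle vehicle's spatial and temporal deviations and then aggregating across obstacle vehicles (steps 2061 and 2062) might look like this; all weights are illustrative placeholders.

```python
def historical_deviation(per_vehicle_spatial, per_vehicle_time,
                         w_spatial=0.5, w_time=0.5, vehicle_weights=None):
    """Sketch of steps 2061-2062: merge each obstacle vehicle's spatial and
    temporal deviation with a weighted sum, then take a weighted average over
    all obstacle vehicles to obtain the historical deviation information."""
    per_vehicle = [w_spatial * s + w_time * t
                   for s, t in zip(per_vehicle_spatial, per_vehicle_time)]
    vehicle_weights = vehicle_weights or [1.0] * len(per_vehicle)
    return sum(w * d for w, d in zip(vehicle_weights, per_vehicle)) / sum(vehicle_weights)
```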
  • Step 207: determine a trial route based on the historical actual driving trajectory of the autonomous driving vehicle and the historical deviation information.
  • the historical actual driving trajectory of the autonomous driving vehicle is a trial route corresponding to a time period before the current time period.
  • the autonomous driving vehicle can perform joint planning based on the historical actual driving trajectory of the autonomous driving vehicle and the historical deviation information to plan a first joint route for the current time period, wherein the first joint route includes the trial route.
  • step 207 includes steps 2071 to 2074 .
  • Step 2071: in response to the historical deviation information being less than the first threshold, determining at least one first candidate route of the obstacle vehicle and at least one first candidate route of the autonomous driving vehicle based on the historical actual driving trajectory of the autonomous driving vehicle and the historical actual driving trajectory of the obstacle vehicle.
  • the first threshold is a set value.
  • if the historical deviation information is less than the first threshold, it means that the difference between the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle is small, which meets the expectations of the autonomous driving vehicle.
  • the autonomous driving vehicle can obtain the driving intention of the autonomous driving vehicle by analyzing the historical actual driving trajectory of the autonomous driving vehicle. By analyzing the historical actual driving trajectory of the obstacle vehicle, the driving intention of the obstacle vehicle can be obtained, thereby obtaining the driving intention of all subjects in the environment where the autonomous driving vehicle is located, where all subjects include autonomous driving vehicles and obstacle vehicles. Based on the driving intentions of all subjects in the environment where the autonomous driving vehicle is located, at least one first candidate route for the obstacle vehicle and at least one first candidate route for the autonomous driving vehicle can be planned.
  • FIG. 5 is a schematic diagram of trajectory planning provided by an embodiment of the present application.
  • the autonomous driving vehicle is vehicle A, and the historical actual driving trajectory of vehicle A is A-m'; the obstacle vehicles include vehicle B and vehicle C, wherein the historical actual driving trajectory of vehicle B is B-m', and vehicle C remains stationary in its historical actual driving trajectory.
  • vehicle A obtains that the driving intention of vehicle A is to move forward; by analyzing B-m', the driving intention of vehicle B is also to move forward; and by analyzing the historical actual driving trajectory of vehicle C, the driving intention of vehicle C is to remain stationary.
  • vehicle A can plan that the first candidate routes of vehicle A include A-m1 and A-m2, the first candidate route of vehicle B is B-m, and the first candidate route of vehicle C is to remain stationary.
  • step 2071 includes: determining the trajectory point distribution information of the obstacle vehicle and the trajectory point distribution information of the autonomous driving vehicle based on the historical actual driving trajectory of the autonomous driving vehicle and the historical actual driving trajectory of the obstacle vehicle; for a target subject, generating multiple trajectory points of the target subject based on the trajectory point distribution information of the target subject, where the target subject is the obstacle vehicle or the autonomous driving vehicle; sampling multiple target trajectory points of the target subject from the multiple trajectory points of the target subject; and generating at least one first candidate route of the target subject based on the multiple target trajectory points of the target subject.
  • a sampling process is performed for multiple trajectory points of the target subject, thereby obtaining multiple target trajectory points of the target subject, and a first candidate route of the target subject is generated based on the multiple target trajectory points of the target subject.
  • multiple sampling processes are performed for multiple trajectory points of the target subject, and a first candidate route can be obtained through each sampling process, thereby generating multiple first candidate routes of the target subject. In this way, at least one first candidate route of the target subject can be obtained. That is, at least one first candidate route of the autonomous driving vehicle is obtained, and at least one first candidate route of the obstacle vehicle is obtained.
  • the driving intentions of all subjects in the environment can be determined by analyzing the historical actual driving trajectories of the autonomous driving vehicle and the historical actual driving trajectories of the obstacle vehicle. Based on the driving intentions of all subjects in the environment and the historical actual driving trajectories of the obstacle vehicle, the trajectory point distribution information of the obstacle vehicle can be determined, wherein the trajectory point distribution information of the obstacle vehicle can reflect the distribution satisfied by the trajectory points of the obstacle vehicle, for example, the trajectory points of the obstacle vehicle satisfy the Gaussian distribution.
  • the trajectory point distribution information of the autonomous driving vehicle can be determined, wherein the trajectory point distribution information of the autonomous driving vehicle can reflect the distribution satisfied by the trajectory points of the autonomous driving vehicle.
  • the obstacle vehicle is taken as the target subject, or the autonomous driving vehicle is taken as the target subject.
  • the target subject since the trajectory point distribution information of the target subject can reflect the distribution satisfied by the trajectory points of the target subject, multiple trajectory points of the target subject can be generated based on the trajectory point distribution information of the target subject, and the position information of these multiple trajectory points satisfies the distribution.
  • the first target trajectory point is sampled from the multiple trajectory points of the target subject based on the historical actual driving trajectory of the target subject, and the distance between the first target trajectory point and the last actual trajectory point in the historical actual driving trajectory is less than the distance threshold.
  • the next target trajectory point is sampled from the multiple trajectory points of the target subject based on the historical actual driving trajectory of the target subject and the sampled target trajectory points, and the distance between the next target trajectory point and the last target trajectory point sampled is less than the distance threshold, until the loop termination condition is reached.
  • the distance threshold is a settable value, and it can also be a value determined based on information such as the acceleration and speed of the target subject.
  • the time information of the target subject arriving at each target trajectory point is determined based on the unit time.
  • the difference between the time information of the target subject arriving at two adjacent target trajectory points is the unit time, for example, at least one unit time.
  • alternatively, the time information of the target subject arriving at each target trajectory point is determined such that the difference between the arrival times at two adjacent target trajectory points, the distance between the two adjacent target trajectory points, the acceleration and speed of the target subject, and other information satisfy the kinematic formula.
  • a first candidate route of the target subject includes multiple target trajectory points of the target subject and the time information of the target subject reaching each target trajectory point. In the above manner, at least one first candidate route of the obstacle vehicle and at least one first candidate route of the autonomous driving vehicle can be obtained.
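A rough sketch of the sampling procedure in step 2071 follows; it folds the point-generation and point-sampling stages together, and the Gaussian step model, distance threshold, and fixed unit time are assumptions drawn from the examples in the text.

```python
import math
import random

def generate_candidate_route(last_actual_point, mean_step, step_std,
                             num_points=10, distance_threshold=5.0,
                             unit_time=0.5, start_time=0.0, max_tries=100):
    """Sketch of step 2071 for one target subject: draw trajectory points from a
    Gaussian trajectory-point distribution, keep points within the distance
    threshold of the previously sampled point, and attach arrival times spaced
    by the unit time. Returns a list of (t, x, y) samples."""
    route = []
    prev = last_actual_point
    t = start_time
    for _ in range(num_points):
        for _ in range(max_tries):
            candidate = (prev[0] + random.gauss(mean_step[0], step_std),
                         prev[1] + random.gauss(mean_step[1], step_std))
            if math.hypot(candidate[0] - prev[0], candidate[1] - prev[1]) < distance_threshold:
                break
        t += unit_time
        route.append((t, candidate[0], candidate[1]))
        prev = candidate
    return route
```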
  • Step 2072: combine at least one first candidate route of the obstacle vehicle and at least one first candidate route of the autonomous driving vehicle to obtain at least one first combined route, wherein any first combined route includes a first candidate route of the obstacle vehicle and a first candidate route of the autonomous driving vehicle.
  • a first candidate route for the obstacle vehicle is randomly sampled from at least one first candidate route for the obstacle vehicle; a first candidate route for the autonomous driving vehicle is randomly sampled from at least one first candidate route for the autonomous driving vehicle.
  • the sampled first candidate route for the obstacle vehicle and the sampled first candidate route for the autonomous driving vehicle are regarded as the first combined route. In this way, at least one first combined route can be determined.
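Step 2072 could be sketched as follows; the dictionary layout of a first combined route and the number of combinations are illustrative choices.

```python
import random

def build_combined_routes(obstacle_candidates, ego_candidates, num_combinations=10):
    """Step 2072 (sketch): each first combined route pairs one first candidate
    route of the obstacle vehicle with one first candidate route of the
    autonomous driving vehicle, both sampled at random."""
    combined = []
    for _ in range(num_combinations):
        combined.append({
            "obstacle_route": random.choice(obstacle_candidates),
            "ego_route": random.choice(ego_candidates),
        })
    return combined
```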
  • Step 2073: determine the recommendation index of each first combined route.
  • a recommendation index function can be set, and the recommendation index function evaluates the quality of any first combined route to obtain the recommendation index of that first combined route; that is, the recommendation index function is used to determine the recommendation index of the first combined route.
  • when the recommendation index function evaluates the quality of a first combined route, it should consider not only the efficiency of the first candidate route of the autonomous driving vehicle, but also evaluate safety with reference to the first candidate route of the obstacle vehicle.
  • the larger the recommendation index of a first combined route, the better the first combined route is, and the better the balance between driving efficiency and driving safety when the autonomous driving vehicle moves based on the first candidate route of the autonomous driving vehicle in the first combined route.
  • since the embodiment of the present application selects the first combined route with a larger recommendation index, the selected first combined route is better, and the first candidate route of the autonomous driving vehicle included in it is also better. If the autonomous driving vehicle moves according to that first candidate route, driving efficiency and driving safety can be better balanced.
  • step 2073 includes: determining parameter distribution information of a recommendation index function based on the historical actual driving trajectory of the obstacle vehicle, the recommendation index function being used to determine the recommendation index of the first combined route; generating multiple candidate parameters of the recommendation index function based on the parameter distribution information of the recommendation index function; sampling target parameters of the recommendation index function from the multiple candidate parameters of the recommendation index function; and determining the recommendation index of each first combined route based on the target parameters of the recommendation index function.
  • the autonomous driving vehicle may first analyze the historical expected driving trajectory and the historical actual driving trajectory of the obstacle vehicle to determine the historical deviation information, wherein the method for determining the historical deviation information has been described above and will not be repeated here.
  • the parameter distribution information of the recommendation index function is determined based on the value of the historical deviation information.
  • the parameter distribution information of the recommendation index function can reflect the distribution satisfied by the parameters of the recommendation index function, for example, the parameters of the recommendation index function satisfy a Gaussian distribution.
  • since the parameter distribution information of the recommendation index function reflects the distribution satisfied by the parameters of the recommendation index function, multiple candidate parameters of the recommendation index function can be generated based on it, and the values of the multiple candidate parameters satisfy that distribution.
  • the target parameter of the recommendation index function is sampled from the multiple candidate parameters of the recommendation index function.
  • the target parameter can be used to balance the driving safety and driving efficiency of the autonomous driving vehicle.
  • the target parameter of the recommendation index function corresponding to the previous time period can be obtained.
  • for each candidate parameter, the difference between the target parameter corresponding to the previous time period and the candidate parameter is calculated to obtain the difference corresponding to the candidate parameter.
  • in this way, the difference corresponding to each candidate parameter can be determined, and the candidate parameter whose difference satisfies the difference condition is used as the target parameter of the recommendation index function corresponding to the current time period.
  • the embodiment of the present application does not limit the difference that satisfies the difference condition.
  • the difference that satisfies the difference condition is the minimum difference.
  • in this way, the recommendation index function corresponding to the current time period can be determined, and the recommendation index of each first combined route is then determined using the recommendation index function of the current time period.
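One possible reading of the parameter-sampling procedure above, as a sketch: candidate parameter vectors are drawn from a Gaussian parameter distribution and the candidate closest to the previous period's target parameters (the minimum-difference condition) is kept. The vector layout and distance measure are assumptions.

```python
import random

def sample_target_parameters(prev_target, param_means, param_stds, num_candidates=50):
    """Draw candidate parameter vectors for the recommendation index function
    from a Gaussian parameter distribution and pick the candidate whose
    difference from the previous period's target parameters is smallest."""
    candidates = [
        [random.gauss(m, s) for m, s in zip(param_means, param_stds)]
        for _ in range(num_candidates)
    ]
    def difference(candidate):
        return sum(abs(c - p) for c, p in zip(candidate, prev_target))
    return min(candidates, key=difference)
```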
  • determining the recommendation index of each first combined route based on the target parameters of the recommendation index function includes: for any first combined route, obtaining at least one piece of reference information of the first combined route, where any piece of reference information is any one of comfort, safety, speed of the autonomous driving vehicle, uncertainty, politeness and circulation; comfort is used to describe acceleration, safety is used to describe collision information, uncertainty is used to describe the concentration of trajectory points, politeness is used to describe the impact of the autonomous driving vehicle on the movement of the obstacle vehicle, and circulation is used to describe the average speed of vehicles in the environment where the autonomous driving vehicle is located; and determining the recommendation index of the first combined route based on each piece of reference information of the first combined route and the target parameter of the recommendation index function corresponding to each piece of reference information.
  • any first combined route corresponds to at least one of reference information such as comfort, safety, speed of the autonomous driving vehicle, uncertainty, politeness, and circulation.
  • comfort is used to describe the acceleration of the autonomous driving vehicle and/or the obstacle vehicle.
  • the comfort includes the acceleration of the autonomous driving vehicle and the acceleration of the obstacle vehicle, or the comfort includes the jerk of the autonomous driving vehicle and the jerk of the obstacle vehicle.
  • the jerk here can be expressed by the first-order derivative of acceleration, that is, the second-order derivative of velocity, which refers to the acceleration of acceleration.
  • Safety is used to describe the collision information between the autonomous driving vehicle and the obstacle vehicle. Since the first combined route includes a first candidate route for the obstacle vehicle and a first candidate route for the autonomous driving vehicle, the autonomous driving vehicle can estimate the collision information between the autonomous driving vehicle and the obstacle vehicle based on the first candidate route for the autonomous driving vehicle and the first candidate route for the obstacle vehicle.
  • the collision information includes the collision time and the collision distance.
  • Uncertainty is used to describe the concentration of the trajectory points of the target subject, which is an obstacle vehicle or an autonomous driving vehicle. That is, uncertainty is used to describe the concentration of the trajectory points of the obstacle vehicle and/or the autonomous driving vehicle. The more concentrated the trajectory points are, the smaller the uncertainty is.
  • the trajectory point distribution information of the obstacle vehicle satisfies the Gaussian distribution
  • the trajectory point distribution information of the autonomous driving vehicle also satisfies the Gaussian distribution.
  • the sum of the variances of the two Gaussian distributions, or their average value, can be used as the uncertainty.
  • the politeness level is used to describe the impact of the autonomous driving vehicle on the movement of the obstacle vehicle.
  • the first combined route includes a first candidate route for the obstacle vehicle and a first candidate route for the autonomous driving vehicle.
  • the politeness level can be determined based on the first candidate route for the obstacle vehicle and the first candidate route for the autonomous driving vehicle.
  • the politeness level is a parameter that measures the amplitude of the obstacle vehicle's movement.
  • the obstacle vehicle movement here refers to the movement performed by the obstacle vehicle to avoid collision with the autonomous driving vehicle.
  • the obstacle vehicle movement can be deceleration.
  • the politeness level can measure the amplitude of the deceleration; the larger the amplitude of the obstacle vehicle's movement, the greater the politeness level.
  • the circulation degree is used to describe the average speed of vehicles in the environment where the autonomous driving vehicle is located; therefore, the circulation degree corresponding to the autonomous driving vehicle can be calculated.
  • the average speed of the autonomous driving vehicle and the average speed of the obstacle vehicles are taken as the circulation degree.
  • the weighted sum of each reference information of the first combined route and the target parameter of the recommendation index function corresponding to each reference information is calculated to obtain the recommendation index of the first combined route.
  • the recommendation index of the first combined route = comfort * coefficient 1 + safety * coefficient 2 + speed of the autonomous driving vehicle * coefficient 3 + uncertainty * coefficient 4 + politeness * coefficient 5 + circulation * coefficient 6.
  • coefficient 1 is the target parameter of the recommendation index function corresponding to comfort.
  • Coefficient 2 is the target parameter of the recommendation index function corresponding to safety.
  • Coefficient 3 is the target parameter of the recommendation index function corresponding to the speed of the autonomous driving vehicle.
  • Coefficient 4 is the target parameter of the recommendation index function corresponding to uncertainty.
  • Coefficient 5 is the target parameter of the recommendation index function corresponding to politeness.
  • Coefficient 6 is the target parameter of the recommendation index function corresponding to circulation.
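Putting the six reference values and their coefficients together, the recommendation index of a first combined route reduces to a weighted sum; the dictionary keys below are illustrative, and sign conventions for cost-like terms are not specified in the text.

```python
def recommendation_index(reference, coefficients):
    """Recommendation index of one first combined route as a weighted sum of its
    reference information; both arguments map the keys 'comfort', 'safety',
    'speed', 'uncertainty', 'politeness', 'circulation' to numbers.
    Cost-like terms may carry negative coefficients, depending on convention."""
    keys = ("comfort", "safety", "speed", "uncertainty", "politeness", "circulation")
    return sum(reference[k] * coefficients[k] for k in keys)
```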
  • Step 2074: select the first combined route with the highest recommendation index from the at least one first combined route, and use the first candidate route of the autonomous driving vehicle included in the first combined route with the highest recommendation index as the trial route.
  • At least one first combination route can be sorted in descending order according to the recommended index to obtain the sorted first combination routes.
  • the first sorted first combination route is used as the first joint route.
  • alternatively, the at least one first combination route can be sorted in ascending order according to the recommended index to obtain the sorted first combination routes.
  • the last sorted first combination route is then used as the first joint route.
  • the first joint route includes a first candidate route for the obstacle vehicle and a first candidate route for the autonomous driving vehicle, wherein the first candidate route for the obstacle vehicle is the expected driving trajectory corresponding to the obstacle vehicle in the current time period, and the first candidate route for the autonomous driving vehicle is the trial route corresponding to the autonomous driving vehicle in the current time period.
  • meeting traffic rules includes that the autonomous driving vehicle and the obstacle vehicle on the road cannot collide, so there is no intersection between the trial route of the autonomous driving vehicle and the expected driving trajectory of the obstacle vehicle.
  • meeting traffic rules also includes that, taking the driving direction of the obstacle vehicle as the positive direction, the obstacle vehicle drives close to the right side of the road, so both the trial route of the autonomous driving vehicle and the expected driving trajectory of the obstacle vehicle meet the requirement of keeping to the right side of the road.
  • step 207 includes steps 2075 to 2077 .
  • Step 2075 In response to the historical deviation information being not less than the first threshold, obtaining at least one mapping relationship, any one of which is used to describe the mapping relationship between the driving trajectory set and the reference route, and the driving trajectory set includes at least one driving trajectory.
  • the autonomous driving vehicle can directly determine the trial route of the autonomous driving vehicle in the current time period.
  • the autonomous driving vehicle may be configured with at least one mapping relationship.
  • the autonomous driving vehicle may call each mapping relationship. Any mapping relationship is used to describe the mapping relationship between a driving trajectory set and a reference route, and the driving trajectory set includes at least one driving trajectory.
  • in other words, each mapping relationship records which driving trajectory set maps to which reference route; that driving trajectory set is the driving trajectory set corresponding to the mapping relationship, and that reference route is the reference route corresponding to the mapping relationship.
  • Step 2076 Select a target mapping relationship from at least one mapping relationship, in which the driving trajectory set matches the historical actual driving trajectory of the autonomous driving vehicle and the historical actual driving trajectory of the obstacle vehicle.
  • the autonomous driving vehicle can match each mapping relationship with a historical actual trajectory set, where the historical actual trajectory set includes the historical actual driving trajectory of the autonomous driving vehicle and the historical actual driving trajectory of the obstacle vehicle.
  • the mapping relationship is determined as the target mapping relationship.
  • the embodiment of the present application does not limit the way in which the driving trajectory set is matched with the historical actual trajectory set. Exemplarily, if each driving trajectory in the driving trajectory set is the same as each historical actual driving trajectory in the historical actual trajectory set, the driving trajectory set matches the historical actual trajectory set.
  • Step 2077 determine the reference route corresponding to the target mapping relationship as the trial route.
  • the target mapping relationship describes the mapping relationship between the driving trajectory set and the reference route.
  • the reference route corresponding to the target mapping relationship can be determined as the trial route of the autonomous driving vehicle.
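  • A minimal sketch of steps 2075 to 2077 is given below, assuming the exact-match comparison mentioned as an example above; the data layout of `mappings` and the function name are illustrative assumptions.

```python
def select_trial_route(mappings, ego_history, obstacle_history):
    """Illustrative lookup for steps 2075-2077.

    `mappings` is a list of (trajectory_set, reference_route) pairs, where a
    trajectory set is a list of trajectories and each trajectory is a list of
    (x, y) points.  Following the exact-match example, a mapping is the target
    mapping when its trajectory set equals the historical actual trajectories
    of the ego vehicle and the obstacle vehicle.
    """
    history = [list(map(tuple, ego_history)), list(map(tuple, obstacle_history))]
    for trajectory_set, reference_route in mappings:
        normalized = [list(map(tuple, t)) for t in trajectory_set]
        if len(normalized) == len(history) and all(t in history for t in normalized):
            # Step 2077: the reference route of the target mapping becomes the trial route.
            return reference_route
    return None  # no target mapping relationship found
```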
  • the method further includes: determining at least one second candidate route of the obstacle vehicle based on the trial route and the historical actual driving trajectory of the obstacle vehicle; combining any second candidate route of the obstacle vehicle with the trial route to obtain a second combined route; determining the recommendation index of each second combined route, and selecting the second combined route with the highest recommendation index from the second combined routes.
  • the autonomous vehicle can obtain the driving intention of the autonomous vehicle by analyzing the trial route of the autonomous vehicle, and can obtain the driving intention of the obstacle vehicle by analyzing the historical actual driving trajectory of the obstacle vehicle, thereby obtaining the driving intention of all subjects in the environment where the autonomous vehicle is located. Based on the driving intention of all subjects in the environment, at least one second candidate route of the obstacle vehicle can be planned. Among them, the generation principle of the second candidate route of the obstacle vehicle is similar to the generation principle of the first candidate route of the target subject, which can be referred to the description in step 2071 and will not be repeated here.
  • a second candidate route for the obstacle vehicle is randomly sampled from at least one second candidate route for the obstacle vehicle.
  • the trial route of the autonomous driving vehicle and the sampled second candidate route for the obstacle vehicle are regarded as a second combined route. In this way, at least one second combined route can be determined.
  • the recommended index function can be used to determine the recommended index of the second combined route.
  • the larger the recommended index of the second combined route, the better the second combined route and the higher the safety of the autonomous driving vehicle and the obstacle vehicle.
  • the method for determining the recommended index of the second combined route is similar to the method for determining the recommended index of the first combined route. See the description in step 2073 above, which will not be repeated here.
  • At least one second combined route may be sorted in descending order according to the recommended index to obtain the sorted second combined routes.
  • the first sorted second combined route is used as the first joint route.
  • alternatively, the at least one second combined route may be sorted in ascending order according to the recommended index to obtain the sorted second combined routes.
  • the last sorted second combined route is then used as the first joint route.
  • the first joint route includes the trial route of the autonomous driving vehicle and a second candidate route of the obstacle vehicle, and the second candidate route of the obstacle vehicle is the expected driving trajectory corresponding to the obstacle vehicle in the current time period.
  • Step 202 while the autonomous driving vehicle is driving along the trial route, obtain relevant information of the obstacle vehicle.
  • the autonomous driving vehicle may obtain a first combined route, which includes the trial route of the autonomous driving vehicle and the expected driving trajectory of the obstacle vehicle.
  • the obstacle vehicle may be tentatively guided to move according to the expected driving trajectory of the obstacle vehicle, so that the obstacle vehicle may show the driving intention of the obstacle vehicle as soon as possible.
  • the autonomous driving vehicle is vehicle A
  • the trial route of the autonomous driving vehicle is A-m
  • the obstacle vehicle is vehicle B.
  • the autonomous driving vehicle is controlled to move according to A-m, so as to tentatively guide the obstacle vehicle to move according to the expected driving trajectory B-m1 of the obstacle vehicle.
  • the obstacle vehicle can express its driving intention more quickly, so that the autonomous driving vehicle can make decisions earlier and improve the driving safety of the autonomous driving vehicle.
  • the actual driving trajectory of the obstacle vehicle B in the current time period is B-m2, that is, the obstacle vehicle B first approaches the right side of itself from the middle of the road, then approaches the left side of itself, and then keeps going straight.
  • if the autonomous driving vehicle blindly compromised with the obstacle vehicle, the autonomous driving vehicle A would also need to approach the left side of itself whenever the obstacle vehicle B does so.
  • the autonomous driving vehicle A is controlled to move continuously according to A-m in the current time period.
  • even if the obstacle vehicle B approaches the left side of itself, the autonomous driving vehicle will not change its direction of movement, thereby avoiding the phenomenon that the autonomous driving vehicle is blocked in place or shakes violently because it blindly compromises with the obstacle vehicle, and improving the movement efficiency and anti-noise performance of the autonomous driving vehicle.
  • the sensors of the autonomous driving vehicle can sense the actual position of the obstacle vehicle multiple times in the current time period, and the time of sensing each time can be obtained. Therefore, the autonomous driving vehicle can obtain the actual driving trajectory of the obstacle vehicle in the current time period, and the actual driving trajectory of the obstacle vehicle includes the position information of multiple actual trajectory points and the time information of the obstacle vehicle arriving at each actual trajectory point.
  • the actual driving trajectory of the obstacle vehicle is the relevant information of the obstacle vehicle.
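  • As an illustration, the actual driving trajectory described above can be represented as a list of timestamped points; the class names below are assumptions introduced for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrajectoryPoint:
    x: float          # position of the actual trajectory point
    y: float
    timestamp: float  # time at which the obstacle vehicle reached this point

@dataclass
class ActualTrajectory:
    """Actual driving trajectory of the obstacle vehicle in one time period."""
    points: List[TrajectoryPoint]
```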
  • Step 203 determining the driving intention of the obstacle vehicle according to the relevant information of the obstacle vehicle.
  • the autonomous driving vehicle can determine the driving intention of the obstacle vehicle by analyzing its actual driving trajectory.
  • the driving intention of the obstacle vehicle reflects the movement trend of the obstacle vehicle. For example, if the obstacle vehicle tends to slow down and turn left, the driving intention of the obstacle vehicle can reflect the information of slowing down and turning left.
  • step 203 includes: determining the intention of the obstacle vehicle in the time dimension according to the relevant information of the obstacle vehicle; determining the intention of the obstacle vehicle in the space dimension according to the relevant information of the obstacle vehicle; and determining the intention of the obstacle vehicle in the time dimension and the intention of the obstacle vehicle in the space dimension as the driving intention of the obstacle vehicle.
  • the autonomous driving vehicle can determine the location of the obstacle vehicle by analyzing the actual driving trajectory of the obstacle vehicle.
  • the intention of the obstacle vehicle in the time dimension can reflect the trend of the obstacle vehicle's movement speed; in plainer terms, it can reflect whether the obstacle vehicle will accelerate, maintain a constant speed, or decelerate next.
  • the autonomous vehicle can determine the intention of the obstacle vehicle in the spatial dimension, and the intention of the obstacle vehicle in the spatial dimension can reflect the trend of the obstacle vehicle's movement direction.
  • the intention of the obstacle vehicle in the spatial dimension can reflect whether the obstacle vehicle will drive on the left side of the road or on the right side of the road next.
  • the intention of the obstacle vehicle in the time dimension and the intention of the obstacle vehicle in the space dimension can be combined to obtain the driving intention of the obstacle vehicle. Therefore, the driving intention of the obstacle vehicle can reflect the trend of the movement direction and movement speed of the obstacle vehicle. For example, the driving intention of the obstacle vehicle can reflect that the obstacle vehicle will accelerate and approach the left side of the road next. In this case, the obstacle vehicle will quickly approach the left side of the road; or, the driving intention of the obstacle vehicle can reflect that the obstacle vehicle will decelerate and approach the left side of the road next. In this case, the obstacle vehicle will slowly approach the left side of the road.
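  • The sketch below illustrates one possible way to extract the two intention dimensions from the actual driving trajectory, taken here as a chronological list of (x, y, t) samples; the thresholds, labels and function name are assumptions for the example and not part of the application.

```python
def estimate_intentions(trajectory, speed_eps=0.2, lateral_eps=0.1):
    """Illustrative intention estimation from an actual driving trajectory.

    `trajectory` is a chronological list of (x, y, t) samples, where x is the
    longitudinal position, y the lateral offset (positive to the left) and t
    the timestamp.  The thresholds are arbitrary illustrative values.
    """
    # Time dimension: trend of the movement speed.
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(trajectory, trajectory[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    speed_change = speeds[-1] - speeds[0] if len(speeds) >= 2 else 0.0
    if speed_change > speed_eps:
        time_intention = "accelerate"
    elif speed_change < -speed_eps:
        time_intention = "decelerate"
    else:
        time_intention = "keep speed"

    # Space dimension: trend of the movement direction (lateral drift).
    lateral_change = trajectory[-1][1] - trajectory[0][1]
    if lateral_change > lateral_eps:
        space_intention = "drift left"
    elif lateral_change < -lateral_eps:
        space_intention = "drift right"
    else:
        space_intention = "keep straight"

    # The driving intention combines both dimensions.
    return time_intention, space_intention
```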
  • Step 204 performing autonomous driving decision planning for the autonomous driving vehicle at least based on the driving intention of the obstacle vehicle.
  • the autonomous driving vehicle can make autonomous driving decision planning according to the driving intention of the obstacle vehicle to plan a second joint route, which includes the trial route of the autonomous driving vehicle in the next time period and the expected driving trajectory of the obstacle vehicle in the next time period.
  • the second joint route is determined in a similar manner to the first joint route, which will not be described in detail here.
  • step 204 includes: in response to a change in the driving intention of the obstacle vehicle, determining a target driving route of the obstacle vehicle at least according to the driving intention of the obstacle vehicle; and determining a target driving route of the autonomous driving vehicle based on the target driving route of the obstacle vehicle.
  • the autonomous driving vehicle can obtain the driving intention of the obstacle vehicle in the historical time period, wherein the driving intention of the obstacle vehicle in the historical time period is obtained by analyzing the historical actual driving trajectory of the obstacle vehicle, and the historical time period is the time period before the current time period. By comparing the driving intention of the obstacle vehicle in the current time period with the driving intention of the obstacle vehicle in the historical time period, it is determined whether the driving intention of the obstacle vehicle in the current time period has changed.
  • the autonomous driving vehicle can compromise with the obstacle vehicle, and plan the driving route of the autonomous driving vehicle while ensuring that the obstacle vehicle drives according to its driving intention, so that the obstacle vehicle and the autonomous driving vehicle can move in cooperation to ensure driving safety. Therefore, the embodiment of the present application will first determine the target driving route of the obstacle vehicle according to the driving intention of the obstacle vehicle, so that the obstacle vehicle moves according to its target driving route and ensures that the obstacle vehicle drives according to its driving intention.
  • the autonomous driving vehicle determines the target driving route of the autonomous driving vehicle based on the target driving route of the obstacle vehicle, and the target driving route of the autonomous driving vehicle and the target driving route of the obstacle vehicle need to meet traffic rules to ensure that the autonomous driving vehicle and the obstacle vehicle can drive safely.
  • step 204 includes: in response to the obstacle vehicle's driving intention not changing, obtaining the distance between the autonomous driving vehicle and the obstacle vehicle; if the distance between the autonomous driving vehicle and the obstacle vehicle is less than a distance threshold, controlling the autonomous driving vehicle to stop driving.
  • a second joint route can be planned according to the driving intention of the obstacle vehicle.
  • the second joint route includes the trial route of the autonomous driving vehicle in the next time period and the expected driving trajectory of the obstacle vehicle in the next time period, so that the autonomous driving vehicle can drive according to the trial route in the next time period.
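  • Putting the branches of step 204 together, a minimal decision sketch could look as follows; the threshold value and the returned action labels are assumptions made for illustration.

```python
def decide_next_action(intention_now, intention_before, gap_to_obstacle,
                       distance_threshold=5.0):
    """Illustrative branch structure of step 204; the threshold and the
    returned action labels are assumptions for this sketch."""
    if intention_now != intention_before:
        # Driving intention changed: plan the obstacle vehicle's target route
        # first, then derive the ego vehicle's target route from it.
        return "replan_target_routes"
    if gap_to_obstacle < distance_threshold:
        # Intention unchanged and the vehicles are too close: stop driving.
        return "stop"
    # Otherwise keep probing: plan the second joint route and continue
    # along the new trial route in the next time period.
    return "follow_next_joint_route"
```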
  • step 204 includes: performing autonomous driving decision planning for the autonomous driving vehicle based at least on the driving intention of the obstacle vehicle and the second combined route with the highest recommended index.
  • in this case, the first joint route is the second combined route with the highest recommendation index.
  • the second combined route with the highest recommendation index includes the trial route of the autonomous driving vehicle and a second candidate route of the obstacle vehicle.
  • the autonomous driving vehicle drives along its trial route while obtaining relevant information of the obstacle vehicle.
  • the trial route of the autonomous driving vehicle is the expected driving trajectory of the autonomous driving vehicle and also the actual driving trajectory of the autonomous driving vehicle.
  • the second candidate route of the obstacle vehicle is the expected driving trajectory of the obstacle vehicle, and the relevant information of the obstacle vehicle is the actual driving trajectory of the obstacle vehicle.
  • the autonomous driving vehicle can perform autonomous driving decision planning according to the driving intention of the obstacle vehicle and the second combined route with the highest recommended index, so as to plan the second joint route; this can refer to the determination method of the first joint route and will not be repeated here.
  • the autonomous driving vehicle performs autonomous driving decision planning for the autonomous driving vehicle at least based on the driving intention of the obstacle vehicle and the first combined route with the highest recommended index.
  • the first combined route with the highest recommended index includes the first candidate route of the obstacle vehicle and the first candidate route of the autonomous driving vehicle, the first candidate route of the obstacle vehicle is the expected driving trajectory of the obstacle vehicle, and the first candidate route of the autonomous driving vehicle is the trial route of the autonomous driving vehicle, which is also the actual driving trajectory of the autonomous driving vehicle.
  • the autonomous driving vehicle determines the first joint route corresponding to the current time period in the previous time period of the current time period.
  • the autonomous driving vehicle is controlled to move according to the trial route in the first joint route, obtain the actual driving trajectory of the obstacle vehicle, and determine the second joint route corresponding to the next time period of the current time period based on the expected driving trajectory of the obstacle vehicle in the first joint route and the actual driving trajectory of the obstacle vehicle.
  • the contents executed based on the first joint route in the current time period are repeatedly executed based on the second joint route.
  • FIG. 7 is a schematic diagram of the framework of an autonomous driving decision-making planning method provided in an embodiment of the present application.
  • the framework includes forward imitation and online estimation, wherein forward imitation is used to generate a joint route.
  • the joint route includes the expected driving trajectory of the autonomous driving vehicle and the expected driving trajectory of the obstacle vehicle, which can be collectively referred to as the expected driving trajectory.
  • the expected driving trajectory corresponding to the current time period is the first joint route mentioned above
  • the expected driving trajectory corresponding to the next time period is the second joint route mentioned above.
  • the autonomous driving vehicle may generate a first joint route in the previous time period of the current time period so that the autonomous driving vehicle moves according to the expected driving trajectory (i.e., the trial route) of the autonomous driving vehicle in the first joint route in the current time period.
  • the autonomous driving vehicle can obtain the expected driving trajectory corresponding to the previous time period (i.e. the historical expected driving trajectory of the obstacle vehicle and the historical expected driving trajectory of the autonomous driving vehicle mentioned above), and on the other hand, the autonomous driving vehicle can observe the actual driving trajectory corresponding to the previous time period (i.e. the historical actual driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the autonomous driving vehicle mentioned above). Based on the expected driving trajectory and the actual driving trajectory, online estimation is performed to obtain the perfect Bayesian equilibrium error (i.e. the historical deviation information mentioned above).
  • since the autonomous driving vehicle moves according to the historical expected driving trajectory of the autonomous driving vehicle, the difference between the historical expected driving trajectory of the autonomous driving vehicle and its historical actual driving trajectory is small and can be ignored.
  • the historical expected driving trajectory of the autonomous driving vehicle is the historical actual driving trajectory of the autonomous driving vehicle.
  • the trajectory point distribution information of the obstacle vehicle and the trajectory point distribution information of the autonomous driving vehicle are calculated by the trajectory point distribution estimator, and the parameter distribution information of the reward function (i.e., the recommended indicator function mentioned above) is estimated by the reward parameter estimator.
  • multiple trajectory points of the obstacle vehicle can be generated based on the trajectory point distribution information of the obstacle vehicle, and a candidate route for the obstacle vehicle can be obtained by sampling the target trajectory points of the multiple trajectory points.
  • a candidate route for the autonomous driving vehicle can be obtained.
  • the reward function estimator generates multiple candidate parameters of the reward function based on the parameter distribution information of the reward function, and obtains the target parameters of the reward function by sampling from the multiple candidate parameters.
  • the recommended indicators of each combined route are determined based on the target parameters of the reward function, and the combined route with the highest recommended indicator is selected as the joint route corresponding to the current time period.
  • the expected driving trajectory of the autonomous driving vehicle can be determined based on the perfect Bayesian equilibrium error, the trajectory point distribution information of the obstacle vehicle can be calculated by the trajectory point distribution estimator, and the parameter distribution information of the reward function (i.e. the recommended indicator function mentioned above) can be estimated by the reward parameter estimator.
  • multiple trajectory points of the obstacle vehicle can be generated based on the trajectory point distribution information of the obstacle vehicle, and a candidate route for the obstacle vehicle can be obtained by sampling the target trajectory points of the multiple trajectory points.
  • in this way, the generation of a combined route (corresponding to the second combined route mentioned above) can be achieved.
  • the reward function estimator generates multiple candidate parameters of the reward function based on the parameter distribution information of the reward function, and obtains the target parameters of the reward function by sampling from the multiple candidate parameters.
  • the recommended indicators of each combined route are determined based on the target parameters of the reward function, and the combined route with the highest recommended indicator is selected as the joint route corresponding to the current time period.
  • the embodiment of the present application realizes that in the previous time period, based on the expected driving trajectory corresponding to the previous time period and the actual driving trajectory corresponding to the previous time period, the expected driving trajectory corresponding to the current time period is determined. Then, the autonomous driving vehicle moves in the current time period according to the expected driving trajectory of the autonomous driving vehicle corresponding to the current time period, and at the same time, observes the actual driving trajectories of the autonomous driving vehicle and the obstacle vehicle corresponding to the current time period.
  • the expected driving trajectory corresponding to the current time period and the actual driving trajectory corresponding to the current time period can be obtained, and in the current time period, based on the expected driving trajectory corresponding to the current time period and the actual driving trajectory corresponding to the current time period, the expected driving trajectory corresponding to the next time period (i.e., the second joint route) is determined.
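  • The per-time-period cycle described above can be summarized by the following sketch, in which `plan_joint_route` stands in for forward imitation plus online estimation and `drive_and_observe` stands in for trajectory tracking and perception; both are placeholders, not functions defined in the application.

```python
def planning_loop(plan_joint_route, drive_and_observe, num_periods):
    """Illustrative receding-horizon loop around the framework of FIG. 7.

    `plan_joint_route(expected, actual)` returns the joint route for the
    coming period as (ego trial route, obstacle expected trajectory) and must
    handle `None` inputs in the very first period.
    `drive_and_observe(trial_route)` drives the trial route for one period and
    returns the observed actual trajectories of both vehicles.
    """
    expected, actual = None, None
    for _ in range(num_periods):
        # Plan the joint route for the coming period from the previous
        # period's expected and actual trajectories.
        ego_trial_route, obstacle_expected = plan_joint_route(expected, actual)

        # Drive along the trial route for one period while observing the
        # actual trajectories of the ego and obstacle vehicles.
        observed = drive_and_observe(ego_trial_route)

        # These become the inputs of the next planning step (online estimation
        # of the perfect Bayesian equilibrium error, etc.).
        expected = (ego_trial_route, obstacle_expected)
        actual = observed
```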
  • Figure 8 is a schematic diagram of an autonomous driving decision-making plan provided by an embodiment of the present application.
  • the road includes obstacle vehicles A to C and an autonomous driving vehicle D.
  • the embodiment of the present application can plan that the expected driving trajectory of the autonomous driving vehicle is to drive on the left side of the road close to itself (as shown by the dotted line).
  • the autonomous driving vehicle tentatively moves according to the expected driving trajectory of the autonomous driving vehicle within a time period, and tracks the actual driving trajectory of obstacle vehicles A to C in real time.
  • the expected driving trajectory of the autonomous driving vehicle D in the next time period is determined based on the actual driving trajectory of obstacle vehicles A to C.
  • the autonomous vehicle D tentatively turns left and observes the reactions of the bicycles (i.e., obstacle vehicles A to C). Based on the reactions of the bicycles, it determines whether it is safe to continue turning left. If it is safe, it continues to turn left, thereby achieving safe driving.
  • the automatic driving decision-making planning method provided in the embodiment of the present application can be applied to any traffic scene, for example, it can be applied to narrow road scenes, scenes where the automatic driving vehicle identifies that the driving intention of the obstacle vehicle is to drive in the middle of the road, etc.
  • the narrow road scene is a scene in which the drivable width of the road is less than a width threshold, for example, a secondary road or a road with many vehicles parked on both sides.
  • in general, an obstacle vehicle will choose to drive on the left or right side of its own road; for an obstacle vehicle driving in the middle of the road, the autonomous driving vehicle can identify that the driving intention of the obstacle vehicle is to drive in the middle of the road.
  • the automatic driving vehicle A and the obstacle vehicle B are both driving towards each other in the middle of the road, and the movement trajectory of the obstacle vehicle B is approximately straight.
  • the automatic driving vehicle A can recognize that the obstacle vehicle B is driving in the middle of the road and driving towards the automatic driving vehicle A, but the automatic driving vehicle A cannot determine whether the obstacle vehicle B wants to drive on the left side of its own road (i.e., B-m1) or on the right side of its own road (i.e., B-m2).
  • when the autonomous driving vehicle identifies that the driving intention of the obstacle vehicle is to drive in the middle of the road, the autonomous driving vehicle will jointly plan the expected driving trajectory of the autonomous driving vehicle and the expected driving trajectory of the obstacle vehicle, find the best cooperation strategy for all subjects, and continue to move according to the expected driving trajectory of the autonomous driving vehicle within a time period.
  • the autonomous driving vehicle A will jointly plan the expected trajectory of the autonomous driving vehicle A as A-m2 and the expected driving trajectory of the obstacle vehicle B as B-m2, and the autonomous driving vehicle will continue to move according to A-m2 within a time period to induce the obstacle vehicle B to move close to B-m2.
  • after the time period ends, if the actual driving trajectory of the obstacle vehicle is consistent with the expected driving trajectory of the obstacle vehicle, the autonomous driving vehicle cooperates to complete the meeting based on the captured driving intention of the obstacle vehicle; if the actual driving trajectory of the obstacle vehicle is inconsistent with the expected driving trajectory of the obstacle vehicle, the autonomous driving vehicle needs to jointly plan the expected driving trajectory of the autonomous driving vehicle and the expected driving trajectory of the obstacle vehicle again to ensure the driving safety of the autonomous driving vehicle. For example, in Figure 3, the obstacle vehicle B is still moving straight and the distance between the autonomous driving vehicle and the obstacle vehicle B is relatively short; the autonomous driving vehicle can then jointly plan that its expected driving trajectory is a stationary trajectory, while the expected trajectory of the obstacle vehicle is a trajectory that drives on its left side.
  • the autonomous driving vehicle will continuously move according to the expected driving trajectory of the autonomous driving vehicle within a time period. After the time period ends, the expected driving trajectory of the autonomous driving vehicle and the expected driving trajectory of the obstacle vehicle will be jointly planned again according to the actual driving trajectory of the obstacle vehicle.
  • This control method will improve the planning efficiency of the joint route.
  • when the obstacle vehicle shows the intention of meeting the vehicle at either the time level or the space level, the feasible trajectories of the obstacle vehicle are reduced by half from the perspective of the autonomous driving vehicle; when the obstacle vehicle shows the intention of meeting the vehicle at the time level and the intention of meeting the vehicle at the space level at the same time, the feasible trajectories of the obstacle vehicle are reduced by 3/4.
  • the autonomous driving vehicle continuously moves according to the expected driving trajectory of the autonomous driving vehicle within a time period to guide the obstacle vehicle to show the driving intention as soon as possible, so as to reduce the feasible trajectory of the obstacle vehicle and increase the feasible trajectory of the autonomous driving vehicle.
  • the expected driving trajectory of the autonomous driving vehicle and the expected driving trajectory of the obstacle vehicle are jointly planned again according to the actual driving trajectory of the obstacle vehicle. This can not only improve the efficiency of joint planning and ensure real-time performance, but also increase the probability of the autonomous driving vehicle executing an efficient trajectory and improve the driving efficiency of the autonomous driving vehicle.
  • The data involved in this application (including but not limited to data for analysis, stored data, displayed data, etc.) and the related signals are all authorized by the user or fully authorized by all parties, and the collection, use and processing of the relevant data comply with the relevant laws, regulations and standards of the relevant countries and regions. For example, the actual driving trajectories and expected driving trajectories involved in this application are all obtained with full authorization.
  • when there is an obstacle vehicle in the environment where the autonomous driving vehicle is located, the autonomous driving vehicle is controlled to travel along the trial route, and the obstacle vehicle is guided to move by actively displaying the driving intention of the autonomous driving vehicle, so that the obstacle vehicle can display its own driving intention as soon as possible.
  • the relevant information of the obstacle vehicle is obtained, and the driving intention of the obstacle vehicle is determined through the relevant information of the obstacle vehicle, so that the autonomous driving vehicle can capture the driving intention of the obstacle vehicle in advance.
  • since the autonomous driving decision planning is carried out for the autonomous driving vehicle according to the driving intention of the obstacle vehicle, this not only improves the intelligence level of the autonomous driving vehicle, but also helps to improve the driving safety of the autonomous driving vehicle.
  • FIG9 is a schematic diagram of the structure of an automatic driving decision-making and planning device provided in an embodiment of the present application. As shown in FIG9 , the device includes:
  • the control module 901 is used to control the autonomous driving vehicle to drive along a trial route in response to the presence of an obstacle vehicle in the environment where the autonomous driving vehicle is located.
  • the obstacle vehicle refers to a vehicle that conflicts with the driving route of the autonomous driving vehicle, that is, the driving route of the obstacle vehicle conflicts with the driving route of the autonomous driving vehicle.
  • An acquisition module 902 is used to acquire relevant information of an obstacle vehicle while the autonomous driving vehicle is driving along the trial route;
  • a determination module 903 is used to determine the driving intention of the obstacle vehicle according to the relevant information of the obstacle vehicle;
  • the planning module 904 is used to make an autonomous driving decision plan for the autonomous driving vehicle at least according to the driving intention of the obstacle vehicle.
  • the device further includes:
  • the acquisition module 902 is further used to acquire the historical actual driving trajectory of the autonomous driving vehicle, the historical actual driving trajectory of the obstacle vehicle, and the historical expected driving trajectory of the obstacle vehicle.
  • the historical expected driving trajectory of the obstacle vehicle is estimated based on the driving route of the obstacle vehicle. For example, the historical expected driving trajectory of the obstacle vehicle is estimated by the autonomous driving vehicle based on the driving route of the obstacle vehicle.
  • the determination module 903 is further used to determine the historical deviation information based on the historical expected driving trajectory of the obstacle vehicle and the historical actual driving trajectory of the obstacle vehicle;
  • the determination module 903 is also used to determine a trial route based on the historical actual driving trajectory and historical deviation information of the autonomous driving vehicle.
  • the determination module 903 is used to determine, in response to the historical deviation information being less than a first threshold, at least one first candidate route of the obstacle vehicle and at least one first candidate route of the autonomous driving vehicle based on the historical actual driving trajectory of the autonomous driving vehicle and the historical actual driving trajectory of the obstacle vehicle; combine at least one first candidate route of the obstacle vehicle and at least one first candidate route of the autonomous driving vehicle to obtain at least one first combined route, any first combined route including a first candidate route of the obstacle vehicle and a first candidate route of the autonomous driving vehicle; determine the recommended index of each first combined route; select the first combined route with the highest recommended index from the at least one first combined route, and use the first candidate route of the autonomous driving vehicle included in the first combined route with the highest recommended index as a trial route.
  • the determination module 903 is used to determine the trajectory point distribution information of the obstacle vehicle and the trajectory point distribution information of the autonomous driving vehicle based on the historical actual driving trajectory of the autonomous driving vehicle and the historical actual driving trajectory of the obstacle vehicle; for a target subject, generate multiple trajectory points of the target subject based on the trajectory point distribution information of the target subject, where the target subject is the obstacle vehicle or the autonomous driving vehicle; sample multiple target trajectory points of the target subject from the multiple trajectory points of the target subject; and generate at least one first candidate route of the target subject based on the multiple target trajectory points of the target subject. For example, one or more first candidate routes of the target subject are generated.
  • the determination module 903 is used to determine the parameter distribution information of the recommended index function based on the historical actual driving trajectory of the obstacle vehicle, and the recommended index function is used to determine the recommended index of the first combined route; based on the parameter distribution information of the recommended index function, generate multiple candidate parameters of the recommended index function; sample the target parameters of the recommended index function from the multiple candidate parameters of the recommended index function; and determine the recommended index of each first combined route based on the target parameters of the recommended index function.
  • the determination module 903 is used to obtain, for any first combined route, at least one piece of reference information of the first combined route, where any piece of reference information is any one of comfort, safety, speed of the autonomous driving vehicle, uncertainty, politeness, and circulation; comfort is used to describe acceleration, safety is used to describe collision information, uncertainty is used to describe the concentration of trajectory points (the trajectory points are trajectory points of the target subject, and the target subject is the obstacle vehicle or the autonomous driving vehicle), politeness is used to describe the impact of the autonomous driving vehicle on the movement of the obstacle vehicle, and circulation is used to describe the average speed of vehicles in the environment where the autonomous driving vehicle is located; the determination module 903 then determines the recommendation index of the first combined route based on each piece of reference information of the first combined route and the target parameter of the recommendation index function corresponding to each piece of reference information.
  • the determination module 903 is used to obtain at least one mapping relationship in response to the historical deviation information being not less than a first threshold value, any one of which is used to describe the mapping relationship between a driving trajectory set and a reference route, the driving trajectory set including at least one driving trajectory; select a target mapping relationship from at least one mapping relationship that matches the driving trajectory set with the historical actual driving trajectory of the autonomous driving vehicle and the historical actual driving trajectory of the obstacle vehicle (or, select a target mapping relationship from at least one mapping relationship, the driving trajectory set corresponding to the target mapping relationship includes at least one driving trajectory that matches the historical actual driving trajectory of the autonomous driving vehicle and the historical actual driving trajectory of the obstacle vehicle); and determine the reference route corresponding to the target mapping relationship as a trial route.
  • the determination module 903 is further configured to determine at least one second candidate route of the obstacle vehicle based on the trial route and the historical actual driving trajectory of the obstacle vehicle; combine any second candidate route of the obstacle vehicle with the trial route to obtain any second combined route; determine a recommended index for each second combined route, and select a second combined route with the highest recommended index from each second combined route;
  • the planning module 904 is used to perform autonomous driving decision planning for the autonomous driving vehicle based on at least the driving intention of the obstacle vehicle and the second combined route with the highest recommended index.
  • the determination module 903 is used to determine, for any obstacle vehicle, deviation information between a historical expected driving trajectory of any obstacle vehicle and a historical actual driving trajectory of any obstacle vehicle; and determine historical deviation information based on the deviation information between the historical expected driving trajectory of each obstacle vehicle and the historical actual driving trajectory of each obstacle vehicle.
  • the historical expected driving trajectory of any obstacle vehicle includes expected trajectory points at multiple moments, and the historical actual driving trajectory of any obstacle vehicle includes actual trajectory points at multiple moments;
  • the determination module 903 is used to determine, for any moment, the distance between the expected trajectory point and the actual trajectory point corresponding to any moment based on the position information of the expected trajectory point at any moment and the position information of the actual trajectory point at any moment; and determine the deviation information between the historical expected driving trajectory of any obstacle vehicle and the historical actual driving trajectory of any obstacle vehicle based on the distance between the expected trajectory point and the actual trajectory point corresponding to each moment.
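  • As an illustration of this deviation computation, the sketch below averages the per-moment distances between expected and actual trajectory points; the averaging (both per vehicle and across obstacle vehicles) is an assumption, since the application only requires the deviation information to be determined based on those distances.

```python
import math

def trajectory_deviation(expected_points, actual_points):
    """Per-vehicle deviation between a historical expected driving trajectory
    and the historical actual driving trajectory.  Both arguments are
    chronological lists of (x, y) points at matching moments; averaging the
    per-moment distances is an illustrative choice.
    """
    distances = [math.dist(e, a) for e, a in zip(expected_points, actual_points)]
    return sum(distances) / len(distances) if distances else 0.0

def historical_deviation(per_vehicle_pairs):
    """Historical deviation information aggregated over all obstacle vehicles
    (here the mean of the per-vehicle deviations, again an assumption)."""
    deviations = [trajectory_deviation(e, a) for e, a in per_vehicle_pairs]
    return sum(deviations) / len(deviations) if deviations else 0.0
```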
  • the determination module 903 is used to determine the intention of the obstacle vehicle in the time dimension according to the relevant information of the obstacle vehicle; determine the intention of the obstacle vehicle in the space dimension according to the relevant information of the obstacle vehicle; and determine the intention of the obstacle vehicle in the time dimension and the intention of the obstacle vehicle in the space dimension as the driving intention of the obstacle vehicle.
  • the planning module 904 is used to determine a target driving route of the obstacle vehicle at least according to the driving intention of the obstacle vehicle in response to a change in the driving intention of the obstacle vehicle; and determine a target driving route of the autonomous driving vehicle based on the target driving route of the obstacle vehicle.
  • the planning module 904 is used to obtain the distance between the autonomous driving vehicle and the obstacle vehicle in response to the obstacle vehicle's driving intention not changing; if the distance between the autonomous driving vehicle and the obstacle vehicle is less than a distance threshold, control the autonomous driving vehicle to stop driving.
  • when there is an obstacle vehicle in the environment where the autonomous driving vehicle is located, the autonomous driving vehicle is controlled to travel along the trial route, and the obstacle vehicle is guided to move by actively displaying the driving intention of the autonomous driving vehicle, so that the obstacle vehicle can display its own driving intention as soon as possible.
  • the relevant information of the obstacle vehicle is obtained, and the driving intention of the obstacle vehicle is determined through the relevant information of the obstacle vehicle, so that the autonomous driving vehicle can capture the driving intention of the obstacle vehicle in advance.
  • since the autonomous driving decision planning is carried out for the autonomous driving vehicle according to the driving intention of the obstacle vehicle, this not only improves the intelligence level of the autonomous driving vehicle, but also helps to improve the driving safety of the autonomous driving vehicle.
  • when the device provided in FIG. 9 above implements its functions, the division into the above functional modules is only used as an example for illustration.
  • the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • the device and method embodiments provided in the above embodiments belong to the same concept, and their specific implementation process is detailed in the method embodiment, which will not be repeated here.
  • FIG10 shows a block diagram of a terminal device 1000 provided by an exemplary embodiment of the present application.
  • the terminal device 1000 includes: a processor 1001 and a memory 1002 .
  • Processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
  • Processor 1001 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • Processor 1001 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in an awake state, also known as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state.
  • the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen.
  • the processor 1001 may also include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 1002 may include one or more computer-readable storage media, which may be non-transitory; a non-transitory computer-readable storage medium may also be referred to as a non-temporary computer-readable storage medium.
  • the memory 1002 may also include a high-speed random access memory, and a non-volatile memory, such as one or more disk storage devices, flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1002 is used to store at least one computer program, which is used to be executed by the processor 1001 to implement the autonomous driving decision planning method provided in the method embodiment of the present application.
  • the terminal device 1000 may further optionally include: a peripheral device interface 1003 and at least one peripheral device.
  • the processor 1001, the memory 1002 and the peripheral device interface 1003 may be connected via a bus or a signal line.
  • Each peripheral device may be connected to the peripheral device interface 1003 via a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1004, a display screen 1005, a camera assembly 1006, an audio circuit 1007 and a power supply 1008.
  • the peripheral device interface 1003 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1001 and the memory 1002.
  • the processor 1001, the memory 1002, and the peripheral device interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral device interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 1004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals.
  • the radio frequency circuit 1004 communicates with the communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1004 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 1004 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and the like.
  • the radio frequency circuit 1004 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes, but is not limited to: the World Wide Web, a metropolitan area network, an intranet, various generations of mobile communication networks (2G, 3G, 4G and 5G), a wireless local area network and/or a WiFi (Wireless Fidelity) network.
  • the radio frequency circuit 1004 may also include circuits related to NFC (Near Field Communication), which is not limited in this application.
  • the display screen 1005 is used to display a UI (User Interface).
  • the UI may include graphics, text, icons, videos, and any combination thereof.
  • the display screen 1005 also has the ability to collect touch signals on the surface or above the surface of the display screen 1005.
  • the touch signal can be input as a control signal to the processor 1001 for processing.
  • the display screen 1005 can also be used to provide virtual buttons and/or virtual keyboards, also known as soft buttons and/or soft keyboards.
  • in some embodiments, there is one display screen 1005, which is set on the front panel of the terminal device 1000; in other embodiments, there are at least two display screens 1005, which are respectively set on different surfaces of the terminal device 1000 or folded; in still other embodiments, the display screen 1005 can be a flexible display screen, which is set on a curved surface or folded surface of the terminal device 1000. The display screen 1005 can even be set to a non-rectangular irregular shape, that is, a special-shaped screen.
  • the display screen 1005 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera assembly 1006 is used to capture images or videos.
  • the camera assembly 1006 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • in some embodiments, there are at least two rear cameras, each of which is any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize the background blur function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting or other fusion shooting functions.
  • the camera assembly 1006 may also include a flash.
  • the audio circuit 1007 may include a microphone and a speaker.
  • the microphone is used to collect sound waves from the user and the environment, and convert the sound waves into electrical signals and input them into the processor 1001 for processing, or input them into the RF circuit 1004 to achieve voice communication.
  • the microphone may also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signals from the processor 1001 or the RF circuit 1004 into sound waves.
  • the audio circuit 1007 may also include a headphone jack.
  • the power supply 1008 is used to supply power to various components in the terminal device 1000.
  • the power supply 1008 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • the terminal device 1000 further includes one or more sensors 1009 .
  • the one or more sensors 1009 include, but are not limited to: an acceleration sensor 1011 , a gyroscope sensor 1012 , a pressure sensor 1013 , an optical sensor 1014 , and a proximity sensor 1015 .
  • the acceleration sensor 1011 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal device 1000.
  • the acceleration sensor 1011 can be used to detect the components of gravity acceleration on the three coordinate axes.
  • the processor 1001 can control the display screen 1005 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 1011.
  • the acceleration sensor 1011 can also be used for collecting game or user motion data.
  • the gyro sensor 1012 can detect the body direction and rotation angle of the terminal device 1000, and the gyro sensor 1012 can cooperate with the acceleration sensor 1011 to collect the user's 3D actions on the terminal device 1000.
  • the processor 1001 can implement the following functions based on the data collected by the gyro sensor 1012: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1013 can be set on the side frame of the terminal device 1000 and/or the lower layer of the display screen 1005.
  • the processor 1001 performs left and right hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 1013.
  • the processor 1001 controls the operability controls on the UI interface according to the user's pressure operation on the display screen 1005.
  • the operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the optical sensor 1014 is used to collect the ambient light intensity.
  • the processor 1001 can control the display brightness of the display screen 1005 according to the ambient light intensity collected by the optical sensor 1014. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the display screen 1005 is reduced.
  • the processor 1001 can also dynamically adjust the shooting parameters of the camera assembly 1006 according to the ambient light intensity collected by the optical sensor 1014.
  • the proximity sensor 1015 also called a distance sensor, is usually arranged on the front panel of the terminal device 1000.
  • the proximity sensor 1015 is used to collect the distance between the user and the front of the terminal device 1000.
  • when the proximity sensor 1015 detects that the distance between the user and the front of the terminal device 1000 is gradually decreasing, the processor 1001 controls the display screen 1005 to switch from the screen-on state to the screen-off state; when the proximity sensor 1015 detects that the distance between the user and the front of the terminal device 1000 is gradually increasing, the processor 1001 controls the display screen 1005 to switch from the screen-off state to the screen-on state.
  • FIG. 10 does not limit the terminal device 1000 and may include more or fewer components than shown in the figure, or combine certain components, or adopt a different component arrangement.
  • FIG11 is a schematic diagram of the structure of the server provided in the embodiment of the present application.
  • the server 1100 may have relatively large differences due to different configurations or performances, and may include one or more processors 1101 and one or more memories 1102, wherein the one or more memories 1102 store at least one computer program, and the at least one computer program is loaded and executed by the one or more processors 1101 to implement the automatic driving decision planning method provided by the above-mentioned various method embodiments.
  • the processor 1101 is a CPU.
  • the server 1100 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output.
  • the server 1100 may also include other components for implementing device functions, which will not be described in detail here.
  • a non-transitory computer-readable storage medium in which at least one computer program is stored.
  • the at least one computer program is loaded and executed by a processor to enable the autonomous driving vehicle to implement any of the above-mentioned autonomous driving decision planning methods.
  • the above-mentioned non-transitory computer-readable storage medium can be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a computer program in which at least one computer instruction is stored, and the at least one computer instruction is loaded and executed by a processor to enable an autonomous driving vehicle to implement any of the above-mentioned autonomous driving decision planning methods.
  • a computer program or a computer program product is also provided, in which at least one computer program is stored, and the at least one computer program is loaded and executed by a processor to enable an autonomous driving vehicle to implement any of the above-mentioned autonomous driving decision planning methods.
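
Taken together, the sensor items in the list above describe a simple read-compare-react pattern: the processor samples a sensor, compares the value against a threshold or against the previous reading, and adjusts the display accordingly. The sketch below is a minimal Python illustration of that pattern only; the SensorReadings and Display classes, the thresholds, and the update_display function are hypothetical stand-ins introduced here for illustration, not interfaces defined in this application.

    # Minimal sketch of the sensor-driven display behaviours described above.
    # Every name and threshold here is a hypothetical illustration, not an API
    # from this application.
    from dataclasses import dataclass

    @dataclass
    class SensorReadings:
        gravity_x: float    # gravity component along the x axis (m/s^2)
        gravity_y: float    # gravity component along the y axis (m/s^2)
        ambient_lux: float  # ambient light intensity (lux)
        proximity_m: float  # distance between the user and the front panel (m)

    @dataclass
    class Display:
        orientation: str = "portrait"
        brightness: float = 0.5   # normalised 0..1
        screen_on: bool = True

    def update_display(display: Display, r: SensorReadings, prev_proximity: float) -> None:
        # Orientation: show the view whose axis carries most of the gravity signal.
        display.orientation = "landscape" if abs(r.gravity_x) > abs(r.gravity_y) else "portrait"

        # Brightness: higher ambient light leads to higher display brightness (clamped to 0.1..1.0).
        display.brightness = min(1.0, max(0.1, r.ambient_lux / 1000.0))

        # Proximity: a shrinking distance switches the screen off, a growing one switches it back on.
        if r.proximity_m < prev_proximity:
            display.screen_on = False
        elif r.proximity_m > prev_proximity:
            display.screen_on = True

A caller would invoke update_display once per sampling cycle, passing in the proximity value recorded in the previous cycle.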

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method for autonomous driving decision planning: in response to the presence of an obstacle vehicle in the environment where an autonomous driving vehicle is located, controls the autonomous driving vehicle to travel along a trial route; in the process of the autonomous driving vehicle travelling along the trial route, obtains relevant information of the obstacle vehicle; determines a driving intention of the obstacle vehicle according to the relevant information of the obstacle vehicle; and performs autonomous driving decision planning on the autonomous driving vehicle at least according to the driving intention of the obstacle vehicle.
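
Stated procedurally, the abstract amounts to a probe-observe-infer-plan loop. The Python sketch below restates that loop under assumed interfaces; every name in it (find_conflicting_vehicle, drive_along, observe, infer_intention, plan, and the Intention labels) is a placeholder chosen for illustration and is not terminology from the application.

    # Hedged restatement of the decision-planning loop summarised in the abstract.
    # The ego/environment interfaces are assumed placeholders, not the application's API.
    from enum import Enum, auto

    class Intention(Enum):
        YIELD = auto()
        PROCEED = auto()
        UNKNOWN = auto()

    def decision_planning_step(ego, environment) -> None:
        # 1. Look for an obstacle vehicle whose driving route conflicts with the ego route.
        obstacle = environment.find_conflicting_vehicle(ego.route)
        if obstacle is None:
            return
        # 2. Probe by driving along the trial route.
        ego.drive_along(ego.trial_route)
        # 3. Gather relevant information about the obstacle vehicle while probing.
        observations = environment.observe(obstacle)
        # 4. Infer the obstacle vehicle's driving intention from those observations.
        intention = infer_intention(observations)
        # 5. Re-plan at least according to the inferred intention.
        ego.plan(intention)

    def infer_intention(observations) -> Intention:
        # Placeholder heuristic: treat a decelerating obstacle vehicle as yielding.
        if observations.acceleration < 0:
            return Intention.YIELD
        if observations.acceleration > 0:
            return Intention.PROCEED
        return Intention.UNKNOWN
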
PCT/CN2023/086587 2022-10-24 2023-04-06 Autonomous driving decision planning and autonomous driving vehicle WO2024087522A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211303615.2A CN117962917A (zh) Autonomous driving decision planning method and autonomous driving vehicle
CN202211303615.2 2022-10-24

Publications (1)

Publication Number Publication Date
WO2024087522A1 true WO2024087522A1 (fr) 2024-05-02

Family

ID=90829839

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/086587 WO2024087522A1 (fr) Autonomous driving decision planning and autonomous driving vehicle

Country Status (2)

Country Link
CN (1) CN117962917A (fr)
WO (1) WO2024087522A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113635912A (zh) * 2021-09-10 2021-11-12 阿波罗智能技术(北京)有限公司 Vehicle control method, apparatus and device, storage medium, and autonomous driving vehicle
CN113715814A (zh) * 2021-09-02 2021-11-30 北京百度网讯科技有限公司 Collision detection method and apparatus, electronic device, medium, and autonomous driving vehicle
CN113799797A (zh) * 2021-07-27 2021-12-17 北京三快在线科技有限公司 Trajectory planning method and apparatus, storage medium, and electronic device
CN114852103A (zh) * 2022-05-23 2022-08-05 北京小马睿行科技有限公司 Method and apparatus for determining a vehicle driving strategy, and vehicle
CN115214722A (zh) * 2022-08-15 2022-10-21 阿波罗智联(北京)科技有限公司 Autonomous driving method and apparatus, electronic device, storage medium, and vehicle

Also Published As

Publication number Publication date
CN117962917A (zh) 2024-05-03

Similar Documents

Publication Publication Date Title
CN112400150B (zh) Dynamic graphics rendering based on predicted saccade landing points
EP3491493B1 (fr) Gesture control of autonomous vehicles
US11858148B2 Robot and method for controlling the same
CN111566612A (zh) Visual data acquisition system based on posture and line of sight
US10620720B2 Input controller stabilization techniques for virtual reality systems
CN112307642B (zh) Data processing method, apparatus and system, computer device, and storage medium
CN103608741A (zh) Tracking and following of a moving object by a mobile robot
WO2022226736A1 (fr) Multi-screen interaction method and apparatus, terminal device, and vehicle
JP7419495B2 (ja) Projection method and projection system
KR102639904B1 (ko) Airport robot and operation method thereof
US20190354178A1 Artificial intelligence device capable of being controlled according to user action and method of operating the same
CN108627176A (zh) Screen brightness adjustment method and related product
JP2022089774A (ja) Device and method for monitoring a vehicle driver
KR20210057358A (ko) Gesture recognition method and gesture recognition device performing the same
KR20190098102A (ko) Artificial intelligence device for controlling an external device
CN110614634B (zh) Control method, portable terminal, and storage medium
CN112915541B (zh) Jump point search method, apparatus, device, and storage medium
CN111038497B (zh) Autonomous driving control method and apparatus, vehicle-mounted terminal, and readable storage medium
WO2024087522A1 (fr) Autonomous driving decision planning and autonomous driving vehicle
CN113110487A (zh) Vehicle simulation control method and apparatus, electronic device, and storage medium
US11170539B2 Information processing device and information processing method
JP7280888B2 (ja) Electronic device determination method, system, computer system, and readable storage medium
CN113734199B (zh) Vehicle control method and apparatus, terminal, and storage medium
CN113734167B (zh) Vehicle control method and apparatus, terminal, and storage medium
WO2024087456A1 (fr) Determination of orientation information and autonomous vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23881119

Country of ref document: EP

Kind code of ref document: A1