US11987269B2 - Safe non-conservative planning for autonomous vehicles - Google Patents
- Publication number
- US11987269B2 (application US17/514,197)
- Authority
- US
- United States
- Prior art keywords
- risk
- objective
- budget
- autonomous vehicle
- planned
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Links
- 230000009471 action Effects 0.000 claims abstract description 99
- 238000000034 method Methods 0.000 claims abstract description 46
- 230000003247 decreasing effect Effects 0.000 claims abstract description 15
- 230000000246 remedial effect Effects 0.000 claims description 19
- 238000011156 evaluation Methods 0.000 claims description 10
- 230000001965 increasing effect Effects 0.000 claims description 8
- 239000003795 chemical substances by application Substances 0.000 description 28
- 238000003860 storage Methods 0.000 description 22
- 238000004891 communication Methods 0.000 description 21
- 230000006870 function Effects 0.000 description 18
- 230000007246 mechanism Effects 0.000 description 14
- 230000033001 locomotion Effects 0.000 description 13
- 238000013459 approach Methods 0.000 description 12
- 230000008569 process Effects 0.000 description 11
- 230000001667 episodic effect Effects 0.000 description 10
- 238000005096 rolling process Methods 0.000 description 9
- 230000001133 acceleration Effects 0.000 description 8
- 230000006399 behavior Effects 0.000 description 8
- 230000005540 biological transmission Effects 0.000 description 6
- 238000010586 diagram Methods 0.000 description 6
- 238000009826 distribution Methods 0.000 description 6
- 238000005538 encapsulation Methods 0.000 description 6
- 230000003068 static effect Effects 0.000 description 6
- 230000003287 optical effect Effects 0.000 description 5
- 238000012545 processing Methods 0.000 description 5
- 230000007704 transition Effects 0.000 description 5
- 238000013500 data storage Methods 0.000 description 4
- 230000007423 decrease Effects 0.000 description 4
- 230000000977 initiatory effect Effects 0.000 description 4
- 238000005457 optimization Methods 0.000 description 4
- 238000004088 simulation Methods 0.000 description 4
- 230000006698 induction Effects 0.000 description 3
- 230000001939 inductive effect Effects 0.000 description 3
- 238000002955 isolation Methods 0.000 description 3
- 239000000203 mixture Substances 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 230000003190 augmentative effect Effects 0.000 description 2
- 229910052802 copper Inorganic materials 0.000 description 2
- 239000010949 copper Substances 0.000 description 2
- 239000013307 optical fiber Substances 0.000 description 2
- 230000008520 organization Effects 0.000 description 2
- 239000007787 solid Substances 0.000 description 2
- 238000007476 Maximum Likelihood Methods 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000013475 authorization Methods 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000005094 computer simulation Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 238000013501 data transformation Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 230000003203 everyday effect Effects 0.000 description 1
- 230000008713 feedback mechanism Effects 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000001976 improved effect Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000012804 iterative process Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 230000006855 networking Effects 0.000 description 1
- 230000002040 relaxant effect Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 230000005641 tunneling Effects 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0015—Planning or execution of driving tasks specially adapted for safety
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/18—Propelling the vehicle
- B60W30/18009—Propelling the vehicle related to particular drive situations
- B60W30/18145—Cornering
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/18—Propelling the vehicle
- B60W30/18009—Propelling the vehicle related to particular drive situations
- B60W30/18154—Approaching an intersection
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0011—Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3453—Special cost functions, i.e. other than distance or default speed limit of road segments
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2754/00—Output or target parameters relating to objects
- B60W2754/10—Spatial relation or speed relative to objects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
- G01C21/343—Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
Definitions
- An autonomous agent is a set of hardware and/or software configured to control a physical mechanism.
- a device under control of an autonomous agent may be referred to as a “robot,” an “autonomous vehicle,” and/or another such term.
- a vehicle (e.g., an automobile, aircraft, or water vehicle) may include an autonomous agent that controls steering, braking, acceleration, and/or some other physical mechanism of the vehicle, allowing the vehicle to be wholly or partially self-driving.
- An autonomous agent receives information about the physical environment from one or more sensors and uses the information to help determine how to control the physical mechanism. For example, if data from a sensor indicates an obstruction in the path of a self-driving vehicle, an autonomous agent may instruct the vehicle to brake and/or turn.
- the present disclosure relates generally to planning for autonomous vehicles.
- one or more non-transitory computer-readable media store instructions that, when executed by one or more processors, cause the one or more processors to perform operations including: obtaining a first risk budget constraining a first plan for an autonomous vehicle to satisfy a first objective; based at least on the first risk budget and the first objective, planning a first planned trajectory of the autonomous vehicle toward the first objective, at least by: (a) determining a first risk cost associated with a first initial planned action of the first planned trajectory, (b) based at least on the first risk cost, determining whether the first planned trajectory is feasible or infeasible within the first risk budget, and (c) responsive to determining that the first planned trajectory is feasible within the first risk budget, executing the first initial planned action of the first planned trajectory; decreasing the first risk budget by the first risk cost associated with the first initial planned action, to obtain a first remaining risk budget; obtaining first state data corresponding to a state of the autonomous vehicle after executing the first initial planned action of the first planned trajectory; and based at least on the first state data, the first remaining risk budget, and the first objective, planning a second trajectory of the autonomous vehicle toward the first objective.
- the operations may further include: incrementing the first remaining risk budget to cover an interval risk bound.
- Planning the second trajectory of the autonomous vehicle toward the first objective may include: (a) determining a second risk cost associated with a second initial planned action of the second planned trajectory, (b) based at least on the second risk cost, determining whether the second planned trajectory is feasible or infeasible within the first remaining risk budget, and (c) responsive to determining that the second planned trajectory is infeasible within the first remaining risk budget, executing a remedial action and refraining from executing the second initial planned action.
- the remedial action may include an emergency stop.
- the emergency stop may be designed to satisfy a passive safety constraint associated with the autonomous vehicle, by which the autonomous vehicle is stationary in the event of a collision.
- Determining whether the first planned trajectory is feasible or infeasible within the first risk budget may include determining whether or not, if the first initial planned action is executed, an emergency stop can be executed within the first remaining risk budget.
- Planning the second trajectory of the autonomous vehicle toward the first objective may include: (a) determining a second risk cost associated with a second initial planned action of the second planned trajectory, (b) based at least on the second risk cost, determining whether the second planned trajectory is feasible or infeasible within the first remaining risk budget, and (c) responsive to determining that the second planned trajectory is feasible within the first remaining risk budget, executing the second initial planned action of the second planned trajectory.
- the operations may further include: decreasing the first remaining risk budget by the second risk cost associated with the second initial planned action, to obtain a second remaining risk budget; obtaining second state data corresponding to a state of the autonomous vehicle after executing the second initial planned action of the second planned trajectory; and based at least on the second state data, the second remaining risk budget, and the first objective, planning a third trajectory of the autonomous vehicle toward the first objective.
- the operations may further include: iteratively reducing the first remaining risk budget by risk costs associated with executed actions, until the autonomous vehicle satisfies the first objective or is unable to proceed toward the first objective within the remaining risk budget.
- the operations may further include: determining a second risk budget constraining a second plan for the autonomous vehicle to satisfy a second objective; and upon satisfying the first objective, increasing the second risk budget by an unallocated amount of the first remaining risk budget.
- the operations may further include: computing a safety evaluation metric for the autonomous vehicle, based at least on the first objective and an amount of the first risk budget used toward satisfying the first objective.
- the operations may further include: based at least on the first remaining risk budget, generating a safety alert.
- the first objective may include a route plan, and executing the first initial planned action may include defining a waypoint among multiple waypoints of the route plan.
- the first objective may include a physical destination, and executing the first initial planned action may include applying a control to the autonomous vehicle.
- the first risk budget may constrain the first plan for the autonomous vehicle to a threshold probability of one or more kinds of safety violations over one or more of travel distance of the autonomous vehicle or travel time of the autonomous vehicle.
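The feasibility test above — an initial action is executable only if an emergency stop would still fit within the remaining budget afterward — can be sketched as a one-line check. The function name and numeric risk costs below are hypothetical illustrations, not values from the patent:

```python
def action_is_feasible(action_risk, stop_risk, remaining_budget):
    """Hypothetical feasibility check: executing the action must leave
    enough risk budget to still execute an emergency stop afterward."""
    return action_risk + stop_risk <= remaining_budget

# A low-risk action with a cheap stop fits; a costlier pair does not.
print(action_is_feasible(0.002, 0.001, 0.01))  # True
print(action_is_feasible(0.008, 0.004, 0.01))  # False
```

Reserving the stop's cost up front is what keeps the remedial action always available, which is how the planner can be non-conservative without ever exceeding the budget.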
- a system includes one or more processors, and one or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations including: obtaining a first risk budget constraining a first plan for an autonomous vehicle to satisfy a first objective; based at least on the first risk budget and the first objective, planning a first planned trajectory of the autonomous vehicle toward the first objective, at least by: (a) determining a first risk cost associated with a first initial planned action of the first planned trajectory, (b) based at least on the first risk cost, determining whether the first planned trajectory is feasible or infeasible within the first risk budget, and (c) responsive to determining that the first planned trajectory is feasible within the first risk budget, executing the first initial planned action of the first planned trajectory; decreasing the first risk budget by the first risk cost associated with the first initial planned action, to obtain a first remaining risk budget; obtaining first state data corresponding to a state of the autonomous vehicle after executing the first initial planned action of the first planned trajectory; and based at least on the first state data, the first remaining risk budget, and the first objective, planning a second trajectory of the autonomous vehicle toward the first objective.
- the operations may further include: incrementing the first remaining risk budget to cover an interval risk bound.
- Planning the second trajectory of the autonomous vehicle toward the first objective may include: (a) determining a second risk cost associated with a second initial planned action of the second planned trajectory, (b) based at least on the second risk cost, determining whether the second planned trajectory is feasible or infeasible within the first remaining risk budget, and (c) responsive to determining that the second planned trajectory is infeasible within the first remaining risk budget, executing a remedial action and refraining from executing the second initial planned action.
- the remedial action may include an emergency stop.
- the emergency stop may be designed to satisfy a passive safety constraint associated with the autonomous vehicle, by which the autonomous vehicle is stationary in the event of a collision.
- Determining whether the first planned trajectory is feasible or infeasible within the first risk budget may include determining whether or not, if the first initial planned action is executed, an emergency stop can be executed within the first remaining risk budget.
- Planning the second trajectory of the autonomous vehicle toward the first objective may include: (a) determining a second risk cost associated with a second initial planned action of the second planned trajectory, (b) based at least on the second risk cost, determining whether the second planned trajectory is feasible or infeasible within the first remaining risk budget, and (c) responsive to determining that the second planned trajectory is feasible within the first remaining risk budget, executing the second initial planned action of the second planned trajectory.
- the operations may further include: decreasing the first remaining risk budget by the second risk cost associated with the second initial planned action, to obtain a second remaining risk budget; obtaining second state data corresponding to a state of the autonomous vehicle after executing the second initial planned action of the second planned trajectory; and based at least on the second state data, the second remaining risk budget, and the first objective, planning a third trajectory of the autonomous vehicle toward the first objective.
- the operations may further include: iteratively reducing the first remaining risk budget by risk costs associated with executed actions, until the autonomous vehicle satisfies the first objective or is unable to proceed toward the first objective within the remaining risk budget.
- the operations may further include: determining a second risk budget constraining a second plan for the autonomous vehicle to satisfy a second objective; and upon satisfying the first objective, increasing the second risk budget by an unallocated amount of the first remaining risk budget.
- the operations may further include: computing a safety evaluation metric for the autonomous vehicle, based at least on the first objective and an amount of the first risk budget used toward satisfying the first objective.
- the operations may further include: based at least on the first remaining risk budget, generating a safety alert.
- the first objective may include a route plan, and executing the first initial planned action may include defining a waypoint among multiple waypoints of the route plan.
- the first objective may include a physical destination, and executing the first initial planned action may include applying a control to the autonomous vehicle.
- the first risk budget may constrain the first plan for the autonomous vehicle to a threshold probability of one or more kinds of safety violations over one or more of travel distance of the autonomous vehicle or travel time of the autonomous vehicle.
- a method includes: obtaining a first risk budget constraining a first plan for an autonomous vehicle to satisfy a first objective; based at least on the first risk budget and the first objective, planning a first planned trajectory of the autonomous vehicle toward the first objective, at least by: (a) determining a first risk cost associated with a first initial planned action of the first planned trajectory, (b) based at least on the first risk cost, determining whether the first planned trajectory is feasible or infeasible within the first risk budget, and (c) responsive to determining that the first planned trajectory is feasible within the first risk budget, executing the first initial planned action of the first planned trajectory; decreasing the first risk budget by the first risk cost associated with the first initial planned action, to obtain a first remaining risk budget; obtaining first state data corresponding to a state of the autonomous vehicle after executing the first initial planned action of the first planned trajectory; and based at least on the first state data, the first remaining risk budget, and the first objective, planning a second trajectory of the autonomous vehicle toward the first objective.
- the method may further include: incrementing the first remaining risk budget to cover an interval risk bound.
- Planning the second trajectory of the autonomous vehicle toward the first objective may include: (a) determining a second risk cost associated with a second initial planned action of the second planned trajectory, (b) based at least on the second risk cost, determining whether the second planned trajectory is feasible or infeasible within the first remaining risk budget, and (c) responsive to determining that the second planned trajectory is infeasible within the first remaining risk budget, executing a remedial action and refraining from executing the second initial planned action.
- the remedial action may include an emergency stop.
- the emergency stop may be designed to satisfy a passive safety constraint associated with the autonomous vehicle, by which the autonomous vehicle is stationary in the event of a collision.
- Determining whether the first planned trajectory is feasible or infeasible within the first risk budget may include determining whether or not, if the first initial planned action is executed, an emergency stop can be executed within the first remaining risk budget.
- Planning the second trajectory of the autonomous vehicle toward the first objective may include: (a) determining a second risk cost associated with a second initial planned action of the second planned trajectory, (b) based at least on the second risk cost, determining whether the second planned trajectory is feasible or infeasible within the first remaining risk budget, and (c) responsive to determining that the second planned trajectory is feasible within the first remaining risk budget, executing the second initial planned action of the second planned trajectory.
- the method may further include: decreasing the first remaining risk budget by the second risk cost associated with the second initial planned action, to obtain a second remaining risk budget; obtaining second state data corresponding to a state of the autonomous vehicle after executing the second initial planned action of the second planned trajectory; and based at least on the second state data, the second remaining risk budget, and the first objective, planning a third trajectory of the autonomous vehicle toward the first objective.
- the method may further include: iteratively reducing the first remaining risk budget by risk costs associated with executed actions, until the autonomous vehicle satisfies the first objective or is unable to proceed toward the first objective within the remaining risk budget.
- the method may further include: determining a second risk budget constraining a second plan for the autonomous vehicle to satisfy a second objective; and upon satisfying the first objective, increasing the second risk budget by an unallocated amount of the first remaining risk budget.
- the method may further include: computing a safety evaluation metric for the autonomous vehicle, based at least on the first objective and an amount of the first risk budget used toward satisfying the first objective.
- the method may further include: based at least on the first remaining risk budget, generating a safety alert.
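Two bookkeeping steps described above — rolling unspent budget into the next objective's budget, and computing a usage-based safety evaluation metric — might be sketched as follows; both helper functions and their inputs are invented for illustration and are not from the patent:

```python
def roll_over(unspent_remaining, next_budget):
    # Upon satisfying an objective, any unallocated remaining budget
    # augments the next objective's risk budget (never a negative amount).
    return next_budget + max(unspent_remaining, 0.0)

def safety_metric(initial_budget, remaining_budget):
    # One possible safety evaluation metric: the fraction of the risk
    # budget consumed while satisfying the objective. Lower values
    # indicate more conservative execution.
    return (initial_budget - remaining_budget) / initial_budget

print(safety_metric(0.01, 0.004))  # fraction of budget consumed
```

A safety alert could then be generated when, for instance, the metric approaches 1.0 or the remaining budget falls below a chosen threshold.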
- the first objective may include a route plan, and executing the first initial planned action may include defining a waypoint among multiple waypoints of the route plan.
- the first objective may include a physical destination, and executing the first initial planned action may include applying a control to the autonomous vehicle.
- the first risk budget may constrain the first plan for the autonomous vehicle to a threshold probability of one or more kinds of safety violations over one or more of travel distance of the autonomous vehicle or travel time of the autonomous vehicle.
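Taken together, the method amounts to a receding-horizon loop: plan, test feasibility against the remaining budget, execute the initial planned action, charge its risk cost, and replan from the new state. The toy sketch below illustrates that loop; the planner, step sizes, and risk costs are invented for illustration and do not reflect the patent's actual planner:

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    initial_action: float   # toy action: a distance step toward the goal
    initial_risk: float     # risk cost charged when the step executes

def plan_step(position, goal, budget):
    # Hypothetical planner: take a bolder (riskier) step while ample
    # budget remains, otherwise a cautious one.
    step = min(2.0 if budget > 0.05 else 1.0, goal - position)
    return Trajectory(initial_action=step, initial_risk=0.01 * step)

def drive(goal, risk_budget):
    """Plan, check feasibility against the remaining budget, execute the
    initial action, charge its risk cost, and replan from the new state."""
    position = 0.0
    while position < goal:
        traj = plan_step(position, goal, risk_budget)
        if traj.initial_risk > risk_budget:
            # Infeasible within the remaining budget: remedial action
            # (e.g., an emergency stop) instead of the planned action.
            return "remedial_stop", position, risk_budget
        position += traj.initial_action    # execute initial planned action
        risk_budget -= traj.initial_risk   # obtain remaining risk budget
    return "goal_reached", position, risk_budget
```

With a generous budget the vehicle reaches the goal with budget to spare (which could roll over to a next objective); with a tight budget it halts with a remedial stop rather than overspend.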
- FIG. 1 is a block diagram of an example of a system according to an embodiment
- FIG. 2 is a flow diagram of an example of operations for safe non-conservative planning according to an embodiment
- FIG. 3 illustrates a racetrack counterexample according to an embodiment
- FIG. 4 illustrates an example of an algorithm according to an embodiment
- FIG. 5 illustrates an example according to an embodiment
- FIG. 6 illustrates results of the example of FIG. 5 according to an embodiment
- FIG. 7 illustrates an example according to an embodiment
- FIG. 8 illustrates results of the example of FIG. 7 according to an embodiment
- FIG. 9 is a block diagram of an example of a computer system according to an embodiment.
- FIG. 1 is a block diagram of an example of a system 100 according to an embodiment.
- the system 100 may include more or fewer components than those illustrated in FIG. 1.
- the components illustrated in FIG. 1 may be local to or remote from each other.
- the components illustrated in FIG. 1 may be implemented in software and/or hardware. Each component may be distributed over multiple applications and/or machines. Multiple components may be combined into one application and/or machine. Operations described with respect to one component may instead be performed by another component.
- the system 100 includes an autonomous vehicle 102.
- the autonomous vehicle 102 may be a wholly autonomous vehicle configured to operate without any human guidance.
- the autonomous vehicle 102 may be a partially autonomous vehicle in which some aspects are automated and others remain under control of a human operator.
- autonomous vehicles include, but are not limited to: a self-driving vehicle designed to transport cargo and/or passengers (e.g., a self-driving tractor-trailer used to transport cargo over roads and/or within a cargo distribution facility, a passenger vehicle used to transport occupants over roads, which may also tow a payload coupled to the passenger vehicle such as a cargo trailer, boat trailer, or camper, etc.); an aircraft (e.g., a cargo or passenger aircraft, a drone, or another kind of aircraft); a watercraft; a spacecraft; and an automated home appliance (e.g., a robotic vacuum cleaner).
- the autonomous vehicle 102 includes one or more physical mechanism(s) 114 used to direct the autonomous vehicle 102's trajectory (including direction, acceleration, and/or speed), such as a steering mechanism, accelerator, brake, etc.
- a physical mechanism 114 may include a controller (not shown) that translates digital and/or analog instructions to physical motion (e.g., physically turning the wheels, increasing or decreasing acceleration, engaging a brake mechanism, etc.).
- the autonomous vehicle 102 includes one or more autonomous agent(s) 104 configured to control the operation of one or more of the physical mechanism(s) 114 .
- the autonomous agent 104 is configured to receive information about the physical environment from one or more sensors 112 .
- the sensor(s) 112 may include a radar sensor, lidar sensor, camera (i.e., configured to capture still images and/or video), microphone, thermometer, altitude sensor, global positioning system (GPS), and/or another kind of sensor configured to gather information about the physical environment.
- Information gathered by a sensor 112 may relate to the geospatial location of the autonomous vehicle 102, weather conditions, locations of static and/or mobile obstacles (e.g., other vehicles, pedestrians, terrain, overpasses, etc.), road markings, altitude, and/or other information relevant to the autonomous vehicle 102's location and trajectory in the physical environment.
- the autonomous agent 104 may store information from the sensor(s) 112 (in raw form and/or subsequent to one or more data transformations) as state data 120 .
- the autonomous agent 104 is configured to plan routes and/or trajectories based in part on risk budget data 122 .
- a risk budget is a constraint on how much risk the autonomous vehicle 102 is allowed to take.
- the risk budget may define a threshold risk of collision that the autonomous vehicle 102 is not permitted to meet or exceed, over a driving time and/or driving distance of the autonomous vehicle 102 .
- the risk budget may define a threshold risk of safety violations (which may include collisions and/or other kinds of safety violations), safety failures, and/or one or more other risk factors.
- Risk budget data 122 may include one or more initial risk budgets for one or more routes and/or route segments. A total risk budget may be allocated over multiple routes and/or segments.
- the risk budget data 122 may include a remaining risk budget, i.e., an amount of risk budget that has not yet been consumed by actions executed while planning over the route or segment.
- the risk budget data 122 thus provides one or more predefined safety constraints that the autonomous agent 104 must adhere to.
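One way to realize the per-segment allocation described above is to split a route-level budget in proportion to segment length. This is a hypothetical allocation rule (`allocate_budget` is not from the patent); the disclosure says only that a total budget may be allocated over multiple routes and/or segments:

```python
def allocate_budget(total_budget, segment_lengths):
    """Split a route-level risk budget across segments in proportion to
    segment length (one simple rule; the patent leaves the rule open)."""
    total = sum(segment_lengths)
    return [total_budget * length / total for length in segment_lengths]

# e.g., a 0.01 total budget over segments of 1, 1, and 2 km:
print(allocate_budget(0.01, [1.0, 1.0, 2.0]))  # [0.0025, 0.0025, 0.005]
```

Other rules (e.g., weighting by expected traffic density or historical risk per segment) would fit the same interface; the invariant is that per-segment budgets sum to the total.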
- an autonomous agent 104 includes a route planner 108 .
- the route planner 108 is configured to plan a route for the autonomous vehicle 102 .
- Route data 124 may include, for example, a digital map, graph, and/or other data structure(s) describing route options, such as roads and/or other infrastructure, that the autonomous vehicle 102 may follow from a planned physical starting point to a planned physical destination. Examples of operations for planning a route are described in further detail below.
- the autonomous agent 104 includes a trajectory planner 110 .
- the trajectory planner 110 is configured to plan a trajectory for the autonomous vehicle 102 , i.e., a trajectory that the autonomous vehicle 102 follows along a planned route. Examples of operations for planning a trajectory are described in further detail below.
- the autonomous agent 104 is configured to control operation of the physical mechanism(s) 114 .
- the autonomous agent 104 may send a signal to a steering mechanism to adjust the autonomous vehicle 102 's direction, to an accelerator to increase or decrease acceleration, and/or to a braking mechanism to apply the brakes.
- the autonomous agent 104 may be configured to control operation of many different kinds of physical mechanisms 114 in many different ways.
- the autonomous agent 104 may be configured to store data (e.g., state data 120 , risk budget data 122 , and/or route data 124 ) in a data repository 118 .
- a data repository 118 is any type of storage unit and/or device (e.g., a file system, database, collection of tables, or any other storage mechanism) for storing data.
- a data repository 118 may include multiple different storage units and/or devices. The multiple different storage units and/or devices may or may not be of the same type or located at the same physical site. Further, a data repository 118 may be implemented or may execute on the same computing system as one or more other components of the system 100 .
- a data repository 118 may be implemented or executed on a computing system separate from one or more other components of the system 100 .
- a data repository 118 may be logically integrated with one or more other components of the system 100 .
- a data repository 118 may be communicatively coupled to one or more other components of the system 100 via a direct connection or via a network.
- a data repository 118 is illustrated as storing various kinds of information. Some or all of this information may be implemented and/or distributed across any of the components of the system 100 . However, this information is illustrated within the data repository 118 for purposes of clarity and explanation.
- a user interface 116 refers to hardware and/or software configured to facilitate communications between a user and the autonomous agent 104 , for example by presenting (e.g., as an image, audio, and/or video) a safety alert to the user.
- the user interface 116 may be located in the autonomous vehicle 102 or remote from the autonomous vehicle 102 (e.g., in a remote system configured to monitor operation of one or more autonomous vehicles, individually and/or collectively).
- a user interface 116 renders user interface elements and receives input via user interface elements.
- a user interface 116 may be a graphical user interface (GUI), a command line interface (CLI), a haptic interface, a voice command interface, and/or any other kind of interface or combination thereof. Examples of user interface elements include checkboxes, radio buttons, dropdown lists, list boxes, buttons, toggles, text fields, date and time selectors, command lines, sliders, pages, and forms.
- different components of a user interface 116 are specified in different languages.
- the behavior of user interface elements may be specified in a dynamic programming language, such as JavaScript.
- the content of user interface elements may be specified in a markup language, such as hypertext markup language (HTML), Extensible Markup Language (XML), or XML User Interface Language (XUL).
- the layout of user interface elements may be specified in a style sheet language, such as Cascading Style Sheets (CSS).
- aspects of a user interface 116 may be specified in one or more other languages, such as Java, Python, Perl, C, C++, and/or any other language or combination thereof.
- one or more components of the system 100 are implemented on one or more digital devices.
- the term “digital device” generally refers to any hardware device that includes a processor.
- a digital device may refer to a physical device executing an application or a virtual machine. Examples of digital devices include a computer, a tablet, a laptop, a desktop, a netbook, a server, a web server, a network policy server, a proxy server, a generic machine, a function-specific hardware device, a hardware router, a hardware switch, a hardware firewall, a hardware network address translator (NAT), a hardware load balancer, a mainframe, a television, a content receiver, a set-top box, a printer, a mobile handset, a smartphone, a personal digital assistant (“PDA”), a wireless receiver and/or transmitter, a base station, a communication management device, a router, a switch, a controller, an access point, and/or a client device.
- FIG. 2 is a flow diagram of an example of operations for safe non-conservative planning according to an embodiment.
- One or more operations illustrated in FIG. 2 may be modified, rearranged, or omitted altogether. Accordingly, the particular sequence of operations illustrated in FIG. 2 should not be construed as limiting the scope of one or more embodiments.
- operations described herein may be used for route planning and/or trajectory planning.
- the “objective” may be a route plan; an initial risk budget may apply to the entire route plan; a planned trajectory may be a planned trajectory to a waypoint in the route; and executing an action may include defining the waypoint as part of the route.
- an “objective” may be autonomous travel to a waypoint; an initial risk budget may apply to a route segment that terminates at the waypoint; a planned trajectory may be a planned trajectory up to a predefined time horizon within the segment; and executing an action may include applying a control corresponding to the initial action in the planned trajectory.
- a system determines an initial risk budget (Operation 202 ).
- the initial risk budget may be predetermined, e.g., according to a predefined safety standard for the autonomous vehicle. Alternatively, the initial risk budget may be based on one or more dynamic factors. For example, the system may allocate more or less risk budget depending on how busy the environment is, the urgency of satisfying the objective, etc.
- the initial risk budget applies to an objective (e.g., an entire route plan or autonomous travel to a waypoint).
- the system determines a belief state (Operation 204 ).
- the belief state is based, at least in part, on state data from sensors of the autonomous vehicle.
- the belief state may further be based on computed assumptions about the behavior of other agents (e.g., other vehicles and/or people whose behaviors are not entirely predictable) in the environment.
- the system computes a planned trajectory (Operation 206 ).
- Computing the planned trajectory includes determining an approximately optimal (subject to constraints such as time and processing power) trajectory of the autonomous vehicle toward the objective.
- the system determines a risk cost associated with the initial action of the planned trajectory (Operation 208 ).
- the risk cost may be determined in many different ways. In general, in an embodiment, the risk cost is based on a computed probability of collision (e.g., with a vehicle, obstacle, pedestrian, etc.) and/or other kind of safety violation or failure.
- the system may determine the risk cost, for example, based on a probabilistic prediction of the autonomous vehicle's motion and the motion of other agents in the environment. Some examples may use constraints other than, or in addition to, collision risk to determine risk costs.
- the system determines whether the planned trajectory is feasible within the risk budget (Decision 210 ). That is, the system determines whether executing the planned trajectory can be performed without exceeding the initial risk budget. Determining whether the planned trajectory is feasible may include determining whether, after executing the initial action of the planned trajectory, the remaining risk budget would still allow for executing a remedial action if necessary.
- the system executes the initial action of the planned trajectory (Operation 212 ) and decreases the risk budget accordingly (Operation 214 ). Specifically, the system decreases the risk budget by the amount of risk “consumed” by executing the action. The remaining risk budget, after this decrease, is the amount of risk budget still left toward the objective. In addition, the system may add an increment of additional risk budget to the remaining risk budget in each iteration (Operation 215 ). The increment of risk budget added may be relatively small, as compared to the total risk budget. Incrementing the remaining risk budget in this manner replenishes the risk budget slowly over time and may serve to bound interval risk in addition to episodic risk.
- the system may generate an alert (Operation 217 ).
- the system may present the alert, for example, in a user interface as described above.
- the alert may inform a human operator of the autonomous vehicle that the autonomous vehicle requires human intervention.
- the alert may indicate that the autonomous vehicle needs to be taken out of operation before it exceeds its total allowable risk budget.
- the alert may indicate that the autonomous vehicle is approaching the end of its allowable risk budget.
- the system may determine that the autonomous vehicle is getting “stuck” with unacceptable frequency, another indication of prediction failure. In general, the system may determine that prediction failures exceed an acceptable threshold and, based on that determination, generate an alert.
- the system determines whether the objective has been satisfied (Decision 218 ). If the objective is not yet satisfied (Decision 218 ), then the system determines a new belief state (Operation 204 ) and performs operations as described above, but now using the remaining risk budget. This process repeats iteratively until the objective is satisfied or the system is required to take a remedial action.
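The iterative process above (Operations 204-218) can be sketched as follows; every planner component passed in is a hypothetical stand-in for the subsystems described in the text, not an API from the patent:

```python
# Illustrative sketch of the replanning loop of FIG. 2 (Operations 204-218).
def plan_toward_objective(initial_budget, increment, sense, plan_trajectory,
                          risk_cost, remedial_risk_bound, execute, remedial,
                          objective_satisfied, max_iters=1000):
    budget = initial_budget
    for _ in range(max_iters):
        belief = sense()                        # Operation 204: belief state
        traj = plan_trajectory(belief, budget)  # Operation 206: planned trajectory
        action = traj[0]
        cost = risk_cost(belief, action)        # Operation 208
        # Decision 210: feasible only if, after this action, enough budget
        # remains to execute a remedial action (e.g., an emergency stop).
        if cost + remedial_risk_bound(belief, action) <= budget:
            execute(action)                     # Operation 212
            budget -= cost                      # Operation 214
        else:
            remedial(belief)                    # Operation 211
        budget += increment                     # Operation 215: replenishment
        if objective_satisfied():               # Decision 218
            return budget                       # remaining budget on success
    return budget

# Toy usage: move one unit per step until position 3 is reached.
state = {"pos": 0}
def sense(): return dict(state)
def plan_trajectory(b, budget): return [1]
def risk_cost(b, a): return 0.01
def remedial_risk_bound(b, a): return 0.0
def execute(a): state["pos"] += a
def remedial(b): pass
def objective_satisfied(): return state["pos"] >= 3

remaining = plan_toward_objective(0.1, 0.0, sense, plan_trajectory, risk_cost,
                                  remedial_risk_bound, execute, remedial,
                                  objective_satisfied)
```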
- the system executes a remedial action (Operation 211 ).
- the system is designed to enforce a passive safety constraint as defined herein. According to passive safety principles, if the autonomous vehicle is already stopped, then executing the remedial action may simply involve refraining from taking any further action toward the objective. If the autonomous vehicle is in motion, then executing the remedial action may include bringing the vehicle to a stop (e.g., an emergency stop) while also refraining from taking any further action toward the objective.
- the system may proceed to determine whether the objective has nonetheless been satisfied (Decision 218 ) and proceed from there as described above. That is, if the objective has not yet been satisfied, the system may continue to operate toward the objective. For example, if the autonomous vehicle has executed an emergency stop while traveling toward a waypoint, the autonomous vehicle may resume travel, still subject to the remaining risk budget.
- the system may determine whether planning for another objective is needed (Decision 219 ). If planning for another objective is needed, then the system may repeat the process described above for the next objective (Operation 222 ). For example, if the objective includes autonomous travel to a waypoint in a route, the next objective may include autonomous travel to the subsequent waypoint in the same route. In some cases, if a particular amount of risk budget was allocated to the initial objective and some of that risk budget remains, some or all of the remaining risk budget may be added to the risk budget allocated for the subsequent objective. The system may thus have more risk budget available for the subsequent objective than if no risk budget had remained after satisfying the initial objective. “Rolling over” the risk budget from one objective to the next may allow for greater efficiency based on higher available risk budget. For example, when planning toward the subsequent objective, the autonomous vehicle may be able to travel faster, take more left turns, etc.
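A minimal sketch of this rollover, assuming a hypothetical rollover_fraction parameter (the text allows "some or all" of the leftover budget to carry over):

```python
# Hypothetical rollover of unused risk budget between consecutive objectives
# (e.g., successive waypoints in a route). rollover_fraction is illustrative:
# 1.0 rolls over all remaining budget, smaller values roll over only some.
def next_objective_budget(base_allocation, leftover, rollover_fraction=1.0):
    # More available budget allows less conservative planning toward the
    # next objective (e.g., traveling faster, taking more left turns).
    return base_allocation + rollover_fraction * leftover

full = next_objective_budget(0.05, 0.02)        # roll over all leftover
half = next_objective_budget(0.05, 0.02, 0.5)   # roll over half
```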
- a post-planning operation may include computing a safety evaluation metric for the vehicle, based on the objective and the amount of risk budget remaining when the objective is satisfied.
- the safety evaluation metric may be used to evaluate safety performance of the autonomous vehicle. If the safety evaluation metric is computed for multiple autonomous vehicles, the metrics may be compared with each other to provide objective information about the relative safety of the two vehicles.
- a safety evaluation metric may indicate that there is room in the risk budget to make the autonomous vehicle perform more efficiently, i.e., by planning even less conservatively (e.g., driving faster, taking more left turns, etc.) while still remaining within the allowable risk budget.
- planning safe, efficient robot motion (e.g., autonomous or semi-autonomous driving) in the presence of other agents (e.g., other vehicles, pedestrians, etc.) requires planning optimal feedback policies subject to constraints bounding the probability of safety violation, which is computationally intractable.
- the optimal solution must be approximated, introducing a tradeoff between safety and performance: systems that satisfy a very high safety threshold may behave too conservatively, while systems that are non-conservative enough to meet practical efficiency requirements may not provide the desired safety guarantees, requiring close human supervision.
- one or more embodiments described herein compute safe, approximately optimal plans that guarantee hard constraints on safety, while achieving efficient, non-conservative performance that satisfies realistic computational constraints.
- One or more embodiments include a criterion for probabilistically safe planning, referred to herein as episodic risk bounds (ERBs).
- ERBs provide a guarantee that the probability of safety violation for a planning algorithm is less than a given threshold, over the course of an entire episodic task within a dynamic, partially observable environment. This definition may satisfy, for example, the requirements of system designers who seek to provide statistical guarantees on the safety of robots performing sequences of tasks in dynamic environments (e.g., following a series of waypoints).
- open-loop approximations to the optimal feedback policy can satisfy an ERB, in practice their performance is too conservative for many applications.
- a feedback mechanism may be introduced using a receding horizon control (RHC) strategy.
- One or more embodiments include an RHC algorithm that satisfies an ERB and can be shown to achieve non-conservative performance in several autonomous driving environments.
- the example algorithm described below maintains a risk budget, initialized with the ERB given as input. Given the risk budget, the algorithm solves a constrained optimization problem that minimizes an estimate of the cost of an open-loop plan up to a finite horizon, subject to constraints on the risk incurred by the plan, and on an upper bound on the risk of a contingency plan that reaches a safe set. At each step, if the optimized plan is feasible, the first action of the plan is executed, and the risk incurred by this action is subtracted from the risk budget, which helps satisfy the ERB. The algorithm then replans with the new risk budget.
- the contingency plan is executed immediately.
- the contingency plan ensures the recursive feasibility of constraint satisfaction—the planner will never exceed the risk budget, because the contingency plan guarantees the safe set will always be reachable before this occurs.
- the contingency plan also reduces conservatism by providing a low-latency closed-loop policy, allowing the longer-horizon, open-loop planning process to plan less suboptimal behaviors.
- the example algorithm described below is compatible with planners that accept probabilistic models of a robot and a dynamic environment, and that return a finite-horizon open-loop plan that satisfies a joint chance constraint.
- probabilistic dynamics and agent models may be assumed to be provided and validated independently of the example system described here.
- a safe set is defined in terms of passive safety: the robot is assumed to be safe if it is not in collision with any obstacles, or if it is stopped.
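This passive-safety definition reduces to a one-line predicate; a minimal sketch:

```python
# Passive safety as defined above: the robot is considered safe if it is
# not in collision with any obstacle, or if it is stopped.
def is_passively_safe(in_collision: bool, speed: float) -> bool:
    return (not in_collision) or speed == 0.0
```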
- an example framework is described herein that includes a dynamics model, agent prediction algorithm, and planning algorithm.
- Examples described below apply the framework in several environments to support the theoretical arguments and demonstrate the performance and practicality of this approach.
- a simplified interactive autonomous driving example shows empirically that this approach satisfies the desired risk bound without falling far below it due to over-conservativeness.
- This approach is compared with several alternative approaches from the literature, which are shown to be either over-conservative or fail to satisfy the safety bound.
- Another example described below introduces more complexity to the autonomous driving scenario and increases the safety threshold to a realistically high level. As described below, the example algorithm still exhibits good performance, while several alternative algorithms become overly conservative.
- the examples described below further include a real-world demonstration of the example algorithm running on an autonomous class 8 truck.
- the system is partially observable, where y_i ∈ ℝ^{n_y} is the observation at step i, and v_i ∈ ℝ^{n_v} is the measurement noise, sampled from a known probability distribution P_{v_i}.
- the state transition function f and the observation function h are both deterministic functions.
- the step-wise cost function l(x_i, u_i) for each step i and the final cost function l_f(x_T) may be defined and given based on the objective of the planning problem.
- the belief state is a sufficient statistic for the initial belief b_0 and the history of controls u_{0:i-1} and observations y_{0:i}.
- a belief state transition function b_{i+1}(x_{i+1}) ← f_b(b_i, π_i(b_i), y_{i+1}) updates the belief after control π_i(b_i) and observation y_{i+1} by applying Bayesian filtering, such that:
- b_{i+1}(x_{i+1}) ∝ p(y_{i+1} | x_{i+1}) Σ_{x_i ∈ X} p(x_{i+1} | x_i, π_i(b_i)) b_i(x_i)
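The Bayesian filtering update above can be sketched as a discrete Bayes filter over a finite state set; the two-state transition and observation models below are illustrative, not from the patent:

```python
# Minimal discrete Bayes filter: prediction through the dynamics, then
# correction by the observation likelihood, then normalization.
def belief_update(belief, control, observation, p_trans, p_obs):
    """belief: dict mapping state -> probability.
    p_trans(x_next, x, u): transition probability p(x' | x, u).
    p_obs(y, x): observation likelihood p(y | x)."""
    states = list(belief)
    # Prediction step: propagate the belief through the dynamics.
    predicted = {
        xn: sum(p_trans(xn, x, control) * belief[x] for x in states)
        for xn in states
    }
    # Correction step: weight by the observation likelihood and normalize.
    unnorm = {x: p_obs(observation, x) * predicted[x] for x in states}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

# Two-state toy example: the environment cell is "free" or "blocked".
p_trans = lambda xn, x, u: 0.9 if xn == x else 0.1   # mostly persistent state
p_obs = lambda y, x: 0.8 if y == x else 0.2          # 80%-accurate sensor
b0 = {"free": 0.5, "blocked": 0.5}
b1 = belief_update(b0, None, "free", p_trans, p_obs)
```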
- the states may be constrained to be outside some collision zone X_coll ⊂ X.
- c(b_i, π_i(b_i)) = Σ_{x_i ∈ X} l(x_i, π_i(b_i)) b_i(x_i)  (6); c_f(b_T) = Σ_{x_T ∈ X} l_f(x_T) b_T(x_T)  (7)
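Equations (6)-(7) are simple expectations of the state cost over the belief; a minimal sketch with a toy two-state belief and a hypothetical step cost:

```python
# Expected step cost under a belief state: the state cost l(x, u) weighted
# by the belief probability b(x). The belief and cost below are illustrative.
def expected_cost(belief, control, step_cost):
    return sum(step_cost(x, control) * p for x, p in belief.items())

belief = {0: 0.25, 1: 0.75}              # toy belief over two states
step_cost = lambda x, u: float(x) + u    # toy cost l(x, u)
c = expected_cost(belief, 1.0, step_cost)
```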
- the joint chance constraint (5c) enables non-conservative behavior by allowing non-uniform allocation of risk across steps.
- the optimization in (5) is NP-hard.
- efficient approximations are included for real-time applications.
- One approach is to use an RHC strategy, recursively solving (5) over a much shorter horizon N than the original horizon T, executing the first control in the optimized control sequence, and updating the belief.
- although the RHC policy is suboptimal, one or more embodiments preserve the original chance constraint (4) over the interval [0, T].
- for standard joint chance-constrained RHC, this bound does not hold in general.
- PCL-RHC Partially Closed-Loop Receding Horizon Control
- a space of parametric bounds on the total risk incurred by chance-constrained RHC may be defined over an interval or episode.
- a proof may then be sketched by counterexample that the risk incurred over the interval [0, T] by the RHC method described above is not bounded by Δ₀ when the joint chance-constrained optimization allocates Δ₀/T risk per step.
- the following discussion introduces an example of a safe and non-conservative planning algorithm, Risk-Budget RHC (RB-RHC), and proves a bound on the risk incurred as a function of the joint chance constraint used at each replanning step.
- episodic risk bounds constrain the risk incurred by a planner over arbitrary time periods or tasks. For example, system designers may wish to bound the expected number of safety violations per year (risk over an interval), or to bound the expected chance of safety violation while executing a particular task (risk over an episode).
- One or more embodiments use a definition that encompasses both interval and episodic risk bounds.
- an episodic risk bound is an upper bound on the risk incurred during the interval [0, T], such that the risk is at most Δ·T + Δ₀.
- the parameter Δ is the amount of risk allocated to the risk budget at each timestep, capturing the concept that risk scales linearly with the length of the time interval (i.e., at a constant rate).
- the parameter Δ₀ captures a fixed amount of risk allocated over the course of an episode.
- the concept behind the counterexample is that at every iteration, joint chance-constrained RHC can allocate the maximum risk allowed by the chance constraint to the first step. This can lead the algorithm to incur risk that far exceeds the desired episodic risk bound.
- a racetrack has two consecutive sharp curves; the curved arrow 302 represents the path of a vehicle (not shown) around two curves 304 , 306 of the racetrack.
- for the first sharp curve, there is a 10% chance of crashing if driven at 100 mph, and a 0% chance of crashing if driven at 70 mph; for the second curve, there is a 10% chance of crashing if driven at 90 mph, and a 0% chance of crashing if driven at 70 mph.
- no other speeds can be chosen.
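Assuming an episodic bound of 10%, the counterexample can be checked numerically: replanning against the full bound at each curve (the joint chance-constrained RHC failure mode) yields a total crash probability of 1 − 0.9·0.9 = 19%, while budgeting the risk forces the safe speed at the second curve:

```python
# Numeric check of the racetrack counterexample; the 10% episodic bound is
# an assumption for illustration.
bound = 0.10
p_fast = {1: 0.10, 2: 0.10}   # crash probability at each curve when driven fast

# JCC-RHC-style replanning: the chance constraint is re-checked against the
# full bound at every replan, so the fast speed looks admissible at both
# curves, and the incurred risks compound.
p_crash_jcc = 1 - (1 - p_fast[1]) * (1 - p_fast[2])

# Budget-style planning: curve 1 consumes the entire budget, so the safe
# (0%-risk) 70 mph speed must be chosen at curve 2.
leftover = bound - p_fast[1]
p_crash_budget = (p_fast[1] if leftover < p_fast[2]
                  else 1 - (1 - p_fast[1]) * (1 - p_fast[2]))
```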
- using RB-RHC allows for replanning with regard to the remaining risk budget instead of the total risk budget. This prevents the algorithm from exceeding the total risk budget and violating the joint chance constraint (4).
- emergency stop may be defined as repeating the maximum deceleration control u_stop for at most t_stop steps, which is assumed to enter the passively safe (stopped) belief state set B_stop ⊂ B from any belief state.
- the passively safe belief states are belief states with deterministic zero velocity. Since the passively safe states are defined to not belong to X_coll, the collision chance in the passively safe belief states is zero.
- RB-RHC may be formulated based on the discussion above. Similar to joint chance-constrained RHC (JCC-RHC), RB-RHC has an inner open-loop planning problem and an outer re-planning loop.
- the open-loop planning problem that is solved by RB-RHC at step k is,
- An RB-RHC algorithm is detailed in Algorithm 1, illustrated in FIG. 4 . From lines 3-4, it recursively solves the open-loop planning problem and executes the first control. If (12) does not have a solution, the ego vehicle must already be in a dangerous belief state. Therefore, from lines 7-10, this approach chooses to take the most conservative control u stop (emergency stop) or NO-OP (remain stopped) depending on whether the ego vehicle is moving or not. After the action is executed, the rolling risk budget is increased by the average risk taken per step ⁇ in line 12.
- RB-RHC includes the following features as shown in line 5 of Algorithm 1: whenever a new control u*_k ∉ {u_stop, NO-OP} will be executed, the rolling risk budget δ will be decreased.
- the risk budget is considered a limited resource.
- Line 5 keeps track of the rolling risk budget by subtracting out the risk usage caused by u*_k: g_b(f_b^OL(b, u*_k)) + g_b^stop(f_b^OL(b, u*_k)).
- the first term g_b(f_b^OL(b, u*_k)) is the risk of collision at step k+1 after executing u*_k.
- the second term g_b^stop(f_b^OL(b, u*_k)) bounds the risk of collision during an emergency stop if (12) does not have a solution at step k+1 after u*_k is executed.
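The bookkeeping in line 5 can be sketched as follows; the callables standing in for g_b, g_b^stop, and the predicted belief are illustrative toys, not the patent's implementations:

```python
# Sketch of the rolling-risk-budget decrement in line 5 of Algorithm 1:
# charge both the one-step collision risk at the predicted belief and the
# bound on collision risk during a fallback emergency stop.
def update_rolling_budget(delta, belief_next, g_b, g_b_stop):
    return delta - (g_b(belief_next) + g_b_stop(belief_next))

delta = 0.05
g_b = lambda b: 0.01        # toy one-step collision risk
g_b_stop = lambda b: 0.005  # toy bound on risk during emergency stop
delta = update_rolling_budget(delta, {"x": 0}, g_b, g_b_stop)
```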
- the RB-RHC also plans the risk budget usage ahead. There is a trade-off between taking risk now or taking risk later.
- An aggressive control u*_k might incur less cost in (12a), but it might also use up most of the rolling risk budget.
- the risk usage may be heuristically modeled at step i as g_b(b_i) + g_b^stop(b_i) in (12c).
- at step k, the probability of collision in the remaining steps is bounded by the sum of the probabilities of collision caused by three conditions.
- RB-RHC will execute a sequence of u_stop followed by a sequence of NO-OP until (12) is solvable again at step l with the rolling risk budget δ_l.
- the probability of collision for this condition therefore includes,
- Lemma 2 proves the base step and Lemma 1 proves the induction step. Therefore, by mathematical induction, the theorem is proven.
- Example 2 demonstrates the safety and non-conservativeness of the example RB-RHC algorithm in two uncertain, dynamic simulation examples.
- all dynamic obstacles are vehicles.
- in Example 2, RB-RHC is shown to be less conservative than JCC-RHC in a complex environment.
- RB-RHC is also applied to an autonomous yard truck with a 2.5 m by 15 m trailer, to show the applicability of RB-RHC in safety critical environments.
- a kinematically feasible reference path is assumed to be given. The reference path is collision-free with respect to all static obstacles.
- RR-GP provides a mixture of Gaussian predictions that satisfy dynamic and environmental constraints.
- the collision chance g b (b) is the sum of collision chance between the ego vehicle and each dynamic obstacle.
- the belief state b i can be fully determined by the spatial-temporal state of the ego vehicle ⁇ i, s i ⁇ without the control history u k:i .
- the belief state of dynamic obstacles is fully determined by the RR-GP model and is irrelevant to the control history.
- the state of the ego vehicle s on the reference path can further be determined by the distance ⁇ and velocity v of the ego vehicle on the reference path.
- a cost is also assigned to each edge using c_f(b_N) or c(b_i, u_i).
- (12) can be solved by relaxing the edges in the graph in the inverse temporal order.
- (12c) is a path constraint for the graph search problem.
- One may convert the constrained graph search to an unconstrained one by introducing the Lagrange multiplier λ.
- instead of c(b_i, u_i) as intermediate edge costs, one may use c(b_i, u_i) + λ[g_b(b_i) + g_b^stop(b_i)] for the augmented unconstrained graph search.
- starting with a large λ interval [λ_L, λ_U], one may iteratively solve the augmented unconstrained graph search while narrowing the interval to find a suitable λ.
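A hedged sketch of this λ search follows; solve(lam) is a toy stand-in for the augmented unconstrained graph search (the real solver relaxes graph edges in inverse temporal order), and the two candidate plans are illustrative:

```python
# Bisection over the Lagrange multiplier λ: larger λ penalizes risk more,
# so the risk of the unconstrained solution decreases monotonically in λ,
# which is what makes interval narrowing applicable.
def bisect_lambda(solve, risk_limit, lam_lo=0.0, lam_hi=100.0, iters=50):
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        cost, risk = solve(lam)
        if risk > risk_limit:
            lam_lo = lam        # too risky: penalize risk more
        else:
            lam_hi = lam        # feasible: try a smaller penalty
    return solve(lam_hi)

# Toy problem: two candidate plans as (cost, risk) pairs; the augmented
# unconstrained search picks the plan minimizing cost + λ * risk.
plans = [(1.0, 0.20), (2.0, 0.05)]
solve = lambda lam: min(plans, key=lambda p: p[0] + lam * p[1])
cost, risk = bisect_lambda(solve, risk_limit=0.10)
```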
- the ego vehicle 502 meets a left-turning dynamic obstacle 504 at a T-junction. Both vehicles have a bus-like rectangular shape: 12.6 meters in length, 2.4 meters in width.
- the ego vehicle kinematic model is assumed to be the deterministic bicycle model. However, the ego vehicle 502 is still uncertain about the future states of the dynamic obstacle.
- the states of both vehicles are four dimensional: [x, y, θ, v], which denotes the x, y coordinates, the heading angle, and the velocity.
- the control is two dimensional: [a, δ], which denotes the acceleration and the steering angle.
- the velocity, acceleration, and steering angle of the ego vehicle are constrained by
- the planning time step is 1 sec and re-planning frequency is 1 Hz.
- the cost function l(x_i, u_i) is 1.0 + 0.5·a_i².
- This example drives the dynamic obstacle 504 ten times with the same left turning motion pattern and uses the data to formulate the RR-GP prediction model for the dynamic obstacle 504 .
- a total of 1000 different initial conditions were tested on JCC finite horizon (JCC-FH), JCC-RHC, PCL-RHC, and RB-RHC.
- Each initial condition includes the initial state of the ego vehicle 502 and the future states of the dynamic obstacle that is pre-sampled from the same distribution as the RR-GP prediction model.
- Table 1 below presents the averaged results over the 1000 initial conditions for each algorithm.
- the failure rate of JCC-RHC is considerably less than Δ₀ because JCC-RHC is often practically willing to take less risk due to its conservative nature. Note that, in theory, JCC-RHC can violate the overall chance constraint, as shown in the Racing Counterexample discussed above, when multiple risky events are introduced; this example includes only one risky event. Additionally, FIG. 6 shows the remaining risk budget throughout time for all trials. Both JCC-RHC and PCL-RHC overdraw the overall risk budget Δ₀, but RB-RHC does not.
- FIG. 7 shows that RB-RHC is practically less conservative than JCC-RHC in a simulated busy yard environment.
- the ego vehicle 702 wants to turn left while avoiding a forward-driving dynamic obstacle 704 in front and a dynamic obstacle 706 approaching from the other side with uncertain intention. It is known in advance that the dynamic obstacle 706 has a 70% probability of turning left and a 30% probability of going straight, but its intention is not directly observable. All vehicles are tractors 5.0 meters in length and 2.5 meters in width, with trailers 12.6 meters in length and 2.4 meters in width.
- the ego vehicle kinematic model is assumed to be the deterministic tractor-trailer model.
- the states of all three vehicles 702 , 704 , 706 are five dimensional: [x, y, θ, v, ψ], which denotes the x, y coordinates, the tractor heading angle, the velocity, and the trailer heading angle.
- the control is two dimensional: [a, δ], which denotes the tractor acceleration and the tractor steering angle.
- Table 2 below presents the averaged results over the 1000 initial conditions for each algorithm.
- FIG. 8 shows the average distances between the ego vehicle and the dynamic obstacles at each time step. While the ego vehicle maintains a safe distance to dynamic obstacles under all algorithms, JCC-RHC conservatively maintains a greater distance to the dynamic obstacles compared to RB-RHC and PCL-RHC.
- This example (not illustrated) demonstrates the applicability of RB-RHC in real-world safety critical environments, by testing on an autonomous class 8 truck.
- the autonomous truck is operated with a maximum speed of 1.15 m/s and with a safety driver.
- Virtual static obstacles resemble the parked trailers in the yard and virtual dynamic obstacles resemble the other human driven yard trucks.
- the RB-RHC planner in this example replans at 20 Hz in C++ and executes the plan in a MATLAB controller. Due to the mass of the truck, the controller is unable to track the planned speed profile well. Therefore, unlike the simulation examples, this example models the controller tracking error as ego vehicle kinematic model uncertainty within the RB-RHC planner.
- the truck maintains a safe but not excessive distance to the virtual dynamic obstacles.
- a system includes one or more devices, including one or more hardware processors, that are configured to perform any of the operations described herein and/or recited in any of the claims.
- one or more non-transitory computer-readable storage media store instructions that, when executed by one or more hardware processors, cause performance of any of the operations described herein and/or recited in any of the claims.
- techniques described herein are implemented by one or more special-purpose computing devices (i.e., computing devices specially configured to perform certain functionality).
- the special-purpose computing device(s) may be hard-wired to perform the techniques and/or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and/or network processing units (NPUs) that are persistently programmed to perform the techniques.
- a computing device may include one or more general-purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, and/or other storage.
- a special-purpose computing device may combine custom hard-wired logic, ASICs, FPGAs, or NPUs with custom programming to accomplish the techniques.
- a special-purpose computing device may include a desktop computer system, portable computer system, handheld device, networking device, and/or any other device(s) incorporating hard-wired and/or program logic to implement the techniques.
- FIG. 9 is a block diagram of an example of a computer system 900 according to an embodiment.
- Computer system 900 includes a bus 902 or other communication mechanism for communicating information, and a hardware processor 904 coupled with the bus 902 for processing information.
- Hardware processor 904 may be a general-purpose microprocessor.
- Computer system 900 also includes a main memory 906 , such as a random access memory (RAM) or other dynamic storage device, coupled to bus 902 for storing information and instructions to be executed by processor 904 .
- Main memory 906 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 904 .
- Such instructions, when stored in one or more non-transitory storage media accessible to processor 904, render computer system 900 into a special-purpose machine that is customized to perform the operations specified in the instructions.
- Computer system 900 further includes a read only memory (ROM) 908 or other static storage device coupled to bus 902 for storing static information and instructions for processor 904 .
- a storage device 910, such as a magnetic disk or optical disk, is provided and coupled to bus 902 for storing information and instructions.
- Computer system 900 may be coupled via bus 902 to a display 912 , such as a liquid crystal display (LCD), plasma display, electronic ink display, cathode ray tube (CRT) monitor, or any other kind of device for displaying information to a computer user.
- An input device 914 may be coupled to bus 902 for communicating information and command selections to processor 904 .
- computer system 900 may receive user input via a cursor control 916 , such as a mouse, a trackball, a trackpad, or cursor direction keys for communicating direction information and command selections to processor 904 and for controlling cursor movement on display 912 .
- This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
- computer system 900 may include a touchscreen.
- Display 912 may be configured to receive user input via one or more pressure-sensitive sensors, multi-touch sensors, and/or gesture sensors.
- computer system 900 may receive user input via a microphone, video camera, and/or some other kind of user input device (not shown).
- Computer system 900 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which in combination with other components of computer system 900 causes or programs computer system 900 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 900 in response to processor 904 executing one or more sequences of one or more instructions contained in main memory 906 . Such instructions may be read into main memory 906 from another storage medium, such as storage device 910 . Execution of the sequences of instructions contained in main memory 906 causes processor 904 to perform the process steps described herein. Alternatively or additionally, hard-wired circuitry may be used in place of or in combination with software instructions.
- Non-volatile media includes, for example, optical or magnetic disks, such as storage device 910 .
- Volatile media includes dynamic memory, such as main memory 906 .
- Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape or other magnetic data storage medium, a CD-ROM or any other optical data storage medium, any physical medium with patterns of holes, a RAM, a programmable read-only memory (PROM), an erasable PROM (EPROM), a FLASH-EPROM, non-volatile random-access memory (NVRAM), any other memory chip or cartridge, content-addressable memory (CAM), and ternary content-addressable memory (TCAM).
- a storage medium is distinct from but may be used in conjunction with a transmission medium.
- Transmission media participate in transferring information between storage media. Examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise bus 902 . Transmission media may also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
- Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 904 for execution.
- the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
- the remote computer may load the instructions into its dynamic memory and send the instructions over a network, via a network interface controller (NIC), such as an Ethernet controller or Wi-Fi controller.
- NIC network interface controller
- a NIC local to computer system 900 may receive the data from the network and place the data on bus 902 .
- Bus 902 carries the data to main memory 906 , from which processor 904 retrieves and executes the instructions.
- the instructions received by main memory 906 may optionally be stored on storage device 910 either before or after execution by processor 904 .
- Computer system 900 also includes a communication interface 918 coupled to bus 902 .
- Communication interface 918 provides a two-way data communication coupling to a network link 920 that is connected to a local network 922 .
- communication interface 918 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
- ISDN integrated services digital network
- communication interface 918 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
- LAN local area network
- Wireless links may also be implemented.
- communication interface 918 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
- Network link 920 typically provides data communication through one or more networks to other data devices.
- network link 920 may provide a connection through local network 922 to a host computer 924 or to data equipment operated by an Internet Service Provider (ISP) 926 .
- ISP 926 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 928 .
- Internet 928 uses electrical, electromagnetic or optical signals that carry digital data streams.
- the signals through the various networks and the signals on network link 920 and through communication interface 918 which carry the digital data to and from computer system 900 , are example forms of transmission media.
- Computer system 900 can send messages and receive data, including program code, through the network(s), network link 920 and communication interface 918 .
- a server 930 might transmit a requested code for an application program through Internet 928 , ISP 926 , local network 922 , and communication interface 918 .
- the received code may be executed by processor 904 as it is received, and/or stored in storage device 910 , or other non-volatile storage for later execution.
- a computer network provides connectivity among a set of nodes running software that utilizes techniques as described herein.
- the nodes may be local to and/or remote from each other.
- the nodes are connected by a set of links. Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, an optical fiber, and a virtual link.
- a subset of nodes implements the computer network. Examples of such nodes include a switch, a router, a firewall, and a network address translator (NAT). Another subset of nodes uses the computer network.
- Such nodes may execute a client process and/or a server process.
- a client process makes a request for a computing service (for example, a request to execute a particular application and/or retrieve a particular set of data).
- a server process responds by executing the requested service and/or returning corresponding data.
- a computer network may be a physical network, including physical nodes connected by physical links.
- a physical node is any digital device.
- a physical node may be a function-specific hardware device. Examples of function-specific hardware devices include a hardware switch, a hardware router, a hardware firewall, and a hardware NAT.
- a physical node may be any physical resource that provides compute power to perform a task, such as one that is configured to execute various virtual machines and/or applications performing respective functions.
- a physical link is a physical medium connecting two or more physical nodes. Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, and an optical fiber.
- a computer network may be an overlay network.
- An overlay network is a logical network implemented on top of another network (for example, a physical network).
- Each node in an overlay network corresponds to a respective node in the underlying network. Accordingly, each node in an overlay network is associated with both an overlay address (to address the overlay node) and an underlay address (to address the underlay node that implements the overlay node).
- An overlay node may be a digital device and/or a software process (for example, a virtual machine, an application instance, or a thread).
- a link that connects overlay nodes may be implemented as a tunnel through the underlying network. The overlay nodes at either end of the tunnel may treat the underlying multi-hop path between them as a single logical link. Tunneling is performed through encapsulation and decapsulation.
- a client may be local to and/or remote from a computer network.
- the client may access the computer network over other computer networks, such as a private network or the Internet.
- the client may communicate requests to the computer network using a communications protocol, such as Hypertext Transfer Protocol (HTTP).
- the requests are communicated through an interface, such as a client interface (such as a web browser), a program interface, or an application programming interface (API).
- a computer network provides connectivity between clients and network resources.
- Network resources include hardware and/or software configured to execute server processes. Examples of network resources include a processor, a data storage, a virtual machine, a container, and/or a software application.
- Network resources may be shared amongst multiple clients. Clients request computing services from a computer network independently of each other. Network resources are dynamically assigned to the requests and/or clients on an on-demand basis. Network resources assigned to each request and/or client may be scaled up or down based on, for example, (a) the computing services requested by a particular client, (b) the aggregated computing services requested by a particular tenant, and/or (c) the aggregated computing services requested of the computer network.
- Such a computer network may be referred to as a “cloud network.”
- a service provider provides a cloud network to one or more end users.
- Various service models may be implemented by the cloud network, including but not limited to Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS).
- in SaaS, a service provider provides end users the capability to use the service provider's applications, which are executing on the network resources.
- in PaaS, the service provider provides end users the capability to deploy custom applications onto the network resources.
- the custom applications may be created using programming languages, libraries, services, and tools supported by the service provider.
- in IaaS, the service provider provides end users the capability to provision processing, storage, networks, and other fundamental computing resources provided by the network resources. Any applications, including an operating system, may be deployed on the network resources.
- various deployment models may be implemented by a computer network, including but not limited to a private cloud, a public cloud, and a hybrid cloud.
- in a private cloud, network resources are provisioned for exclusive use by a particular group of one or more entities (the term "entity" as used herein refers to a corporation, organization, person, or other entity).
- the network resources may be local to and/or remote from the premises of the particular group of entities.
- in a public cloud, cloud resources are provisioned for multiple entities that are independent from each other (also referred to as "tenants" or "customers").
- a computer network includes a private cloud and a public cloud. An interface between the private cloud and the public cloud allows for data and application portability.
- Data stored at the private cloud and data stored at the public cloud may be exchanged through the interface.
- Applications implemented at the private cloud and applications implemented at the public cloud may have dependencies on each other.
- a call from an application at the private cloud to an application at the public cloud (and vice versa) may be executed through the interface.
- a system supports multiple tenants.
- a tenant is a corporation, organization, enterprise, business unit, employee, or other entity that accesses a shared computing resource (for example, a computing resource shared in a public cloud).
- One tenant may be separate from another tenant.
- the computer network and the network resources thereof are accessed by clients corresponding to different tenants.
- Such a computer network may be referred to as a “multi-tenant computer network.”
- Several tenants may use a same particular network resource at different times and/or at the same time.
- the network resources may be local to and/or remote from the premises of the tenants. Different tenants may demand different network requirements for the computer network.
- Examples of network requirements include processing speed, amount of data storage, security requirements, performance requirements, throughput requirements, latency requirements, resiliency requirements, Quality of Service (QoS) requirements, tenant isolation, and/or consistency.
- the same computer network may need to implement different network requirements demanded by different tenants.
- tenant isolation is implemented to ensure that the applications and/or data of different tenants are not shared with each other.
- Various tenant isolation approaches may be used.
- each tenant is associated with a tenant ID.
- Applications implemented by the computer network are tagged with tenant IDs.
- data structures and/or datasets stored by the computer network are tagged with tenant IDs.
- a tenant is permitted access to a particular application, data structure, and/or dataset only if the tenant and the particular application, data structure, and/or dataset are associated with the same tenant ID.
- each database implemented by a multi-tenant computer network may be tagged with a tenant ID.
- Only a tenant associated with the corresponding tenant ID may access data of a particular database.
- each entry in a database implemented by a multi-tenant computer network may be tagged with a tenant ID. Only a tenant associated with the corresponding tenant ID may access data of a particular entry.
- the database may be shared by multiple tenants.
- a subscription list may indicate which tenants have authorization to access which applications. For each application, a list of tenant ID's of tenants authorized to access the application is stored. A tenant is permitted access to a particular application only if the tenant ID of the tenant is included in the subscription list corresponding to the particular application.
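The tenant-ID and subscription-list checks described above compose into a single access predicate. This is an illustrative sketch with invented identifiers:

```python
def may_access(tenant_id, resource_tenant_id, subscriptions, app_id=None):
    """Tenant-isolation check: access requires matching tenant IDs and,
    for applications, membership in the application's subscription list.
    All identifiers here are invented for illustration."""
    if tenant_id != resource_tenant_id:
        return False  # tenant IDs must match
    if app_id is not None:
        # Subscription list: tenant IDs authorized for each application.
        return tenant_id in subscriptions.get(app_id, [])
    return True

subs = {"app-1": ["t-42"]}
a = may_access("t-42", "t-42", subs, app_id="app-1")  # subscribed
b = may_access("t-42", "t-42", subs, app_id="app-2")  # not subscribed
c = may_access("t-7", "t-42", subs)                   # tenant mismatch
```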
- network resources such as digital devices, virtual machines, application instances, and threads
- packets from any source device in a tenant overlay network may only be transmitted to other devices within the same tenant overlay network.
- Encapsulation tunnels may be used to prohibit any transmissions from a source device on a tenant overlay network to devices in other tenant overlay networks.
- the packets received from the source device are encapsulated within an outer packet.
- the outer packet is transmitted from a first encapsulation tunnel endpoint (in communication with the source device in the tenant overlay network) to a second encapsulation tunnel endpoint (in communication with the destination device in the tenant overlay network).
- the second encapsulation tunnel endpoint decapsulates the outer packet to obtain the original packet transmitted by the source device.
- the original packet is transmitted from the second encapsulation tunnel endpoint to the destination device in the same particular overlay network.
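The encapsulation/decapsulation round trip can be sketched with plain dictionaries standing in for packet headers (a real tunnel would use, e.g., VXLAN or GRE encapsulation; the endpoint names below are invented):

```python
def encapsulate(packet, src_endpoint, dst_endpoint):
    """Wrap a tenant-overlay packet in an outer packet addressed between
    the two tunnel endpoints, as described in the text."""
    return {"outer_src": src_endpoint, "outer_dst": dst_endpoint,
            "payload": packet}

def decapsulate(outer_packet):
    """Recover the original packet at the far tunnel endpoint."""
    return outer_packet["payload"]

inner = {"src": "10.0.0.5", "dst": "10.0.0.9", "data": "hello"}
outer = encapsulate(inner, "vtep-a", "vtep-b")   # first tunnel endpoint
recovered = decapsulate(outer)                   # second tunnel endpoint
```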
Description
$x_{i+1} = f(x_i, u_i, w_i)$,  (1)
$y_i = h(x_i, v_i)$,  (2)
where $x_i \in \mathcal{X} \subseteq \mathbb{R}^n$.
For safety-critical applications, the states may be constrained to lie outside some collision zone $\mathcal{X}_{\text{coll}} \subseteq \mathcal{X}$. These examples exclude states that are passively safe from $\mathcal{X}_{\text{coll}}$: if the ego vehicle is static in state $x$, then $x \notin \mathcal{X}_{\text{coll}}$ and it is passively safe. Since the uncertainty introduced to the system can be unbounded, it cannot be guaranteed that the state constraint is always satisfied. Instead, a joint chance constraint may be applied to the system:
where the upper threshold of the collision probability over the T steps is denoted as α.
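A joint chance constraint bounds the probability that any step of the horizon enters the collision zone, not each step separately. The distinction can be checked by Monte Carlo, as in this minimal sketch (the patent formulates the constraint analytically as (5c); the 1-D collision zone and all numbers below are invented for illustration):

```python
import random

def joint_collision_probability(trajectories, in_collision):
    """Monte Carlo estimate of the joint chance of collision over a
    horizon: the fraction of sampled trajectories that EVER enter the
    collision zone (not a per-step marginal)."""
    hits = sum(1 for traj in trajectories
               if any(in_collision(x) for x in traj))
    return hits / len(trajectories)

random.seed(0)
# Hypothetical 1-D example: the collision zone is x >= 1.0.
trajs = [[random.gauss(0.8, 0.1) for _ in range(5)] for _ in range(10000)]
p_joint = joint_collision_probability(trajs, lambda x: x >= 1.0)
alpha = 0.5                 # illustrative upper threshold
ok = p_joint <= alpha       # does the sampled policy satisfy it?
```

Note that the joint probability exceeds any single step's marginal, which is why spreading the budget across steps matters.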
The chance-constrained POMDP (CC-POMDP) optimization problem may be formulated as:
such that:
where $c(b_i, \pi_i)$ and $c_f(b_T)$ are the step-wise cost function and the final cost function in terms of the belief state:
In an embodiment, the joint chance constraint (5c) enables non-conservative behavior by allowing non-uniform allocation of risk across steps.
For CC-POMDPs, it can be shown that this approximation is safety-preserving: a feasible solution to the CC-UMDP is a feasible (but suboptimal) solution to the original CC-POMDP. Because the UMDP is fully open-loop, it is a very conservative approximation; the predicted open-loop belief states can have high covariance (e.g., imagine navigating a dynamic environment with closed eyes). A "closed-loop covariance" may be introduced to account for future observations; in Partially Closed-Loop Receding Horizon Control (PCL-RHC), the future open-loop belief state $b_i^{OL}$ is replaced with the future partially closed-loop belief state $b_i^{PCL}$, which assumes that the most likely observation is made after the current step $k$, such that:
$b_{i+1}^{PCL}(x_{i+1}) = p(x_{i+1} \mid b_0, u_{0:i}, y_{1:k}, y_{k+1:i}^{ML})$,  (9)
where $y^{ML}$ is the maximum-likelihood predicted observation. (In this example, the definition of $b_i^{CL}$ does not condition on $y_i^{ML}$.) Examples discussed below include an algorithm that can use (9) to plan non-conservatively while preserving safety guarantees. In addition, the following discussion demonstrates that a baseline that uses (8) produces more conservative behavior.
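For a linear-Gaussian system, assuming the maximum-likelihood observation in (9) leaves the predicted mean unchanged (the innovation is zero) while shrinking the covariance by the usual Kalman update. A sketch of that idea with hypothetical matrices (the patent is not limited to linear-Gaussian models):

```python
import numpy as np

def kalman_predict(mu, Sigma, A, Q):
    """Open-loop prediction: the covariance grows by the process noise."""
    return A @ mu, A @ Sigma @ A.T + Q

def pcl_update(mu, Sigma, H, R):
    """Partially closed-loop step: assume the maximum-likelihood
    observation will be made. In the linear-Gaussian case that
    observation equals the predicted one, so the innovation is zero:
    the mean is unchanged while the covariance shrinks by the
    standard Kalman update."""
    S = H @ Sigma @ H.T + R
    K = Sigma @ H.T @ np.linalg.inv(S)
    return mu, (np.eye(len(mu)) - K @ H) @ Sigma

A, Q = np.eye(2), 0.1 * np.eye(2)
H, R = np.eye(2), 0.2 * np.eye(2)
mu0, Sigma0 = np.zeros(2), np.eye(2)
mu_ol, Sigma_ol = kalman_predict(mu0, Sigma0, A, Q)    # open-loop belief
mu_pcl, Sigma_pcl = pcl_update(mu_ol, Sigma_ol, H, R)  # partially closed-loop
```

The partially closed-loop covariance is strictly smaller than the open-loop one, which is the source of the reduced conservatism.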
$\Pr(y_1 = \text{``crash''}) = 0.1$
$\Pr(y_1 = \text{``safe''}, y_2 = \text{``crash''}) = 0.9 \cdot 0.1 = 0.09$
$\Pr(y_1 = \text{``safe''}, y_2 = \text{``safe''}) = 0.9 \cdot 0.9 = 0.81$.
The overall probability of crashing for this RHC policy is $0.1 + 0.09 = 0.19$. Therefore, RHC with a joint chance constraint of $\beta$, which allocates $\alpha/T$ risk for each step of the planning horizon, does not satisfy the desired chance constraint $\alpha = 0.1$, nor the episodic risk bound with $\rho_0 = 0$ and $\delta = \beta/N = \alpha/T$ on the interval $[0, T = 2]$, which is $\delta T = \alpha = 0.1$.
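The counterexample arithmetic above can be reproduced directly: a receding-horizon policy that re-spends the same per-step risk at every replan accumulates risk across replans.

```python
def rhc_crash_probability(per_step_risk, horizon):
    """Overall crash probability of a receding-horizon policy that
    re-spends the same per-step risk at every replan: at each step the
    policy crashes with per_step_risk conditioned on having survived."""
    p_safe, p_crash = 1.0, 0.0
    for _ in range(horizon):
        p_crash += p_safe * per_step_risk  # crash now, having survived
        p_safe *= 1.0 - per_step_risk      # survive this step too
    return p_crash

p = rhc_crash_probability(0.1, 2)  # 0.1 + 0.9 * 0.1 = 0.19 > alpha = 0.1
```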
- $f_b^{stop}(b, \tau)$ may be defined as a function that returns the belief state after taking $u_{stop}$ for $\tau$ steps from a belief state $b$ without any observation received. It satisfies $f_b^{stop}(b, \tau) = f_b^{OL}(f_b^{stop}(b, \tau - 1), u_{stop})$.
- $g_b(b)$ may be defined as the probability of collision of a belief state $b$: $g_b(b) = \sum_{x \in \mathcal{X}_{\text{coll}}} b(x)$.
- $g_b^{stop}(b)$ may be defined as the probability of collision during the emergency stop starting from a belief state $b$: $g_b^{stop}(b) = \sum_{\tau=1}^{t_{stop}} g_b(f_b^{stop}(b, \tau))$.
such that:
where $b$ is the partially closed-loop belief state defined in (9), which should have covariance less than or equal to that of $b^{OL}$, and $\rho_k$ is the rolling risk budget at step $k$. The belief update notation $f_{\tilde{b}}$ in (12b) denotes that this algorithm can use approximate belief-updating heuristics after step $k+1$ (such as partially closed-loop belief updating or sampling) to reduce the conservatism of purely open-loop belief updating, without sacrificing the safety guarantees that open-loop updating provides.
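The emergency-stop risk $g_b^{stop}$ defined earlier amounts to rolling the belief forward under the stop action and accumulating per-step collision probability. A sketch over an abstract belief type; the 1-D belief, braking model, and collision test are invented for illustration:

```python
def stop_collision_probability(b, f_stop_step, g_b, t_stop):
    """g_b^stop(b): probability of collision during an emergency stop,
    accumulated over the t_stop steps of the stop maneuver using the
    open-loop one-step update f_stop_step (u_stop, no observations)."""
    total = 0.0
    for _ in range(t_stop):
        b = f_stop_step(b)  # f_b^stop(b, tau) = f_b^OL(f_b^stop(b, tau-1), u_stop)
        total += g_b(b)     # add this step's collision probability
    return total

# Hypothetical 1-D belief: mean position and speed, braking 1 m/s per step.
def brake_step(b):
    return {"mu": b["mu"] + b["v"], "v": max(0.0, b["v"] - 1.0)}

# Hypothetical collision model: small risk once past position 5.
risk = stop_collision_probability(
    {"mu": 0.0, "v": 3.0}, brake_step,
    lambda b: 0.01 if b["mu"] > 5.0 else 0.0, t_stop=4)
```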
- 1. The probability of collision at step $k+1$: $\Pr(x_{k+1} \in \mathcal{X}_{\text{coll}}) = g_b(f_b^{OL}(b, u^*_k))$.
- 2. The probability of collision from step $k+2$ to $T$ in the event that (12) is solvable with $b_{k+1}$ and $\rho_{l=k+1}$ at step $k+1$:
- where $\mathcal{B}_{k+1}^{\text{solv}} \subseteq \mathcal{B}$ denotes the set of $b_{k+1}$ such that (12) is solvable with input belief state $b_{k+1}$ and rolling risk budget $\rho_{l=k+1}$. The inductive hypothesis is used here so that $\rho' + \delta(T-k)$ bounds the probability of collision from step $k+2$ to $T$ if (12) is solvable at step $k+1$.
- 3. The probability of collision from step $k+2$ to $T$ in the event that (12) is unsolvable with $b_{k+1}$ and $\rho_{l=k+1}$ at step $k+1$, which is bounded by the sum of:
- the probability of collision during the emergency stop: $g_b^{stop}(f_b^{OL}(b, u^*_k))$;
- the probability of collision in the remaining steps after (12) becomes solvable again: $(\rho' + \delta(T-k)) \cdot \Pr(b_{k+1} \notin \mathcal{B}_{k+1}^{\text{solv}})$, so that:
$\Pr\!\big(b_{k+1} \notin \mathcal{B}_{k+1}^{\text{solv}},\; \textstyle\bigcup_{i=k+2}^{T}(x_i \in \mathcal{X}_{\text{coll}})\big) \le (\rho' + \delta(T-k)) \cdot \Pr(b_{k+1} \notin \mathcal{B}_{k+1}^{\text{solv}}) + g_b^{stop}(f_b^{OL}(b, u^*_k))$
$\rho \ge g_b(b_T) + g_b^{stop}(b_T) \ge g(b_T) = \Pr(x_T \in \mathcal{X}_{\text{coll}})$.
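One way to read the episodic bound is as simple budget bookkeeping: the rolling budget accrues $\delta$ of fresh allowance per step and is charged the risk actually taken. The sketch below illustrates that mechanism only; it is not the patented algorithm, and the numbers are invented.

```python
def update_rolling_budget(rho, delta, risk_spent):
    """One step of rolling risk-budget bookkeeping: the budget accrues
    delta of fresh allowance per step and is charged the risk actually
    taken. Keeping every step's spend within the available budget is
    what yields an episodic bound of roughly rho_0 + delta * T over
    the interval [0, T]."""
    assert risk_spent <= rho + delta, "plan exceeds available risk budget"
    return rho + delta - risk_spent

rho = 0.0                            # rho_0
for r in [0.002, 0.0, 0.01, 0.001]:  # risk spent per executed step
    rho = update_rolling_budget(rho, delta=0.005, risk_spent=r)
```

Unspent allowance carries forward, which is what lets the planner take more risk at genuinely risky moments without exceeding the episodic bound.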
$b(d_j) = \sum_{k=1}^{M} \phi_k\, \mathcal{N}(\mu_{d_j,k}, \Sigma_{d_j,k})$,
where each mixture component represents a motion pattern. For each dynamic obstacle, one may use an RR-GP prediction model as its belief state transition model. RR-GP provides a mixture of Gaussian predictions that satisfy dynamic and environmental constraints. At step $k$, the RR-GP model for the $j$th dynamic obstacle provides $b_i^{PCL}(d_j)$ and $b_i^{OL}(d_j)$ for $i = k+1, \ldots, N$.
$0.5 + 0.5 \cdot \operatorname{erf}\!\left(-d \big/ \sqrt{2(\sigma_{x_1}^2 + \sigma_{x_2}^2)}\right)$,
where the first object has radius $r_1$ and is centered at $x_1 \sim \mathcal{N}(\mu_{x_1}, \sigma_{x_1}^2)$, and the second object has radius $r_2$ and is centered at $x_2 \sim \mathcal{N}(\mu_{x_2}, \sigma_{x_2}^2)$.
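The erf expression can be evaluated directly. In this sketch, $d$ is taken to be the center distance minus the two radii, and the combined-variance term under the square root is an assumption, since that part of the formula is garbled in the source text.

```python
import math

def collision_probability_1d(mu1, var1, r1, mu2, var2, r2):
    """Evaluate 0.5 + 0.5*erf(-d / sqrt(2*(var1 + var2))) for two
    circular objects with Gaussian-distributed centers, where d is
    the center distance minus the two radii (an assumption here)."""
    d = abs(mu2 - mu1) - (r1 + r2)
    return 0.5 + 0.5 * math.erf(-d / math.sqrt(2.0 * (var1 + var2)))

p_touching = collision_probability_1d(0.0, 0.5, 1.0, 2.0, 0.5, 1.0)  # gap d = 0
p_far = collision_probability_1d(0.0, 0.5, 1.0, 10.0, 0.5, 1.0)      # gap d = 8
```

At zero gap the probability is exactly 0.5, and it decays rapidly as the gap grows relative to the combined uncertainty.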
If the optimal path satisfies (12c), the interval $[\lambda_L, \lambda_U]$ for the next iteration will be the lower half of it; otherwise, the upper half is used. This bisection search converges to the locally optimal $\lambda$ and the locally optimal solution to (12).
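The bisection over the risk multiplier $\lambda$ can be sketched as follows; `solve_with` and `satisfies_constraint` are hypothetical callbacks standing in for the path optimizer and constraint (12c), and the toy stand-ins below are invented for illustration.

```python
def bisect_risk_multiplier(solve_with, satisfies_constraint,
                           lam_lo, lam_hi, iters=30):
    """Bisection on the risk-penalty multiplier lambda: if the optimal
    path for the current lambda satisfies the risk constraint, keep the
    lower half (smaller penalty, lower cost); otherwise keep the upper
    half. Returns the last feasible path and the converged lambda."""
    best = None
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        path = solve_with(lam)
        if satisfies_constraint(path):
            best, lam_hi = path, lam  # feasible: try a smaller penalty
        else:
            lam_lo = lam              # infeasible: penalize risk more
    return best, 0.5 * (lam_lo + lam_hi)

# Toy stand-in: the "path" is its risk level, which falls as lambda grows.
solve = lambda lam: 1.0 / (1.0 + lam)
feasible = lambda risk: risk <= 0.1   # stand-in for constraint (12c)
path_risk, lam_star = bisect_risk_multiplier(solve, feasible, 0.0, 100.0)
```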
TABLE 1: Simulation results, Example 1, $\alpha_0 = 0.01$

| Algorithm | Collision Rate | Navigation Cost |
| --- | --- | --- |
| JCC-FH | 0.007 ± 0.003 | 24.678 ± 0.037 |
| JCC-RHC | 0.001 ± 0.001 | 24.177 ± 0.048 |
| PCL-RHC | 0.015 ± 0.004 | 23.866 ± 0.086 |
| RB-RHC | 0.010 ± 0.003 | 24.021 ± 0.081 |
In this example, only PCL-RHC violates the overall chance constraint, and RB-RHC is shown to practically satisfy the chance constraint. The fact that the failure rate of RB-RHC exactly matches $\alpha_0$ suggests that RB-RHC takes the most risk it is allowed to in order to incur the least cost. The failure rate of JCC-RHC is considerably less than $\alpha_0$ because, due to its conservative nature, JCC-RHC is often willing to take less risk in practice. Note that, in theory, JCC-RHC can violate the overall chance constraint when multiple risky events are introduced, as shown in the Racing Counterexample discussed above, but this example includes only one risky event.
TABLE 2: Simulation results, Example 2, $\alpha_0 = 0.00001$

| Algorithm | Collision Rate | Navigation Cost |
| --- | --- | --- |
| JCC-FH | 0.0 ± 0.0 | 27.178 ± 0.102 |
| JCC-RHC | 0.0 ± 0.0 | 20.829 ± 0.070 |
| PCL-RHC | 0.0 ± 0.0 | 19.203 ± 0.056 |
| RB-RHC | 0.0 ± 0.0 | 19.157 ± 0.057 |
Due to the small $\alpha_0$, no collisions are observed in any of the 1000 trials of any algorithm. In this example, the cost of RB-RHC is 8.5% less than the cost of JCC-RHC, which suggests that RB-RHC is less conservative than JCC-RHC. PCL-RHC and RB-RHC have similar costs, and JCC-FH is substantially more costly than all of the algorithms with replanning.
Claims (45)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/514,197 US11987269B2 (en) | 2020-10-30 | 2021-10-29 | Safe non-conservative planning for autonomous vehicles |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063107958P | 2020-10-30 | 2020-10-30 | |
US17/514,197 US11987269B2 (en) | 2020-10-30 | 2021-10-29 | Safe non-conservative planning for autonomous vehicles |
Publications (2)
Publication Number | Publication Date |
---|---|
US20220135076A1 (en) | 2022-05-05
US11987269B2 (en) | 2024-05-21
Family
ID=81379811
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/514,197 Active 2042-05-12 US11987269B2 (en) | 2020-10-30 | 2021-10-29 | Safe non-conservative planning for autonomous vehicles |
Country Status (1)
Country | Link |
---|---|
US (1) | US11987269B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230322270A1 (en) * | 2022-04-08 | 2023-10-12 | Motional Ad Llc | Tracker Position Updates for Vehicle Trajectory Generation |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180208195A1 (en) * | 2017-01-20 | 2018-07-26 | Pcms Holdings, Inc. | Collaborative risk controller for vehicles using v2v |
US11441916B1 (en) * | 2016-01-22 | 2022-09-13 | State Farm Mutual Automobile Insurance Company | Autonomous vehicle trip routing |
Legal Events
| Code | Title | Description |
| --- | --- | --- |
| FEPP | Fee payment procedure | Entity status set to undiscounted (original event code: BIG.); entity status of patent owner: small entity |
| AS | Assignment | Owner name: ISEE, MASSACHUSETTS. Free format text: assignment of assignors interest; assignors: BAKER, CHRIS L.; HUANG, HUNG-JUI; ZHAO, YIBIAO; and others; signing dates from 20201112 to 20201130; reel/frame: 057990/0294 |
| FEPP | Fee payment procedure | Entity status set to small (original event code: SMAL); entity status of patent owner: small entity |
| STPP | Information on status: patent application and granting procedure in general | Docketed new case - ready for examination |
| STPP | Information on status: patent application and granting procedure in general | Non-final action mailed |
| STPP | Information on status: patent application and granting procedure in general | Response to non-final office action entered and forwarded to examiner |
| STPP | Information on status: patent application and granting procedure in general | Notice of allowance mailed - application received in Office of Publications |
| STPP | Information on status: patent application and granting procedure in general | Publications - issue fee payment verified |
| STCF | Information on status: patent grant | Patented case |